Important: Review skipped. Auto reviews are disabled on base/target branches other than the default branch. Please check the settings in the CodeRabbit UI or the ⚙️ Run configuration.
Actionable comments posted: 26
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@mlm_meta/Cargo.toml`:
- Around line 9-12: The Cargo manifest currently pulls both native TLS and
rustls; fix this by switching the reqwest dependency to disable default features
and enable rustls-only, and remove or stop using the vendored openssl
dependency: change the reqwest entry to reqwest = { version = "0.11",
default-features = false, features = ["json","gzip","rustls-tls"] } and delete
(or mark optional/unused) the openssl = { version = "0.10.73", features =
["vendored"] } line so only the rustls TLS stack is included.
In `@mlm_meta/README.md`:
- Around line 4-6: The README text referencing example providers is out of date:
update the brief crate description that mentions "Goodreads" to instead list the
actual implemented providers (Hardcover, OpenLibrary, RomanceIo) and ensure any
example usages align with the Provider trait and helper types defined in this
crate (e.g., references to Provider, Metadata, and provider-specific types);
replace the old provider name and adjust example snippets to show usage for
Hardcover/OpenLibrary/RomanceIo so new contributors see correct, current
examples.
In `@mlm_meta/src/helpers.rs`:
- Around line 87-90: The code currently calls normalize_authors(q_auths) inside
the per-candidate scoring loop (see q_auths and normalize_authors) causing
repeated allocations; refactor so q_auths is normalized once before iterating
candidates and then pass the precomputed q_auths_norm into the per-candidate
scorer (e.g., change score_candidate to accept a &q_auths_norm or provide an
overload/use closure capturing q_auths_norm) and remove the normalize_authors
call from within score_candidate to avoid recomputation.
In `@mlm_meta/src/http.rs`:
- Around line 60-67: The Client::builder().build().unwrap() can panic; modify
the constructor (the new() function that creates Self) to return Result<Self,
reqwest::Error> (or a suitable error type) and replace unwrap() with the
fallible build()? call: let client = Client::builder()...build()?; then return
Ok(Self { client }); ensuring callers handle the propagated error instead of
allowing a panic from unwrap().
- Around line 79-82: The get function currently returns raw reqwest errors
without URL context; update async fn get(&self, url: &str) -> Result<String> to
attach context to the network calls (the self.client.get(url).send().await
and/or the .text().await) so failures include which URL failed—for example, use
anyhow::Context (or map_err with a closure) to .with_context(|| format!("failed
to send GET request to {}", url)) and .with_context(|| format!("failed to read
response body from {}", url)) around the await points; reference the get
function and the self.client.get(url).send().await and .text().await calls when
applying the change.
In `@mlm_meta/src/providers/hardcover.rs`:
- Around line 66-83: The code silently ignores GraphQL errors because
parse_results(&self, v: &serde_json::Value) proceeds straight to extracting
hits; update the flow to explicitly check v.get("errors") (and if it's a
non-empty array or object) and surface that as an error instead of returning
empty results—either by changing parse_results to return
Result<Vec<serde_json::Value>, anyhow::Error> and returning an
anyhow::anyhow!("GraphQL errors: {}", errors) when present, or by adding the
check immediately after the JSON parse function (the function that returns
serde_json::Value) and propagating a descriptive error that includes the errors
payload so callers don't silently continue.
In `@mlm_meta/src/providers/openlibrary.rs`:
- Around line 102-105: The current conversion of edition_count uses an unchecked
cast that can wrap invalid i64 values; change the mapping on the
result.get("edition_count") chain to perform a checked conversion via
u32::try_from(e).ok() (or e.try_into().ok()) instead of map(|e| e as u32) so
negative or out‑of‑range i64 values yield None; update the code around the
edition_count binding (the get("edition_count")/.and_then(|e| e.as_i64())
mapping) to use the checked conversion.
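A minimal illustration of the checked conversion, with the surrounding serde_json plumbing omitted (the helper name is invented for the example):

```rust
// Checked i64 -> u32 conversion: negative or out-of-range values become None
// instead of silently wrapping, which is the behavior the review asks for.
fn edition_count_from_i64(raw: Option<i64>) -> Option<u32> {
    // `e as u32` would wrap -1 to 4294967295; u32::try_from rejects it.
    raw.and_then(|e| u32::try_from(e).ok())
}

fn main() {
    assert_eq!(edition_count_from_i64(Some(7)), Some(7));
    assert_eq!(edition_count_from_i64(Some(-1)), None);
    assert_eq!(edition_count_from_i64(Some(i64::MAX)), None);
    assert_eq!(edition_count_from_i64(None), None);
    println!("checked conversion behaves as expected");
}
```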
In `@mlm_meta/src/providers/romanceio.rs`:
- Around line 54-67: The authors extraction only handles when book.get("author")
is an array, so single-object authors are ignored; update the authors
construction in romanceio.rs (the let authors: Vec<String> = ... block) to
accept both an array and a single object by checking book.get("author") and if
it is an array iterate as before, otherwise if it is an object read its "name"
field and push it into the Vec<String>; ensure the same name-extraction logic
(get("name").and_then(|n| n.as_str()).map(|s| s.to_string())) is reused and keep
the unwrap_or_default() fallback so downstream fetch() author verification still
runs.
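A self-contained sketch of the array-or-object handling. `Json` here is a tiny stand-in for `serde_json::Value` so the example compiles on its own; the real code would match on `Value::Array` / `Value::Object` with the same reused name-extraction closure:

```rust
// Minimal stand-in for serde_json::Value, just enough for the example.
#[derive(Clone)]
enum Json {
    Str(String),
    Arr(Vec<Json>),
    Obj(Vec<(String, Json)>),
}

impl Json {
    fn get(&self, key: &str) -> Option<&Json> {
        match self {
            Json::Obj(fields) => fields.iter().find(|(k, _)| k == key).map(|(_, v)| v),
            _ => None,
        }
    }
    fn as_str(&self) -> Option<&str> {
        match self {
            Json::Str(s) => Some(s),
            _ => None,
        }
    }
}

// Accepts both an array of author objects and a single author object,
// reusing one name-extraction closure for both shapes.
fn extract_authors(author: Option<&Json>) -> Vec<String> {
    let name_of = |v: &Json| v.get("name").and_then(|n| n.as_str()).map(|s| s.to_string());
    match author {
        Some(Json::Arr(items)) => items.iter().filter_map(name_of).collect(),
        Some(obj @ Json::Obj(_)) => name_of(obj).into_iter().collect(),
        _ => Vec::new(), // same empty fallback as unwrap_or_default()
    }
}

fn main() {
    let single = Json::Obj(vec![("name".to_string(), Json::Str("A. Author".to_string()))]);
    let many = Json::Arr(vec![single.clone()]);
    assert_eq!(extract_authors(Some(&single)), vec!["A. Author".to_string()]);
    assert_eq!(extract_authors(Some(&many)), vec!["A. Author".to_string()]);
    assert!(extract_authors(None).is_empty());
    println!("author extraction handles both shapes");
}
```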
In `@mlm_meta/tests/hardcover_tests.rs`:
- Around line 12-14: The MockClient uses std::sync::Mutex for its resps field
which can block async runtimes; change resps to tokio::sync::Mutex<Vec<String>>
(and add the tokio::sync::Mutex import) and update any places that acquire the
lock (e.g., where MockClient::resps.lock() is called) to use .lock().await so
locking is async-friendly and non-blocking; ensure signatures using the lock
become async where necessary (e.g., test helpers or MockClient methods that
await the mutex).
- Around line 36-40: The mock currently returns Ok(String::new()) when
self.resps is empty, which silently hides unexpected calls; change the method
that accesses self.resps (the block using let mut guard =
self.resps.lock().unwrap(), guard.is_empty(), and guard.remove(0)) to return an
Err (e.g., a descriptive error via anyhow::Error or a custom error type) instead
of Ok("") when guard.is_empty() so tests will fail on unexpected extra calls;
update the method's signature to return Result<String, E> if needed and adjust
any callers in tests to handle or expect the error.
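A sketch of the fail-loudly behavior. It uses `std::sync::Mutex` and a plain `String` error for brevity; per the adjacent comments, the real async tests would hold a `tokio::sync::Mutex` (with `.lock().await`) and return an `anyhow::Error`:

```rust
use std::sync::Mutex;

struct MockClient {
    resps: Mutex<Vec<String>>,
}

impl MockClient {
    fn new(resps: Vec<String>) -> Self {
        Self { resps: Mutex::new(resps) }
    }

    fn get(&self, url: &str) -> Result<String, String> {
        let mut guard = self.resps.lock().unwrap();
        if guard.is_empty() {
            // Surfacing the unexpected call makes the test fail here, rather
            // than feeding an empty body downstream and passing silently.
            return Err(format!("unexpected extra request to {url}"));
        }
        Ok(guard.remove(0))
    }
}

fn main() {
    let mock = MockClient::new(vec!["body-1".to_string()]);
    assert_eq!(mock.get("https://example.test/a"), Ok("body-1".to_string()));
    assert!(mock.get("https://example.test/b").is_err());
    println!("mock errors once its canned responses run out");
}
```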
In `@mlm_meta/tests/mock_openlibrary.rs`:
- Around line 38-40: The test helper currently performs blocking I/O by calling
std::fs::read_to_string inside an async context (the call site around
resolve_plan_file and the surrounding function in tests/mock_openlibrary.rs);
replace the blocking read with an async-aware operation (e.g., use
tokio::fs::read_to_string or tokio::task::spawn_blocking) and adjust the
surrounding function signature to async if needed so the await can be used,
preserving the existing error mapping (map_err(|e| anyhow::anyhow!(e))) and
return type; ensure the call sites that use resolve_plan_file are updated to
await the result if you make it async.
In `@mlm_meta/tests/openlibrary_tests.rs`:
- Around line 90-118: The test openlibrary_no_results is too permissive: tighten
EmptyClient (impl HttpClient) to validate the incoming GET URL expected by
OpenLibrary::with_client and return a distinct error for unexpected URLs, then
call prov.fetch(&query_meta).await and assert the returned Err is specifically
the no-results case (e.g., match the error message/variant produced by
OpenLibrary when numFound == 0) rather than any error; update assertions to
check the error text/variant and ensure post() still bails for unexpected calls
so unrelated parser/client failures fail the test.
In `@mlm_meta/tests/scoring_tests.rs`:
- Around line 46-49: The test contains a duplicate assertion for meta.title;
remove the redundant assert!(!meta.title.is_empty()) so only one assertion
checks that meta.title is not empty and retain the subsequent
assert!(!meta.authors.is_empty()) to verify authors; update the test in
scoring_tests.rs where the variable meta is asserted.
In `@server/src/config.rs`:
- Around line 22-30: HardcoverConfig currently derives Debug and can leak the
api_key when logged; remove Debug from the derive list and implement a custom
fmt::Debug for HardcoverConfig that prints all fields as before but redacts
api_key (e.g., display "***REDACTED***" or None), keeping Clone and Deserialize
derives and the existing serde attributes; update the impl to reference the
struct HardcoverConfig and its api_key field so any "{:?}" output never exposes
the actual key.
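A sketch of the redacting `Debug` impl the comment describes. The `enabled` field is invented for illustration, and the real struct would keep its `Clone`/`Deserialize` derives and serde attributes unchanged:

```rust
use std::fmt;

#[derive(Clone)]
struct HardcoverConfig {
    api_key: Option<String>,
    enabled: bool, // hypothetical extra field, shown to illustrate pass-through
}

// Hand-written Debug: every field prints as before except api_key, which is
// masked so "{:?}" output can never leak the real key into logs.
impl fmt::Debug for HardcoverConfig {
    fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
        f.debug_struct("HardcoverConfig")
            .field("api_key", &self.api_key.as_ref().map(|_| "***REDACTED***"))
            .field("enabled", &self.enabled)
            .finish()
    }
}

fn main() {
    let cfg = HardcoverConfig {
        api_key: Some("super-secret".to_string()),
        enabled: true,
    };
    let printed = format!("{:?}", cfg);
    assert!(!printed.contains("super-secret"));
    assert!(printed.contains("***REDACTED***"));
    println!("{printed}");
}
```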
In `@server/src/linker/common.rs`:
- Around line 212-219: Extract the duplicated test helper by moving the struct F
and its HasFileName impl into the outer test module: add struct F { name: String
} and impl HasFileName for F { fn name_lower(&self) -> String { ... } } once
inside mod tests (keeping the cfg(test) scope and use super::*), then remove the
local struct F and impl blocks from each individual test function so all six
tests reuse the shared helper.
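The shared helper could be as small as this. The `name_lower` body is inferred from the comment's signature, and in the real file the struct and impl would sit once inside `mod tests` (under `cfg(test)`, with `use super::*;`):

```rust
trait HasFileName {
    fn name_lower(&self) -> String;
}

// Defined once and reused by every test, instead of redeclared per function.
struct F {
    name: String,
}

impl HasFileName for F {
    fn name_lower(&self) -> String {
        self.name.to_lowercase()
    }
}

fn main() {
    // Each test just constructs an F with the filename it needs.
    let f = F { name: "My.Book.M4B".to_string() };
    assert_eq!(f.name_lower(), "my.book.m4b");
    println!("{}", f.name_lower());
}
```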
In `@server/src/linker/duplicates.rs`:
- Around line 157-160: Replace the manual temp-dir creation and cleanup that
uses std::env::temp_dir(), std::process::id(), fs::remove_dir_all and
fs::create_dir_all with a tempfile::TempDir to guarantee unique, automatically
cleaned temporary directories; specifically, change the code that assigns and
uses tmp_dir (and the similar block at the later occurrence around lines
182-183) to create a TempDir via tempfile::TempDir::new() or tempfile::Builder,
use TempDir.path() where the current tmp_dir is referenced, and remove the
explicit fs::remove_dir_all/fs::create_dir_all calls so the TempDir drops and
cleans up automatically even on early returns or panics.
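The suggested fix is `tempfile::TempDir`. To illustrate the RAII-cleanup idea it relies on using only the standard library, a hypothetical guard type might look like this (the real crate additionally guarantees unique directory names):

```rust
use std::fs;
use std::path::{Path, PathBuf};

struct TempDirGuard {
    path: PathBuf,
}

impl TempDirGuard {
    fn new(label: &str) -> std::io::Result<Self> {
        let path = std::env::temp_dir().join(format!("{label}-{}", std::process::id()));
        fs::create_dir_all(&path)?;
        Ok(Self { path })
    }
    fn path(&self) -> &Path {
        &self.path
    }
}

impl Drop for TempDirGuard {
    fn drop(&mut self) {
        // Best-effort cleanup on drop, so the directory is removed even on
        // early returns or panics (during unwinding) -- no explicit
        // fs::remove_dir_all calls at every exit path.
        let _ = fs::remove_dir_all(&self.path);
    }
}

fn main() -> std::io::Result<()> {
    let kept_path;
    {
        let tmp = TempDirGuard::new("mlm-dup-test")?;
        kept_path = tmp.path().to_path_buf();
        assert!(kept_path.exists());
    } // guard dropped here -> directory removed
    assert!(!kept_path.exists());
    println!("temp dir cleaned up on drop");
    Ok(())
}
```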
- Around line 120-124: The test helper create_test_config() relies on
Config::default() ordering for audio preference, which couples
test_rank_torrents_preference to default_audio_types(); explicitly set
Config.audio_types in create_test_config() (e.g., ["m4b", "mp3"]) so the test
asserts ranking logic, not default preferences—update the create_test_config()
function to populate audio_types accordingly while keeping mam_id and the rest
via Default::default().
In `@server/src/linker/torrent.rs`:
- Around line 720-721: Locate the inline comment that reads "Removed temporary
debug prints that were used during investigation." in
server/src/linker/torrent.rs and either delete it (preferred) or replace it with
a brief, specific note explaining the investigation and any relevant findings or
rationale for removing the prints so future readers understand the context;
ensure the updated comment is concise and actionable or remove it entirely if it
adds no value.
In `@server/src/main.rs`:
- Line 214: The hard-coded metadata timeout set via let default_timeout =
Duration::from_secs(5); should be replaced with a config-backed value; read the
timeout from your configuration (e.g., a settings struct or env var) and parse
it into a Duration (falling back to a sensible default if not provided), then
assign that to default_timeout instead of calling Duration::from_secs(5); update
any config struct or loader used in main (or the Settings/Config type) so the
key (e.g., metadata_timeout_secs or metadata_timeout) is present and documented.
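A minimal sketch of a config-backed timeout, assuming a hypothetical `MLM_METADATA_TIMEOUT_SECS` environment variable; the actual key name and config-struct plumbing are up to the project:

```rust
use std::time::Duration;

// Parse an optional raw setting into a Duration, falling back to 5s when the
// value is absent or not a valid number of seconds.
fn metadata_timeout_from(raw: Option<&str>) -> Duration {
    raw.and_then(|s| s.trim().parse::<u64>().ok())
        .map(Duration::from_secs)
        .unwrap_or(Duration::from_secs(5)) // sensible default
}

fn main() {
    // Env-var source shown for illustration; a config-file field parsed into
    // u64 seconds would feed the same helper.
    let from_env = std::env::var("MLM_METADATA_TIMEOUT_SECS").ok();
    let default_timeout = metadata_timeout_from(from_env.as_deref());
    println!("metadata timeout: {default_timeout:?}");

    assert_eq!(metadata_timeout_from(Some("30")), Duration::from_secs(30));
    assert_eq!(metadata_timeout_from(Some("not a number")), Duration::from_secs(5));
    assert_eq!(metadata_timeout_from(None), Duration::from_secs(5));
}
```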
In `@server/src/metadata/mod.rs`:
- Around line 105-136: The current code awaits provider JoinHandles in insertion
order (the for h in handles loop) which blocks on slower providers and leaves
remaining tasks running after break; replace the Vec<JoinHandle> approach with a
tokio::task::JoinSet to spawn the provider tasks (wrapping timeout(p.fetch(&q))
as before), then use join_next() to await tasks as they complete and pick the
first Ok(Ok((id, meta))) result; once a successful result is found call
join_set.abort_all() to cancel remaining tasks and avoid detached background
work, and handle join_next() errors similarly to the existing tracing::debug
paths so timeouts and panics are logged.
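The review's fix uses `tokio::task::JoinSet` with `join_next()` and `abort_all()`. As a self-contained illustration of the same race-to-first-success pattern, here is a std-thread sketch (delays and names are invented; unlike `abort_all()`, the slower threads here simply run to completion and their results are discarded):

```rust
use std::sync::mpsc;
use std::thread;
use std::time::Duration;

// Spawn one worker per provider; the consumer takes results in COMPLETION
// order and returns at the first Ok, instead of awaiting handles in spawn
// order and blocking on slow providers.
fn first_success(providers: Vec<(u64, Result<String, String>)>) -> Option<String> {
    let (tx, rx) = mpsc::channel();
    for (delay_ms, result) in providers {
        let tx = tx.clone();
        thread::spawn(move || {
            thread::sleep(Duration::from_millis(delay_ms));
            let _ = tx.send(result); // receiver may already be gone
        });
    }
    drop(tx); // so rx.iter() terminates once all senders finish
    rx.iter().find_map(|r| r.ok())
}

fn main() {
    let providers = vec![
        (50, Err("slow provider failed".to_string())),
        (10, Ok("fast provider metadata".to_string())),
        (100, Ok("slowest provider metadata".to_string())),
    ];
    let winner = first_success(providers);
    assert_eq!(winner.as_deref(), Some("fast provider metadata"));
    println!("picked: {winner:?}");
}
```

With `JoinSet`, the same shape becomes `while let Some(res) = join_set.join_next().await { ... }` followed by `join_set.abort_all()` once a winner is found, which also cancels the still-running tasks.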
In `@server/src/web/pages/torrent.rs`:
- Around line 391-393: The Err(e) arm that currently only logs "metadata match
failed for provider {}: {}" must not swallow the failure: in the handler around
the metadata match/provider fetch (the Err branch referencing provider_id and e
in torrent.rs) either propagate the error (return Err(...) from the surrounding
function) or produce an appropriate HTTP error response/redirect with an error
message so callers/users see the failure instead of continuing with a success
redirect; update the Err(e) branch to return or set a failure response (e.g.,
return Err(anyhow::Error::from(e)) or construct an HTTP error/flash and avoid
the success redirect) and ensure any upstream call sites handle that error.
In `@server/templates/pages/torrent.html`:
- Around line 42-45: The handler currently logs errors from fetch_provider()
(and related timeout/validation errors) then continues to return Ok(Redirect),
hiding failures; change the code that calls fetch_provider() (and the metadata
matching flow that currently logs at line 392 and returns Ok(Redirect) at line
484) to propagate errors as HTTP errors instead of swallowing them: replace the
log+continue pattern with proper error propagation (use the ? operator or
map_err to convert the error into an appropriate HTTP error type such as
actix_web::error::ErrorBadRequest/ErrorRequestTimeout or a status::Custom) so
unknown provider, timeout, or validation failures return a non-2xx response to
the client. Ensure you do this in the same request handler/function that
performs the redirect so the early return on error stops further processing.
In `@server/tests/common/mod.rs`:
- Around line 275-282: The shared mock_config currently enables RomanceIo by
setting metadata_providers to include mlm::config::ProviderConfig::RomanceIo
with mlm::config::RomanceIoConfig, which couples unrelated tests to that
provider; change mock_config to leave metadata_providers empty by default (or
only populated when a test explicitly opts in) and remove the RomanceIo entry so
tests must explicitly add ProviderConfig::RomanceIo / RomanceIoConfig when they
need it.
In `@server/tests/metadata_integration.rs`:
- Around line 15-77: The test defines MockFetcher which duplicates MockClient in
mlm_meta/tests/mock_fetcher.rs; extract the shared mock into a common test
utility (e.g., a pub(crate) tests::mock_fetcher module in mlm_meta or a
test-utils crate) and replace the local MockFetcher with an import of that
shared MockClient/MockFetcher; specifically move the async_trait impl and helper
responses into the shared module, export the struct (MockClient or MockFetcher)
and reuse it in server/tests/metadata_integration.rs by updating the use/import
and removing the duplicated implementation so both tests call the same shared
mock implementation.
- Around line 163-167: The current assertion on meta (meta.title,
meta.categories, meta.tags) is too permissive because it allows the test to pass
if only one condition is true; update the test to assert the specific expected
fields explicitly: verify meta.title contains the expected substring (e.g.,
"ink") AND assert meta.categories is not empty and meta.tags is not empty (or
assert exact expected category/tag values if known). Locate the assertion
referencing meta.title, meta.categories, and meta.tags and replace the ||
combined check with separate assertions (or a single combined && check) so
failures in category/tag population cannot be masked by the title condition.
- Around line 91-120: The test builds a MetadataService via
MetadataService::from_settings into the local variable metadata but then
immediately overwrites it with a mocked instance later; remove the redundant
construction by deleting the initial MetadataService::from_settings(...) call
and its Arc wrapping (the earlier provider_settings and _default_timeout setup
can remain if used), or alternatively reuse that instance instead of replacing
it—locate the MetadataService::from_settings(...) and the let metadata =
Arc::new(metadata) lines and remove or adjust them so the mock fetcher
assignment is the only MetadataService construction.
ℹ️ Review info
Configuration used: Organization UI
Review profile: ASSERTIVE
Plan: Pro
⛔ Files ignored due to path filters (1)
Cargo.lock is excluded by !**/*.lock
📒 Files selected for processing (43)
Cargo.toml, mlm_db/src/lib.rs, mlm_db/src/v03.rs, mlm_db/src/v09.rs, mlm_db/src/v13.rs, mlm_db/src/v18.rs, mlm_meta/Cargo.toml, mlm_meta/README.md, mlm_meta/src/helpers.rs, mlm_meta/src/http.rs, mlm_meta/src/lib.rs, mlm_meta/src/providers/fake.rs, mlm_meta/src/providers/hardcover.rs, mlm_meta/src/providers/mod.rs, mlm_meta/src/providers/openlibrary.rs, mlm_meta/src/providers/romanceio.rs, mlm_meta/src/traits.rs, mlm_meta/tests/hardcover_tests.rs, mlm_meta/tests/mock_fetcher.rs, mlm_meta/tests/mock_openlibrary.rs, mlm_meta/tests/openlibrary_tests.rs, mlm_meta/tests/provider_tests.rs, mlm_meta/tests/romanceio_tests.rs, mlm_meta/tests/scoring_tests.rs, server/Cargo.toml, server/src/autograbber.rs, server/src/config.rs, server/src/config_impl.rs, server/src/lib.rs, server/src/linker/common.rs, server/src/linker/duplicates.rs, server/src/linker/torrent.rs, server/src/main.rs, server/src/metadata/mam_meta.rs, server/src/metadata/mod.rs, server/src/stats.rs, server/src/web/pages/torrent.rs, server/src/web/tables.rs, server/templates/pages/torrent.html, server/tests/cleaner_test.rs, server/tests/common/mod.rs, server/tests/linker_torrent_test.rs, server/tests/metadata_integration.rs
mlm_meta/Cargo.toml
Outdated
openssl = { version = "0.10.73", features = ["vendored"] }
serde = { version = "1.0", features = ["derive"] }
reqwest = { version = "0.11", features = ["json", "gzip", "rustls-tls"] }
tokio = { version = "1", features = ["rt-multi-thread", "macros"] }
🧩 Analysis chain
🏁 Script executed:
#!/bin/bash
set -euo pipefail
echo "== TLS-related deps in manifests =="
fd Cargo.toml | xargs -I{} sh -c 'echo "--- {}"; rg -n "reqwest|openssl|rustls-tls|default-features|native-tls" "{}" || true'
echo
echo "== Direct openssl usage in mlm_meta source =="
rg -n --type rust 'use\s+openssl|openssl::' mlm_meta || true
Repository: StirlingMouse/MLM
Length of output: 685
🌐 Web query:
reqwest 0.11 default features cargo dependencies
💡 Result:
Reqwest 0.11 default features (what you get if you write reqwest = "0.11")
In reqwest 0.11.x, Cargo enables the crate’s default feature set automatically unless you turn it off with default-features = false [3]. The features enabled by default are [1][2]:
default-tls (default), hyper-tls (default), native-tls-crate (default), tokio-native-tls (default)
Practically, that means “reqwest = "0.11"” pulls in the native/system TLS stack via the default-tls path (which commonly brings in OpenSSL on many Linux setups, for example).
Cargo.toml examples
Default behavior (default TLS stack enabled):
[dependencies]
reqwest = "0.11"
(Default features above are enabled automatically.) [1][2][3]
Disable all defaults (to control dependency footprint / TLS choice):
[dependencies]
reqwest = { version = "0.11", default-features = false }
(Disables the default feature set.) [3]
Common minimal explicit setup (example: rustls + json, no default TLS):
[dependencies]
reqwest = { version = "0.11", default-features = false, features = ["rustls-tls", "json"] }
(rustls-tls is an optional TLS backend feature in 0.11.x.) [2][3]
Tip to see the actual resolved dependency tree in your project: run cargo tree (especially useful because features are additive across the whole graph) [3].
Sources: reqwest 0.11.0 feature list [1], reqwest 0.11.7 feature list [2], Cargo feature defaults behavior [3].
Unify TLS backend selection to avoid pulling multiple stacks.
reqwest 0.11 enables default-tls by default, which brings in native TLS regardless of the rustls-tls feature. Combined with the unused vendored openssl dependency, this unnecessarily bloats the dependency tree with dual TLS stacks.
Suggested manifest adjustment (rustls-only path)
[dependencies]
anyhow = "1.0"
async-trait = "0.1"
-openssl = { version = "0.10.73", features = ["vendored"] }
serde = { version = "1.0", features = ["derive"] }
-reqwest = { version = "0.11", features = ["json", "gzip", "rustls-tls"] }
+reqwest = { version = "0.11", default-features = false, features = ["json", "gzip", "rustls-tls"] }
tokio = { version = "1", features = ["rt-multi-thread", "macros"] }
Small crate defining the Provider trait and helper types for external
metadata providers (Goodreads, Hardcover, ...).
Update provider examples to match implemented providers.
Line 5 still cites Goodreads, but this PR wires Hardcover/OpenLibrary/RomanceIo. Keeping this aligned avoids confusion for new provider contributors.
📝 Suggested doc update
-Small crate defining the Provider trait and helper types for external
-metadata providers (Goodreads, Hardcover, ...).
+Small crate defining the Provider trait and helper types for external
+metadata providers (Hardcover, OpenLibrary, RomanceIo, ...).
let mut author_score = 0.0f64;
if !q_auths.is_empty() {
    let q_auths_norm = normalize_authors(q_auths);
    let mut best = 0.0f64;
🧹 Nitpick | 🔵 Trivial
Avoid repeated query-author normalization inside per-candidate scoring.
Line 89 recomputes normalized query authors on every score_candidate call. For batch candidate scoring, pre-normalizing once per query will reduce repeated allocations/work.
Self {
    client: Client::builder()
        .user_agent("Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/143.0.0.0 Safari/537.36")
        .default_headers(headers)
        .gzip(true)
        .build()
        .unwrap()
}
Potential panic on client construction.
Client::builder().build().unwrap() can panic if the TLS backend fails to initialize (e.g., missing system certificates in constrained environments). Consider using ? or returning a Result from new().
🛡️ Proposed fix
-impl ReqwestClient {
- pub fn new() -> Self {
+impl ReqwestClient {
+ pub fn new() -> Self {
+ Self::try_new().expect("failed to build HTTP client")
+ }
+
+ pub fn try_new() -> Result<Self> {
// ... header setup ...
- Self {
- client: Client::builder()
+ Ok(Self {
+ client: Client::builder()
.user_agent("Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/143.0.0.0 Safari/537.36")
.default_headers(headers)
.gzip(true)
- .build()
- .unwrap()
- }
+ .build()?
+ })
}
}
async fn get(&self, url: &str) -> Result<String> {
    let res = self.client.get(url).send().await?.text().await?;
    Ok(res)
}
🧹 Nitpick | 🔵 Trivial
Consider adding error context for debugging.
Adding context to HTTP errors helps identify which URL failed when debugging production issues.
♻️ Proposed improvement
+use anyhow::Context;
+
async fn get(&self, url: &str) -> Result<String> {
- let res = self.client.get(url).send().await?.text().await?;
+ let res = self.client.get(url).send().await
+ .with_context(|| format!("GET request failed: {}", url))?
+ .text().await
+ .with_context(|| format!("failed to read response body from: {}", url))?;
Ok(res)
}
<form method=post>
    <input type=hidden name=provider value="{{ provider }}">
    <button name=action value=match>Match meta ({{ provider }})</button>
</form>
🧩 Analysis chain
🏁 Script executed:
#!/bin/bash
set -euo pipefail
# Find torrent handler file
handler=$(fd '^torrent\.rs$' server/src/web/pages 2>/dev/null | head -n1)
if [ -z "$handler" ]; then
echo "ERROR: Could not find torrent.rs" >&2
exit 1
fi
echo "=== Handler file found: $handler ==="
echo
# Check file size before reading
lines=$(wc -l < "$handler")
echo "File size: $lines lines"
echo
# Look for form struct and action handler
echo "=== Looking for form struct and action handling ==="
rg -n -A5 'struct.*TorrentPageForm|action.*provider|match.*provider' "$handler" | head -100
echo
# Look for provider validation / allowlist
echo "=== Looking for provider validation ==="
rg -n 'provider.*from_str|provider.*parse|provider.*match|metadata_providers|PROVIDERS|Provider::' "$handler" | head -50
echo
# Search broader in server/src for provider validation patterns
echo "=== Searching server/src for allowlist/validation patterns ==="
rg -n 'metadata_providers|supported.*provider|allowed.*provider|Provider::' server/src --type rs | head -50
Repository: StirlingMouse/MLM
Length of output: 1628
🏁 Script executed:
#!/bin/bash
set -euo pipefail
# Find match_meta function definition
echo "=== Searching for match_meta function ==="
rg -n 'fn match_meta|async fn match_meta' server/src --max-count 5
echo
echo "=== Looking at match_meta implementation ==="
rg -n -A20 'async fn match_meta' server/src | head -80
echo
echo "=== Checking if provider validation happens in metadata module ==="
fd metadata server/src -type f | head -5
echo
echo "=== Looking for Provider enum or validation patterns ==="
rg -n 'enum Provider|impl.*Provider|Provider::' server/src | head -30
Repository: StirlingMouse/MLM
Length of output: 2101
🏁 Script executed:
#!/bin/bash
set -euo pipefail
echo "=== Searching for fetch_provider function ==="
rg -n 'fn fetch_provider|async fn fetch_provider' server/src
echo
echo "=== Looking at fetch_provider implementation ==="
rg -n -A25 'async fn fetch_provider' server/src | head -100
echo
echo "=== Searching for MetadataService struct ==="
rg -n 'struct MetadataService|impl MetadataService' server/src
echo
echo "=== Looking for provider validation patterns (get, contains, lookup) ==="
rg -n 'providers\.get|providers\.contains|provider_map|get.*provider' server/src/metadata | head -30Repository: StirlingMouse/MLM
Length of output: 2092
🏁 Script executed:
#!/bin/bash
set -euo pipefail
echo "=== Checking error handling in match action (lines 358-395) ==="
sed -n '350,400p' server/src/web/pages/torrent.rs
echo
echo "=== Looking at context around error handling ==="
rg -n -B5 -A10 'tracing::error.*metadata match failed' server/src/web/pages/torrent.rs
echo
echo "=== Checking the full action match block return ==="
sed -n '355,400p' server/src/web/pages/torrent.rs
Repository: StirlingMouse/MLM
Length of output: 5791
🏁 Script executed:
#!/bin/bash
set -euo pipefail
echo "=== Looking at function signature and full return handling ==="
rg -n -B10 'pub async fn page' server/src/web/pages/torrent.rs | head -40
echo
echo "=== Checking what happens after match action block ==="
sed -n '340,420p' server/src/web/pages/torrent.rs
echo
echo "=== Looking for implicit Ok returns ==="
tail -50 server/src/web/pages/torrent.rs
Repository: StirlingMouse/MLM
Length of output: 123
🏁 Script executed:
#!/bin/bash
set -euo pipefail
echo "=== Finding function containing match action ==="
rg -n -B20 '"match" =>' server/src/web/pages/torrent.rs | grep -E '(fn |async fn |-> )' | head -10
echo
echo "=== Full context around match action and after ==="
sed -n '320,430p' server/src/web/pages/torrent.rs
Repository: StirlingMouse/MLM
Length of output: 109
🏁 Script executed:
#!/bin/bash
echo "=== Reading file from line 300 to 450 ==="
sed -n '300,450p' server/src/web/pages/torrent.rs
echo
echo "=== Counting lines in file ==="
wc -l server/src/web/pages/torrent.rs
echo
echo "=== Grepping for function definition near match action ==="
grep -n "fn\|async fn" server/src/web/pages/torrent.rs | grep -B1 -A1 "^[0-9]*.*fn"
Length of output: 6825
🏁 Script executed:
#!/bin/bash
echo "=== Confirming final return of torrent_page_post_id ==="
sed -n '500,530p' server/src/web/pages/torrent.rs
Length of output: 1018
🏁 Script executed:
#!/bin/bash
echo "=== Looking at end of match action and function return ==="
sed -n '390,410p' server/src/web/pages/torrent.rs
echo
echo "=== Searching for final Ok/Err return in torrent_page_post_id ==="
grep -n "Ok(Redirect" server/src/web/pages/torrent.rs | head -5
Length of output: 1079
Return validation errors to the user instead of silently failing.
The fetch_provider() validation correctly rejects unknown providers (line 390 in mod.rs), but the error is only logged at line 392 and never returned. The function continues and redirects with Ok(Redirect) at line 484, making the operation appear successful even when metadata matching fails. Propagate validation errors (unknown provider, timeout, etc.) as HTTP errors instead of failing silently.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@server/templates/pages/torrent.html` around lines 42 - 45, The handler
currently logs errors from fetch_provider() (and related timeout/validation
errors) then continues to return Ok(Redirect), hiding failures; change the code
that calls fetch_provider() (and the metadata matching flow that currently logs
at line 392 and returns Ok(Redirect) at line 484) to propagate errors as HTTP
errors instead of swallowing them: replace the log+continue pattern with proper
error propagation (use the ? operator or map_err to convert the error into an
appropriate HTTP error type such as
actix_web::error::ErrorBadRequest/ErrorRequestTimeout or a status::Custom) so
unknown provider, timeout, or validation failures return a non-2xx response to
the client. Ensure you do this in the same request handler/function that
performs the redirect so the early return on error stops further processing.
    metadata_providers: vec![mlm::config::ProviderConfig::RomanceIo(
        mlm::config::RomanceIoConfig {
            enabled: true,
            timeout_secs: None,
        },
    )],
    ..Default::default()
}
🧹 Nitpick | 🔵 Trivial
Prefer provider opt-in in shared mock_config.
Line 275 enables RomanceIo for every test using this helper. Keeping metadata_providers empty by default avoids coupling unrelated tests to provider behavior.
♻️ Suggested change
- metadata_providers: vec![mlm::config::ProviderConfig::RomanceIo(
- mlm::config::RomanceIoConfig {
- enabled: true,
- timeout_secs: None,
- },
- )],
+ metadata_providers: vec![],
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@server/tests/common/mod.rs` around lines 275 - 282, The shared mock_config
currently enables RomanceIo by setting metadata_providers to include
mlm::config::ProviderConfig::RomanceIo with mlm::config::RomanceIoConfig, which
couples unrelated tests to that provider; change mock_config to leave
metadata_providers empty by default (or only populated when a test explicitly
opts in) and remove the RomanceIo entry so tests must explicitly add
ProviderConfig::RomanceIo / RomanceIoConfig when they need it.
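The opt-in shape the comment asks for can be sketched with simplified stand-in types (the real Config and ProviderConfig live in mlm::config; names and fields here are assumptions):

```rust
// Hypothetical simplified config types mirroring mlm::config.
#[derive(Default, Clone)]
struct Config {
    metadata_providers: Vec<ProviderConfig>,
}

#[derive(Clone, Debug, PartialEq)]
enum ProviderConfig {
    RomanceIo { enabled: bool },
}

// Shared helper stays provider-free by default, so unrelated tests
// are not coupled to any provider's behavior...
fn mock_config() -> Config {
    Config::default()
}

// ...and tests that need a provider opt in explicitly.
fn mock_config_with(providers: Vec<ProviderConfig>) -> Config {
    Config {
        metadata_providers: providers,
        ..mock_config()
    }
}
```

A test exercising RomanceIo would then call mock_config_with(vec![ProviderConfig::RomanceIo { enabled: true }]) instead of relying on the shared default.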
// Simple mock fetcher that returns inline mock data for tests.
struct MockFetcher;

#[async_trait]
impl mlm_meta::http::HttpClient for MockFetcher {
    async fn get(&self, url: &str) -> anyhow::Result<String> {
        let u = Url::parse(url).map_err(|e| anyhow::anyhow!(e))?;
        if !u.host_str().is_some_and(|h| h.contains("romance.io")) {
            return Err(anyhow::anyhow!("unexpected host in test fetch"));
        }
        if u.path().starts_with("/json/search_books") {
            return Ok(r#"{
                "success": true,
                "books": [
                    {
                        "_id":"68b95a390bc0cee156edaf2b",
                        "info":{"title":"Of Ink and Alchemy"},
                        "authors":[{"name":"Sloane St. James"}],
                        "url":"/books/68b95a390bc0cee156edaf2b/of-ink-and-alchemy-sloane-st-james"
                    }
                ]
            }"#
            .to_string());
        }
        if u.path().starts_with("/json/search_authors") {
            return Ok(r#"{ "success": true, "authors": [] }"#.to_string());
        }
        if u.path().starts_with("/search") {
            return Ok("<html><body>search</body></html>".to_string());
        }

        Ok(r#"
        <html><head>
        <script type="application/ld+json">
        {
            "@graph": [{
                "name": "Of Ink and Alchemy",
                "author": [{"name":"Sloane St. James"}],
                "description": "A dark contemporary romance with friends to lovers."
            }]
        }
        </script>
        </head><body>
        <ul id="valid-topics-list">
            <li><a class="topic">Contemporary</a></li>
            <li><a class="topic">Dark Romance</a></li>
            <li><a class="topic">Age Difference</a></li>
            <li><a class="topic">Friends to Lovers</a></li>
        </ul>
        </body></html>
        "#
        .to_string())
    }

    async fn post(
        &self,
        _url: &str,
        _body: Option<&str>,
        _headers: &[(&str, &str)],
    ) -> anyhow::Result<String> {
        Err(anyhow::anyhow!("post not implemented in mock fetcher"))
    }
}
🧹 Nitpick | 🔵 Trivial
Duplicated mock implementation.
This MockFetcher is nearly identical to MockClient in mlm_meta/tests/mock_fetcher.rs. Consider sharing the mock implementation to reduce maintenance burden, either by making the test module accessible or by extracting a common test utilities crate.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@server/tests/metadata_integration.rs` around lines 15 - 77, The test defines
MockFetcher which duplicates MockClient in mlm_meta/tests/mock_fetcher.rs;
extract the shared mock into a common test utility (e.g., a pub(crate)
tests::mock_fetcher module in mlm_meta or a test-utils crate) and replace the
local MockFetcher with an import of that shared MockClient/MockFetcher;
specifically move the async_trait impl and helper responses into the shared
module, export the struct (MockClient or MockFetcher) and reuse it in
server/tests/metadata_integration.rs by updating the use/import and removing the
duplicated implementation so both tests call the same shared mock
implementation.
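The sharing idea can be sketched with a plain (non-async) trait for brevity; the real code would keep the async_trait impl and expose the module from mlm_meta behind something like a test-utils cargo feature (the feature name, module path, and response bodies below are all assumptions):

```rust
// Sketch: one mock HTTP client defined once and reused by multiple
// test crates. In the real layout this might live in
// mlm_meta/src/test_utils.rs, gated by a `test-utils` feature, so both
// mlm_meta's own tests and server/tests can import the same type.
pub trait HttpClient {
    fn get(&self, url: &str) -> Result<String, String>;
}

pub struct MockClient;

impl HttpClient for MockClient {
    fn get(&self, url: &str) -> Result<String, String> {
        // Same host check both duplicated mocks perform today.
        if url.contains("romance.io") {
            Ok(r#"{ "success": true, "books": [] }"#.to_string())
        } else {
            Err("unexpected host in test fetch".to_string())
        }
    }
}
```

With this in place, server/tests/metadata_integration.rs would replace its local MockFetcher with a `use` of the shared type, and any fixture change only needs to be made once.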
let _default_timeout = StdDuration::from_secs(5);
let providers = cfg.metadata_providers.clone();
// convert provider config to server metadata provider settings
let provider_settings: Vec<mlm::metadata::ProviderSetting> = providers
    .iter()
    .map(|p| match p {
        mlm::config::ProviderConfig::Hardcover(c) => {
            mlm::metadata::ProviderSetting::Hardcover {
                enabled: c.enabled,
                timeout_secs: c.timeout_secs,
                api_key: c.api_key.clone(),
            }
        }
        mlm::config::ProviderConfig::RomanceIo(c) => {
            mlm::metadata::ProviderSetting::RomanceIo {
                enabled: c.enabled,
                timeout_secs: c.timeout_secs,
            }
        }
        mlm::config::ProviderConfig::OpenLibrary(c) => {
            mlm::metadata::ProviderSetting::OpenLibrary {
                enabled: c.enabled,
                timeout_secs: c.timeout_secs,
            }
        }
    })
    .collect();
let metadata =
    MetadataService::from_settings(&provider_settings, std::time::Duration::from_secs(5));
let metadata = Arc::new(metadata);
🧹 Nitpick | 🔵 Trivial
Unused MetadataService construction.
The MetadataService created on lines 118-120 is immediately discarded and replaced by a new one with the mock fetcher on lines 146-150. Consider removing this unused construction.
♻️ Proposed fix - remove unused construction
let _default_timeout = StdDuration::from_secs(5);
- let providers = cfg.metadata_providers.clone();
- // convert provider config to server metadata provider settings
- let provider_settings: Vec<mlm::metadata::ProviderSetting> = providers
- .iter()
- .map(|p| match p {
- mlm::config::ProviderConfig::Hardcover(c) => {
- mlm::metadata::ProviderSetting::Hardcover {
- enabled: c.enabled,
- timeout_secs: c.timeout_secs,
- api_key: c.api_key.clone(),
- }
- }
- mlm::config::ProviderConfig::RomanceIo(c) => {
- mlm::metadata::ProviderSetting::RomanceIo {
- enabled: c.enabled,
- timeout_secs: c.timeout_secs,
- }
- }
- mlm::config::ProviderConfig::OpenLibrary(c) => {
- mlm::metadata::ProviderSetting::OpenLibrary {
- enabled: c.enabled,
- timeout_secs: c.timeout_secs,
- }
- }
- })
- .collect();
- let metadata =
- MetadataService::from_settings(&provider_settings, std::time::Duration::from_secs(5));
- let metadata = Arc::new(metadata);
+ // Build the mock-based metadata service directly
+ let mock_fetcher = std::sync::Arc::new(MockFetcher);
+ let rom = mlm_meta::providers::RomanceIo::with_client(mock_fetcher.clone());
+ let metadata = Arc::new(mlm::metadata::MetadataService::new(
+ vec![(std::sync::Arc::new(rom), std::time::Duration::from_secs(5))],
+ std::time::Duration::from_secs(5),
+ ));
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@server/tests/metadata_integration.rs` around lines 91 - 120, The test builds
a MetadataService via MetadataService::from_settings into the local variable
metadata but then immediately overwrites it with a mocked instance later; remove
the redundant construction by deleting the initial
MetadataService::from_settings(...) call and its Arc wrapping (the earlier
provider_settings and _default_timeout setup can remain if used), or
alternatively reuse that instance instead of replacing it—locate the
MetadataService::from_settings(...) and the let metadata = Arc::new(metadata)
lines and remove or adjust them so the mock fetcher assignment is the only
MetadataService construction.
assert!(
    meta.title.to_lowercase().contains("ink")
        || !meta.categories.is_empty()
        || !meta.tags.is_empty()
);
Weak assertion may hide failures.
The assertion passes if any of three conditions is true. If the mock returns data with only a title containing "ink" but fails to populate categories/tags as expected, the test would still pass. Consider asserting specific expected fields.
💡 Suggested stricter assertion
- assert!(
- meta.title.to_lowercase().contains("ink")
- || !meta.categories.is_empty()
- || !meta.tags.is_empty()
- );
+ assert!(
+ meta.title.to_lowercase().contains("ink"),
+ "Expected title to contain 'ink', got: {}",
+ meta.title
+ );
+ assert!(
+ !meta.tags.is_empty(),
+ "Expected tags to be populated"
+ );
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@server/tests/metadata_integration.rs` around lines 163 - 167, The current
assertion on meta (meta.title, meta.categories, meta.tags) is too permissive
because it allows the test to pass if only one condition is true; update the
test to assert the specific expected fields explicitly: verify meta.title
contains the expected substring (e.g., "ink") AND assert meta.categories is not
empty and meta.tags is not empty (or assert exact expected category/tag values
if known). Locate the assertion referencing meta.title, meta.categories, and
meta.tags and replace the || combined check with separate assertions (or a
single combined && check) so failures in category/tag population cannot be
masked by the title condition.
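The masking effect is easy to demonstrate with a stand-in struct (field names mirror the test's meta; everything else here is illustrative):

```rust
// Hypothetical stand-in for the metadata value under test.
struct Meta {
    title: String,
    categories: Vec<String>,
    tags: Vec<String>,
}

// OR-combined check: true as soon as ANY one condition holds, so a
// regression in categories/tags population is hidden by a good title.
fn weak_check(meta: &Meta) -> bool {
    meta.title.to_lowercase().contains("ink")
        || !meta.categories.is_empty()
        || !meta.tags.is_empty()
}

// Stricter check: the title AND the tags must both be as expected.
fn strict_check(meta: &Meta) -> bool {
    meta.title.to_lowercase().contains("ink") && !meta.tags.is_empty()
}
```

Against a value whose tags failed to populate, the weak check still passes while the strict one correctly fails, which is exactly the failure mode the comment describes.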