Split out core logic and web ui to separate crates #12

StirlingMouse wants to merge 2 commits into metadata-providers from
Conversation
Important: Review skipped. Auto reviews are disabled on base/target branches other than the default branch. Please check the settings in the CodeRabbit UI or the ⚙️ Run configuration. Configuration used: Organization UI. Review profile: ASSERTIVE. Plan: Pro.
📝 Walkthrough

This pull request restructures the monorepo by extracting core functionality into a new `mlm_core` crate.
Sequence Diagram(s)

```mermaid
sequenceDiagram
    participant Main as Main.rs
    participant Runner as spawn_tasks()
    participant Tasks as Background Tasks
    participant SharedState as Stats/Events/Context
    participant Component as Task Handler<br/>(e.g., Autograbber)
    Main->>Runner: spawn_tasks(config, db, mam,<br/>stats, metadata)
    Runner->>SharedState: Create Stats, Events, Backend
    Runner->>Tasks: Launch task watchers &<br/>intervals for each task
    Runner->>Component: Pass Context with triggers
    Component->>SharedState: Subscribe to events,<br/>read config via context
    Component->>Component: Perform work (search,<br/>link, clean)
    Component->>SharedState: Emit events via<br/>events.event.0.send()
    Component->>SharedState: Update stats on<br/>success/failure
    Runner->>Main: Return Context with<br/>public triggers
```

```mermaid
sequenceDiagram
    participant Web as Web Handler
    participant Context as Context
    participant Backend as SsrBackend
    participant DB as Database
    participant MAM as MAM API
    participant Events as Events Channel
    Web->>Context: context.db()
    Context->>Backend: ssr().db
    Backend->>DB: Return Arc<Database>
    Web->>Context: context.mam()
    Context->>Backend: ssr().mam
    Backend->>MAM: Return Result<Arc<MaM>>
    Web->>DB: Perform query
    Web->>Events: write_event(db, events, event)
    Events->>Events: events.event.0.send(Some(event))
    Events-->>Web: Event broadcasted
```
Estimated code review effort: 🎯 5 (Critical) | ⏱️ ~120 minutes
🚥 Pre-merge checks: ✅ 2 passed | ❌ 1 failed (1 warning)
@coderabbitai review
✅ Actions performed: Review triggered.
Actionable comments posted: 17
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (11)
mlm_web_askama/src/pages/errors.rs (3)
89-91: 🧹 Nitpick | 🔵 Trivial

Replace `eprintln!` with structured logging via `tracing`.

The crate already depends on `tracing`. Using `eprintln!` bypasses structured logging, log levels, and any configured log sinks.

♻️ Proposed fix

```diff
 action => {
-    eprintln!("unknown action: {action}");
+    tracing::warn!(action, "unknown action in errors page form");
 }
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@mlm_web_askama/src/pages/errors.rs` around lines 89 - 91, The match arm currently prints unknown actions with eprintln!("unknown action: {action}"), which bypasses the crate's tracing-based structured logging; replace that call with a tracing macro (e.g., tracing::warn! or tracing::error!) so the message is emitted through the configured log sinks and includes structured fields like action; ensure the tracing macro is in scope (use tracing::{warn, error} or call tracing::warn!) and format the call as tracing::warn!(action = %action, "unknown action") so the action value is captured as structured data.
51-61: 🧹 Nitpick | 🔵 Trivial

Sorting logic is confusing due to double reversal.

The combination of `.rev()` on line 32 and `ord.reverse()` when `asc` is true creates hard-to-follow semantics. Consider clarifying the intent or documenting the expected behavior.

♻️ Suggested clarification

```diff
 if let Some(sort_by) = &sort.sort_by {
     errored_torrents.sort_by(|a, b| {
         let ord = match sort_by {
             ErrorsPageSort::Step => step(&a.id).cmp(step(&b.id)),
             ErrorsPageSort::Title => a.title.cmp(&b.title),
             ErrorsPageSort::Error => a.error.cmp(&b.error),
             ErrorsPageSort::CreatedAt => a.created_at.cmp(&b.created_at),
         };
-        if sort.asc { ord.reverse() } else { ord }
+        // Default order is descending (from .rev() on scan); reverse for ascending
+        if sort.asc { ord.reverse() } else { ord }
     });
 }
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@mlm_web_askama/src/pages/errors.rs` around lines 51 - 61, The sort comparator for errored_torrents is producing confusing double-reversal semantics (you currently call .rev() elsewhere and then use ord.reverse() when sort.asc is true); update the closure used in errored_torrents.sort_by so the comparator returns the natural ordering when sort.asc is true and the reversed ordering when sort.asc is false — e.g., compute ord from the match over ErrorsPageSort (using step(&a.id), a.title, a.error, a.created_at) and then return if sort.asc { ord } else { ord.reverse() }; if you actually need a global .rev() call elsewhere, remove or invert that call instead so the overall behavior matches the intuitive meaning of sort.asc.
76-87: 🧹 Nitpick | 🔵 Trivial

Transaction per removal is inefficient; batch into a single transaction.

Each iteration creates a new read-write transaction. For bulk removal, this adds unnecessary overhead and loses atomicity across the batch.

♻️ Proposed batch transaction

```diff
 "remove" => {
+    let (_guard, rw) = db.rw_async().await?;
     for error in form.errors {
         let Ok(error) = serde_json::from_str::<ErroredTorrentId>(&error) else {
             return Err(anyhow::Error::msg("Could not parse error").into());
         };
-        let (_guard, rw) = db.rw_async().await?;
         let Some(error) = rw.get().primary::<ErroredTorrent>(error)? else {
             return Err(anyhow::Error::msg("Could not find error").into());
         };
         rw.remove(error)?;
-        rw.commit()?;
     }
+    rw.commit()?;
 }
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@mlm_web_askama/src/pages/errors.rs` around lines 76 - 87, The loop currently opens a new read-write transaction for each error; instead, call db.rw_async().await once to obtain (_guard, rw) before iterating form.errors, parse each serde_json::from_str::<ErroredTorrentId>(&error) and then use rw.get().primary::<ErroredTorrent>(parsed_id)? to find the record, call rw.remove(found_error)? for each, and after the loop call rw.commit()? once to commit the batch atomically (return early on parse/missing errors as before).

mlm_db/src/lib.rs (1)
193-202: ⚠️ Potential issue | 🔴 Critical

`Flags` no longer round-trips through JSON.

`#[serde(try_from = "HashMap<String, bool>")]` only customizes deserialization. With derived `Serialize`, `Flags` serializes as a struct with optional fields, emitting `null` for any `None` values. This struct-shaped JSON cannot be deserialized back through the `HashMap<String, bool>` format, breaking round-trip serialization for default or partial `Flags` values.

Add `into = "HashMap<String, bool>"` to the serde attribute and implement `From<Flags> for HashMap<String, bool>` to serialize only the non-`None` fields in HashMap format.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@mlm_db/src/lib.rs` around lines 193 - 202, The Flags struct currently only customizes deserialization via #[serde(try_from = "HashMap<String, bool>")] which breaks JSON round-trips because serialization still emits a struct with optional fields; change the serde attribute to #[serde(try_from = "HashMap<String, bool>", into = "HashMap<String, bool>")] and implement From<Flags> for HashMap<String, bool> that iterates over Flags' fields (crude_language, violence, some_explicit, explicit, abridged, lgbt) and inserts only those with Some(true/false) into the HashMap so serialization produces the same HashMap-shaped JSON that deserialization expects.

mlm_core/src/config.rs (2)
162-185: ⚠️ Potential issue | 🟠 Major

Add `deny_unknown_fields` to TorrentSearch and SnatchlistSearch to catch stale configs.

If old configs nested filter settings under `edition = { ... }` at the search level, that key is now unknown and will silently deserialize with filter defaults instead of failing fast, potentially broadening autograb/snatchlist queries unexpectedly. Adding `deny_unknown_fields` to both structs would make the incompatibility explicit and prevent silent data loss during migration.

Also applies to: 206-219
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@mlm_core/src/config.rs` around lines 162 - 185, Add serde's deny_unknown_fields to the TorrentSearch and SnatchlistSearch structs so unknown keys (like a stale nested "edition" key) cause deserialization to fail rather than silently being ignored; locate the struct definitions for TorrentSearch and SnatchlistSearch and add #[serde(deny_unknown_fields)] to each (placed alongside the existing derive/serde attributes) to enforce explicit config migrations.
21-29: ⚠️ Potential issue | 🟠 Major

Keep secret-bearing config out of generic serialization, or explicitly redact fields before serializing.

With `Config`, `HardcoverConfig`, `AudiobookShelfConfig`, `NotionList`, and `QbitConfig` deriving `Serialize` without field-level redaction, the `api_key`, `token`, and `password` fields are exposed to accidental serialization in logs, error responses, or debug output. While the current web layer doesn't directly return the full config as JSON, relying on template-level hiding (as done for the qbittorrent password) is not a proper security control. Use `#[serde(skip_serializing)]` on secret fields or maintain separate sanitized config types for web exposure.

Also applies to: 79-144, 153-160, 238-252
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@mlm_core/src/config.rs` around lines 21 - 29, The secret-bearing config structs (e.g., HardcoverConfig::api_key, Config, AudiobookShelfConfig, NotionList, QbitConfig and their token/password fields) must not be serializable for outputs; either annotate each secret field with #[serde(skip_serializing)] or introduce explicit sanitized DTOs used for web/debug output, then update any code paths that serialize these structs to use the sanitized types. Locate the structs named HardcoverConfig, QbitConfig, AudiobookShelfConfig, NotionList and the top-level Config and add #[serde(skip_serializing)] to fields like api_key, token, password (or create corresponding *_Public / *_Sanitized structs and map fields over) and ensure any serialization calls use the sanitized versions instead.

mlm_web_askama/src/pages/selected.rs (1)
174-176: 🧹 Nitpick | 🔵 Trivial

Inconsistent state extraction pattern.

`selected_torrents_page_post` uses `State<Arc<Database<'static>>>` directly while `selected_page` uses `State<Context>`. Consider migrating this handler to use `State<Context>` with `context.db()` for consistency with the rest of the codebase.

♻️ Proposed refactor for consistency

```diff
 pub async fn selected_torrents_page_post(
-    State(db): State<Arc<Database<'static>>>,
+    State(context): State<Context>,
     uri: OriginalUri,
     Form(form): Form<TorrentsPageForm>,
 ) -> Result<Redirect, AppError> {
     match form.action.as_str() {
         "remove" => {
             for torrent in form.torrents {
-                let (_guard, rw) = db.rw_async().await?;
+                let (_guard, rw) = context.db().rw_async().await?;
                 let Some(mut torrent) = rw.get().primary::<SelectedTorrent>(torrent)? else {
```

Apply similar changes to the "update" branch at line 198.

🤖 Prompt for AI Agents
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@mlm_web_askama/src/pages/selected.rs` around lines 174 - 176, The handler selected_torrents_page_post currently extracts State<Arc<Database<'static>>> directly; change its signature to accept State<Context> (matching selected_page) and inside the function obtain the DB via context.db() when you need the Database; update any call sites and the "update" branch handler (the other POST/update handler around the same area) to follow the same pattern so both selected_torrents_page_post and the update handler use State<Context> and context.db() for consistency.

mlm_core/src/lists/goodreads.rs (1)
153-164: 🧹 Nitpick | 🔵 Trivial

Consider a context struct to reduce parameter count.

The `#[allow(clippy::too_many_arguments)]` suppression addresses the lint, but `search_item` now has 8 parameters. Consider grouping related parameters into a struct (e.g., `SearchContext` containing `config`, `db`, `mam`, `events`) to improve readability and make future parameter additions easier.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@mlm_core/src/lists/goodreads.rs` around lines 153 - 164, The function search_item currently takes many parameters (Config, Database, MaM, GoodreadsList, Item, ListItem, max_torrents, Events) and uses a clippy allow; create a new struct (e.g., SearchContext) that groups together related dependencies such as config: Config, db: Database<'_>, mam: MaM<'_>, and events: crate::stats::Events (or similar), update search_item signature to accept &SearchContext plus the remaining per-item parameters (list: &GoodreadsList, item: &Item, db_item: ListItem, max_torrents: u64), and update all call sites to construct and pass a SearchContext instance; ensure types and lifetimes mirror the originals so methods inside search_item keep functioning.

mlm_web_askama/src/pages/index.rs (1)
107-147: 🧹 Nitpick | 🔵 Trivial

Silent failure when triggers are absent.

The guarded trigger pattern (`if let Some(tx) = ...`) silently does nothing when the trigger channel is `None`. Previously, this would have caused a compile error or runtime panic if the fields were non-optional.

For a deprecated file this is acceptable, but note that users won't receive feedback if an action fails due to a missing trigger.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@mlm_web_askama/src/pages/index.rs` around lines 107 - 147, The guarded trigger branches (e.g., "run_torrent_linker", "run_folder_linker", "run_search", "run_import", "run_downloader", "run_abs_matcher") silently noop when their channel in context.triggers (torrent_linker_tx, folder_linker_tx, search_tx.get(...), import_tx.get(...), downloader_tx, audiobookshelf_tx) is None; update each branch to surface the missing-trigger condition by returning an Err or logging an explicit error (for run_search/run_import also validate form.index beforehand) instead of silently doing nothing so callers and users receive feedback when the trigger channel is absent.

mlm_web_askama/src/pages/torrents.rs (1)
599-606: 🧹 Nitpick | 🔵 Trivial

Inconsistent config access pattern.

This code uses `context.config.lock().await` directly, while other parts of this file and the codebase use `context.config().await`. For consistency and encapsulation, prefer the accessor method.

♻️ Proposed fix

```diff
 let template = TorrentsPageTemplate {
-    abs_url: context
-        .config
-        .lock()
-        .await
+    abs_url: context.config().await
         .audiobookshelf
         .as_ref()
         .map(|abs| abs.url.clone()),
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@mlm_web_askama/src/pages/torrents.rs` around lines 599 - 606, The code instantiates TorrentsPageTemplate by directly calling context.config.lock().await; change this to use the accessor method context.config().await to match the rest of the codebase and preserve encapsulation—locate the TorrentsPageTemplate construction and replace any direct .config.lock().await usage with the asynchronous accessor context.config().await while keeping the same field access (e.g., .audiobookshelf.as_ref().map(|abs| abs.url.clone())).

server/tests/metadata_integration.rs (1)
171-176: ⚠️ Potential issue | 🟡 Minor

Strengthen the enrichment assertion.

This still passes when romance.io parsing stops populating categories/tags, because the mocked title alone satisfies the `||` chain. Assert the title separately and require at least one enriched field so the test actually guards the parsing path.

Suggested fix

```diff
-    assert!(
-        meta.title.to_lowercase().contains("ink")
-            || !meta.categories.is_empty()
-            || !meta.tags.is_empty()
-    );
+    assert_eq!(meta.title, "Of Ink and Alchemy");
+    assert!(
+        !meta.categories.is_empty() || !meta.tags.is_empty(),
+        "expected romance.io enrichment to add categories or tags",
+    );
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@server/tests/metadata_integration.rs` around lines 171 - 176, The current compound assert lets a mocked title alone satisfy the check; update the test in metadata_integration.rs to split into two assertions: one that explicitly asserts the title contains the expected substring (use meta.title.to_lowercase().contains("ink") or similar), and a second that requires at least one enrichment field to be present (assert that meta.categories is not empty OR meta.tags is not empty). Locate the assertions that reference meta.title, meta.categories, and meta.tags and replace the single assert(...) with the two clearer asserts described above.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@mlm_core/Cargo.toml`:
- Line 16: The git dependencies in Cargo.toml (e.g., the lava_torrent entry)
should be pinned to specific commit SHAs to ensure reproducible builds; update
each git dependency line (lava_torrent, native_db, qbit, sanitize-filename) to
include the rev = "<commit-sha>" (or replace branch URL with the exact commit)
instead of relying on a branch head so the dependency fetch is deterministic and
stable.
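A pinned entry looks like the fragment below, using the repository URLs named elsewhere in this review; `<commit-sha>` is a placeholder kept from the review text, to be replaced with the commit actually tested:

```toml
# Pin git dependencies to exact commits so fetches are deterministic
# even if the upstream branch moves.
native_db = { git = "https://github.com/StirlingMouse/native_db.git", rev = "<commit-sha>" }
qbit = { git = "https://github.com/StirlingMouse/qbittorrent-webui-api.git", rev = "<commit-sha>" }
```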
- Line 46: The tokio dependency declaration is redundant because the "full"
feature includes "sync" and "macros"; edit the tokio entry in Cargo.toml (the
line with tokio = { version = "1.45.1", features = [...] }) to remove the
explicit "sync" and "macros" features and leave only features = ["full"] (or
omit features entirely if you want default), saving noise while preserving
behavior.
- Line 30: The reqwest dependency in mlm_core uses the OpenSSL backend (features
= ["json", "default-tls"]); change it to use rustls by replacing the feature
"default-tls" with "rustls-tls" (keep default-features = false and "json"), and
make the same change for any other crates that currently use default-tls (e.g.,
mlm_web_askama) so all workspace crates (mlm_core, mlm_web_askama, mlm_mam,
mlm_meta) consistently depend on reqwest with features = ["json","rustls-tls"]
to standardize the TLS backend and reduce binary size and dependency
duplication.
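A minimal sketch of the standardized entry; the version number here is illustrative and should match whatever the workspace already uses:

```toml
# rustls backend instead of OpenSSL; identical across all workspace crates.
reqwest = { version = "0.12", default-features = false, features = ["json", "rustls-tls"] }
```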
In `@mlm_core/src/config_impl.rs`:
- Around line 177-299: mam_search() currently omits several search parameters
(sort_by, filter.edition.media_type, Type::Mine, Type::Uploader(_), and the
musicology/radio category sets) which causes it to generate a broadened MaM URL;
update TorrentSearch::mam_search to either encode each of these supported
variants into the URL query (map sort_by -> tor[sort_type]/tor[sort_by] as
appropriate, include edition.media_type in the tor[cat] or tor[media] params,
handle Type::Mine and Type::Uploader(_) by adding the appropriate tor[owner] or
tor[user] query keys, and include musicology/radio categories when iterating
categories) or make mam_search return an error/None when encountering any
unsupported variant so callers fail fast; locate and modify mam_search, the
sort_by enum/field, Type enum variants (Mine, Uploader), and the
edition.categories sets (musicology, radio) to add the missing mappings or
early-return behavior.
In `@mlm_core/src/lib.rs`:
- Around line 40-55: Change the ContextExt::ssr accessor to be fallible (return
Option<&SsrBackend> or anyhow::Result<&SsrBackend>) instead of unconditionally
panicking: check Context::backend for None and perform a
downcast_ref::<SsrBackend>(), returning a None/Err on failure rather than
calling expect; then update the dependent accessors db(), mam(), and metadata()
to propagate or map that fallible result (or accept Option) so they no longer
call ssr() which panics—handle unwrapping/errors at each call site where an
SSR-specific backend is actually required.
- Around line 61-66: The mam() helper currently discards the underlying error
from self.ssr().mam and always returns a generic anyhow::anyhow!("mam_id
error"); update mam() to propagate or wrap the original error instead of
replacing it — e.g., map the Err(e) from self.ssr().mam to an anyhow error that
includes e (or use .with_context on the error) so callers of mam() receive the
original failure details; refer to the mam() function and self.ssr().mam in your
changes.
In `@mlm_core/src/linker/folder.rs`:
- Line 339: Fix the inconsistent indentation for the closing statement "};" so
it uses 4 spaces like the rest of the file; locate the closing brace (the "};"
token shown in the diff) in the same scope as the surrounding code (e.g., the
end of the function/struct/impl block in folder.rs) and adjust its indentation
from 3 spaces to 4 spaces to match the project's style.
In `@mlm_core/src/qbittorrent.rs`:
- Around line 212-223: The current code swallows qBittorrent RPC errors by using
`let Ok(torrents) = qbit.torrents(...).await else { continue; }`, which turns
transport/auth/listing failures into “not found”; change this to propagate
errors instead of continuing: replace that `let Ok(...)` pattern with a proper
error propagation (e.g., use the `?` operator or `match` and return Err) when
calling `qbit.torrents(Some(TorrentListParams { hashes:
Some(vec![hash.to_string()]), ..TorrentListParams::default() })) .await`, then
keep the `let Some(torrent) = torrents.into_iter().next() else { continue; }`
logic only for the actual empty-result case; reference symbols: `qbit`,
`torrents`, `TorrentListParams`, `hash`, and the `torrents.into_iter().next()`
check.
In `@mlm_core/src/runner.rs`:
- Around line 496-498: The error context string is incorrect: change the
.context("link_torrents_to_library") call on the result of
link_folders_to_library(...) in runner.rs to use a correct, matching context
like "link_folders_to_library" (or another descriptive identifier referencing
the folder linker task) so the error message correctly identifies the failing
operation.
In `@mlm_db/Cargo.toml`:
- Line 26: The uuid dependency in the mlm_db crate is enabling the "js" feature
unconditionally; remove "js" from the features list for the uuid dependency in
mlm_db (leave ["serde","v4"]) or switch to a wasm-only conditional so only WASM
targets get uuid with "js" (e.g., add a target-specific dependency for
cfg(target_arch = "wasm32") that includes ["serde","v4","js"]); update the uuid
entry in mlm_db's Cargo.toml and ensure any WASM-only crates that need the "js"
feature add it in their own Cargo.toml instead.
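The target-specific split looks like the fragment below; the version numbers are illustrative, not taken from the actual manifests:

```toml
[dependencies]
uuid = { version = "1", features = ["serde", "v4"] }

# Only WASM builds pull in the "js" feature.
[target.'cfg(target_arch = "wasm32")'.dependencies]
uuid = { version = "1", features = ["serde", "v4", "js"] }
```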
In `@mlm_meta/tests/mock_openlibrary.rs`:
- Around line 11-15: The mock's host and path checks are too permissive: replace
the contains and starts_with checks on u with exact matches to the expected host
and endpoint. For example, change the host check to require u.host_str() ==
Some("openlibrary.org") (or u.host_str().map_or(false, |h| h ==
"openlibrary.org")) and change the path check to require u.path() ==
"/search.json" (or an exact match for the full expected path), so the mock fails
for variants like "notopenlibrary.org" or "/search.json.bak". Ensure you update
the error messages used in the Err(...) calls on the same branches so they
reflect the exact-match expectation.
In `@mlm_web_askama/build.rs`:
- Around line 3-11: The build script's main currently always updates the DATE
env var; modify main to emit a "cargo:rerun-if-changed=build.rs" directive
(printed before setting DATE) so Cargo only re-runs this script when the build
script itself changes, or if you intend per-build timestamps instead, add a
short comment atop main clarifying that continuous updates are intentional;
ensure the change references the existing main function and the DATE env var
emission so maintainers can find it easily.
- Around line 7-9: Replace the unsafe unwrap() call on
now.duration_since(SystemTime::UNIX_EPOCH) with a defensive expect() that
includes a clear message; locate the expression using
now.duration_since(SystemTime::UNIX_EPOCH).unwrap().as_secs() in build.rs and
change unwrap() to expect("system time is before UNIX_EPOCH when computing build
timestamp") so the code still returns as_secs() but yields a helpful error if
the impossible occurs.
In `@mlm_web_askama/Cargo.toml`:
- Around line 17-19: The Cargo.toml currently references git dependencies by
branch for native_db and qbit (native_db = { git =
"https://github.com/StirlingMouse/native_db.git", branch = "0.8.x" } and qbit =
{ git = "https://github.com/StirlingMouse/qbittorrent-webui-api.git" }); change
these to pin explicit commit SHAs using the rev (or tag) field to ensure
reproducible builds (e.g., replace branch = "0.8.x" with rev = "<commit-sha>"
and add rev = "<commit-sha>" for qbit), obtaining the correct commit SHAs from
the respective repositories and updating Cargo.toml accordingly.
In `@mlm_web_askama/src/pages/config.rs`:
- Around line 189-192: The match arm that currently does `eprintln!("unknown
action: {action}");` should not silently print to stderr; replace the
`eprintln!` with a structured `tracing::warn!` call and return an explicit
error/BadRequest from the enclosing handler instead of falling through. Locate
the match arm handling `action` (the branch with `action => { ... }`) and change
it to call `tracing::warn!("unknown action: {}", action)` and then return an
appropriate `Err(...)` or HTTP 400 response (matching the function's return
type) so callers/clients and logs consistently surface the problem. Ensure the
returned error type matches the function signature (e.g., convert to the
handler's Error/Result or return an Actix/warp HTTP bad request).
- Around line 96-146: The loop currently acquires mam_semaphore and does the MAM
work inline, causing sequential MAM requests; refactor the Err branch so it
spawns a concurrent task/future (e.g., via tokio::spawn or collecting futures
for join_all) that captures mam_id, clones mam_semaphore, context, config,
tag_filter (or necessary refs) and within the task acquires permit =
mam_semaphore.clone().acquire_owned().await, performs
mam.get_torrent_info_by_id(mam_id).await, calls update_torrent_meta(...) if
needed, evaluates tag_filter.filter.matches(&mam_torrent) and returns whether
the original torrent should be pushed; then collect task results and merge
matching torrents into matched_torrents after join_all; keep synchronous
matches_lib Ok branch as-is and preserve error handling/logging.
In `@server/Cargo.toml`:
- Line 27: The serde_json dependency is declared as "1.0" which is inconsistent
with the more specific "1.0.140" used in mlm_core and mlm_web_askama; update the
serde_json entry in this Cargo.toml to the exact same version string ("1.0.140")
so the workspace uses a consistent version spec across crates (search for the
serde_json line and replace its version to match mlm_core/mlm_web_askama).
---
Outside diff comments:
In `@mlm_core/src/config.rs`:
- Around line 162-185: Add serde's deny_unknown_fields to the TorrentSearch and
SnatchlistSearch structs so unknown keys (like a stale nested "edition" key)
cause deserialization to fail rather than silently being ignored; locate the
struct definitions for TorrentSearch and SnatchlistSearch and add
#[serde(deny_unknown_fields)] to each (placed alongside the existing
derive/serde attributes) to enforce explicit config migrations.
- Around line 21-29: The secret-bearing config structs (e.g.,
HardcoverConfig::api_key, Config, AudiobookShelfConfig, NotionList, QbitConfig
and their token/password fields) must not be serializable for outputs; either
annotate each secret field with #[serde(skip_serializing)] or introduce explicit
sanitized DTOs used for web/debug output, then update any code paths that
serialize these structs to use the sanitized types. Locate the structs named
HardcoverConfig, QbitConfig, AudiobookShelfConfig, NotionList and the top-level
Config and add #[serde(skip_serializing)] to fields like api_key, token,
password (or create corresponding *_Public / *_Sanitized structs and map fields
over) and ensure any serialization calls use the sanitized versions instead.
In `@mlm_core/src/lists/goodreads.rs`:
- Around line 153-164: The function search_item currently takes many parameters
(Config, Database, MaM, GoodreadsList, Item, ListItem, max_torrents, Events) and
uses a clippy allow; create a new struct (e.g., SearchContext) that groups
together related dependencies such as config: Config, db: Database<'_>, mam:
MaM<'_>, and events: crate::stats::Events (or similar), update search_item
signature to accept &SearchContext plus the remaining per-item parameters (list:
&GoodreadsList, item: &Item, db_item: ListItem, max_torrents: u64), and update
all call sites to construct and pass a SearchContext instance; ensure types and
lifetimes mirror the originals so methods inside search_item keep functioning.
In `@mlm_db/src/lib.rs`:
- Around line 193-202: The Flags struct currently only customizes
deserialization via #[serde(try_from = "HashMap<String, bool>")] which breaks
JSON round-trips because serialization still emits a struct with optional
fields; change the serde attribute to #[serde(try_from = "HashMap<String,
bool>", into = "HashMap<String, bool>")] and implement From<Flags> for
HashMap<String, bool> that iterates over Flags' fields (crude_language,
violence, some_explicit, explicit, abridged, lgbt) and inserts only those with
Some(true/false) into the HashMap so serialization produces the same
HashMap-shaped JSON that deserialization expects.
In `@mlm_web_askama/src/pages/errors.rs`:
- Around line 89-91: The match arm currently prints unknown actions with
eprintln!("unknown action: {action}"), which bypasses the crate's tracing-based
structured logging; replace that call with a tracing macro (e.g., tracing::warn!
or tracing::error!) so the message is emitted through the configured log sinks
and includes structured fields like action; ensure the tracing macro is in scope
(use tracing::{warn, error} or call tracing::warn!) and format the call as
tracing::warn!(action = %action, "unknown action") so the action value is
captured as structured data.
- Around line 51-61: The sort comparator for errored_torrents is producing
confusing double-reversal semantics (you currently call .rev() elsewhere and
then use ord.reverse() when sort.asc is true); update the closure used in
errored_torrents.sort_by so the comparator returns the natural ordering when
sort.asc is true and the reversed ordering when sort.asc is false — e.g.,
compute ord from the match over ErrorsPageSort (using step(&a.id), a.title,
a.error, a.created_at) and then return if sort.asc { ord } else { ord.reverse()
}; if you actually need a global .rev() call elsewhere, remove or invert that
call instead so the overall behavior matches the intuitive meaning of sort.asc.
- Around line 76-87: The loop currently opens a new read-write transaction for
each error; instead, call db.rw_async().await once to obtain (_guard, rw) before
iterating form.errors, parse each
serde_json::from_str::<ErroredTorrentId>(&error) and then use
rw.get().primary::<ErroredTorrent>(parsed_id)? to find the record, call
rw.remove(found_error)? for each, and after the loop call rw.commit()? once to
commit the batch atomically (return early on parse/missing errors as before).
In `@mlm_web_askama/src/pages/index.rs`:
- Around line 107-147: The guarded trigger branches (e.g., "run_torrent_linker",
"run_folder_linker", "run_search", "run_import", "run_downloader",
"run_abs_matcher") silently noop when their channel in context.triggers
(torrent_linker_tx, folder_linker_tx, search_tx.get(...), import_tx.get(...),
downloader_tx, audiobookshelf_tx) is None; update each branch to surface the
missing-trigger condition by returning an Err or logging an explicit error (for
run_search/run_import also validate form.index beforehand) instead of silently
doing nothing so callers and users receive feedback when the trigger channel is
absent.
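A hedged sketch of that fix: a missing trigger channel becomes an explicit error instead of a silent no-op. `Triggers` and `downloader_tx` stand in for the real `Context` fields, and the `String` payload stands in for a real mpsc sender:

```rust
// Stand-in for context.triggers; the real field holds Option<mpsc::Sender<_>>.
struct Triggers {
    downloader_tx: Option<String>,
}

fn run_downloader(triggers: &Triggers) -> Result<(), String> {
    // ok_or_else turns the absent channel into an error the caller can surface.
    let tx = triggers
        .downloader_tx
        .as_ref()
        .ok_or_else(|| "downloader trigger channel is not configured".to_string())?;
    // ... send the trigger on `tx` here ...
    let _ = tx;
    Ok(())
}

fn main() {
    let missing = Triggers { downloader_tx: None };
    // The caller now sees the failure instead of a silent no-op.
    assert!(run_downloader(&missing).is_err());
}
```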
In `@mlm_web_askama/src/pages/selected.rs`:
- Around line 174-176: The handler selected_torrents_page_post currently
extracts State<Arc<Database<'static>>> directly; change its signature to accept
State<Context> (matching selected_page) and inside the function obtain the DB
via context.db() when you need the Database; update any call sites and the
"update" branch handler (the other POST/update handler around the same area) to
follow the same pattern so both selected_torrents_page_post and the update
handler use State<Context> and context.db() for consistency.
In `@mlm_web_askama/src/pages/torrents.rs`:
- Around line 599-606: The code instantiates TorrentsPageTemplate by directly
calling context.config.lock().await; change this to use the accessor method
context.config().await to match the rest of the codebase and preserve
encapsulation—locate the TorrentsPageTemplate construction and replace any
direct .config.lock().await usage with the asynchronous accessor
context.config().await while keeping the same field access (e.g.,
.audiobookshelf.as_ref().map(|abs| abs.url.clone())).
In `@server/tests/metadata_integration.rs`:
- Around line 171-176: The current compound assert lets a mocked title alone
satisfy the check; update the test in metadata_integration.rs to split into two
assertions: one that explicitly asserts the title contains the expected
substring (use meta.title.to_lowercase().contains("ink") or similar), and a
second that requires at least one enrichment field to be present (assert that
meta.categories is not empty OR meta.tags is not empty). Locate the assertions
that reference meta.title, meta.categories, and meta.tags and replace the single
assert(...) with the two clearer asserts described above.
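The two-assert split can be sketched like this; `Meta` and the sample values are stand-ins for the real metadata type used by the integration test:

```rust
// Stand-in for the metadata struct the integration test inspects.
struct Meta {
    title: String,
    categories: Vec<String>,
    tags: Vec<String>,
}

fn assert_enriched(meta: &Meta) {
    // 1) The title must actually match the expected substring...
    assert!(
        meta.title.to_lowercase().contains("ink"),
        "unexpected title: {}",
        meta.title
    );
    // 2) ...and at least one enrichment field must be populated.
    assert!(
        !meta.categories.is_empty() || !meta.tags.is_empty(),
        "no enrichment fields returned"
    );
}

fn main() {
    let meta = Meta {
        title: "Invisible Ink".into(),
        categories: vec!["Fiction".into()],
        tags: vec![],
    };
    assert_enriched(&meta);
}
```

Separate assertions also give separate failure messages, so a missing enrichment field can no longer hide behind a passing title check.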
ℹ️ Review info
⚙️ Run configuration
Configuration used: Organization UI
Review profile: ASSERTIVE
Plan: Pro
Run ID: c0423522-ff34-4228-888a-add37d8d7d07
⛔ Files ignored due to path filters (1)
Cargo.lock is excluded by !**/*.lock
📒 Files selected for processing (84)
Cargo.toml
mlm_core/Cargo.toml
mlm_core/src/audiobookshelf.rs
mlm_core/src/autograbber.rs
mlm_core/src/cleaner.rs
mlm_core/src/config.rs
mlm_core/src/config_impl.rs
mlm_core/src/context.rs
mlm_core/src/exporter.rs
mlm_core/src/lib.rs
mlm_core/src/linker/common.rs
mlm_core/src/linker/duplicates.rs
mlm_core/src/linker/folder.rs
mlm_core/src/linker/libation_cats.rs
mlm_core/src/linker/mod.rs
mlm_core/src/linker/torrent.rs
mlm_core/src/lists/goodreads.rs
mlm_core/src/lists/mod.rs
mlm_core/src/lists/notion.rs
mlm_core/src/logging.rs
mlm_core/src/metadata/mam_meta.rs
mlm_core/src/metadata/mod.rs
mlm_core/src/qbittorrent.rs
mlm_core/src/runner.rs
mlm_core/src/snatchlist.rs
mlm_core/src/stats.rs
mlm_core/src/torrent_downloader.rs
mlm_db/Cargo.toml
mlm_db/src/lib.rs
mlm_db/src/v03.rs
mlm_mam/Cargo.toml
mlm_meta/Cargo.toml
mlm_meta/tests/mock_openlibrary.rs
mlm_web_askama/Cargo.toml
mlm_web_askama/build.rs
mlm_web_askama/src/api/mod.rs
mlm_web_askama/src/api/search.rs
mlm_web_askama/src/api/torrent.rs
mlm_web_askama/src/lib.rs
mlm_web_askama/src/pages/config.rs
mlm_web_askama/src/pages/duplicate.rs
mlm_web_askama/src/pages/errors.rs
mlm_web_askama/src/pages/events.rs
mlm_web_askama/src/pages/index.rs
mlm_web_askama/src/pages/list.rs
mlm_web_askama/src/pages/lists.rs
mlm_web_askama/src/pages/mod.rs
mlm_web_askama/src/pages/replaced.rs
mlm_web_askama/src/pages/search.rs
mlm_web_askama/src/pages/selected.rs
mlm_web_askama/src/pages/torrent.rs
mlm_web_askama/src/pages/torrent_edit.rs
mlm_web_askama/src/pages/torrents.rs
mlm_web_askama/src/tables.rs
mlm_web_askama/templates/base.html
mlm_web_askama/templates/pages/config.html
mlm_web_askama/templates/pages/duplicate.html
mlm_web_askama/templates/pages/errors.html
mlm_web_askama/templates/pages/events.html
mlm_web_askama/templates/pages/index.html
mlm_web_askama/templates/pages/list.html
mlm_web_askama/templates/pages/lists.html
mlm_web_askama/templates/pages/replaced.html
mlm_web_askama/templates/pages/search.html
mlm_web_askama/templates/pages/selected.html
mlm_web_askama/templates/pages/torrent.html
mlm_web_askama/templates/pages/torrent_edit.html
mlm_web_askama/templates/pages/torrent_mam.html
mlm_web_askama/templates/pages/torrents.html
mlm_web_askama/templates/partials/cost_icon.html
mlm_web_askama/templates/partials/filter.html
mlm_web_askama/templates/partials/flag_icons.html
mlm_web_askama/templates/partials/mam_torrents.html
server/Cargo.toml
server/src/bin/libation_unmapped_categories.rs
server/src/lib.rs
server/src/main.rs
server/src/web/pages/config.rs
server/tests/cleaner_test.rs
server/tests/common/mod.rs
server/tests/config_impl.rs
server/tests/linker_folder_test.rs
server/tests/linker_torrent_test.rs
server/tests/metadata_integration.rs
💤 Files with no reviewable changes (2)
- server/src/lib.rs
- server/src/web/pages/config.rs
```toml
futures = "0.3"
htmlentity = "1.3.2"
itertools = "0.14.0"
lava_torrent = { git = "https://github.com/StirlingMouse/lava_torrent.git" }
```
🧹 Nitpick | 🔵 Trivial
Pin git dependencies to specific commits for reproducible builds.
Multiple git dependencies (lava_torrent, native_db, qbit, sanitize-filename) reference branches without commit hashes. This can cause builds to break or behave differently when upstream branches are updated.
♻️ Suggested approach
```diff
-lava_torrent = { git = "https://github.com/StirlingMouse/lava_torrent.git" }
+lava_torrent = { git = "https://github.com/StirlingMouse/lava_torrent.git", rev = "<commit-hash>" }
```

Apply similar changes to native_db, qbit, and sanitize-filename.
Also applies to: 23-23, 27-27, 32-32
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@mlm_core/Cargo.toml` at line 16, The git dependencies in Cargo.toml (e.g.,
the lava_torrent entry) should be pinned to specific commit SHAs to ensure
reproducible builds; update each git dependency line (lava_torrent, native_db,
qbit, sanitize-filename) to include the rev = "<commit-sha>" (or replace branch
URL with the exact commit) instead of relying on a branch head so the dependency
fetch is deterministic and stable.
mlm_core/Cargo.toml
Outdated
```toml
qbit = { git = "https://github.com/StirlingMouse/qbittorrent-webui-api.git" }
quick-xml = { version = "0.38.0", features = ["serialize"] }
regex = "1.12.2"
reqwest = { version = "0.12.20", default-features = false, features = ["json", "default-tls"] }
```
🧹 Nitpick | 🔵 Trivial
Standardize TLS backend across workspace.
The workspace has inconsistent TLS backends: mlm_core and mlm_web_askama use default-tls (OpenSSL), while mlm_mam and mlm_meta use rustls-tls. Mixing backends increases binary size and complicates dependency management. Consider selecting one backend and applying it consistently across all crates.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@mlm_core/Cargo.toml` at line 30, The reqwest dependency in mlm_core uses the
OpenSSL backend (features = ["json", "default-tls"]); change it to use rustls by
replacing the feature "default-tls" with "rustls-tls" (keep default-features =
false and "json"), and make the same change for any other crates that currently
use default-tls (e.g., mlm_web_askama) so all workspace crates (mlm_core,
mlm_web_askama, mlm_mam, mlm_meta) consistently depend on reqwest with features
= ["json","rustls-tls"] to standardize the TLS backend and reduce binary size
and dependency duplication.
mlm_core/Cargo.toml
Outdated
```toml
    "macros",
    "serde",
] }
tokio = { version = "1.45.1", features = ["sync", "macros", "full"] }
```
🧹 Nitpick | 🔵 Trivial
Redundant tokio features.
The "full" feature already includes "sync" and "macros". Specifying them separately is harmless but adds noise.
♻️ Proposed simplification
```diff
-tokio = { version = "1.45.1", features = ["sync", "macros", "full"] }
+tokio = { version = "1.45.1", features = ["full"] }
```

📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

```toml
tokio = { version = "1.45.1", features = ["full"] }
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@mlm_core/Cargo.toml` at line 46, The tokio dependency declaration is
redundant because the "full" feature includes "sync" and "macros"; edit the
tokio entry in Cargo.toml (the line with tokio = { version = "1.45.1", features
= [...] }) to remove the explicit "sync" and "macros" features and leave only
features = ["full"] (or omit features entirely if you want default), saving
noise while preserving behavior.
```rust
impl TorrentSearch {
    pub fn mam_search(&self) -> String {
        let mut url: Url = "https://www.myanonamouse.net/tor/browse.php?thumbnail=true"
            .parse()
            .unwrap();

        {
            let mut query = url.query_pairs_mut();
            if let Some(text) = &self.query {
                query.append_pair("tor[text]", text);
            }

            for srch_in in &self.search_in {
                query.append_pair(&format!("tor[srchIn][{}]", srch_in.as_str()), "true");
            }
            let search_in = match self.kind {
                Type::Bookmarks => "bookmarks",
                _ => "torrents",
            };
            query.append_pair("tor[searchIn]", search_in);
            let search_type = match (self.kind, self.cost) {
                (Type::Freeleech, _) => "fl",
                (_, Cost::Free) => "fl-VIP",
                _ => "all",
            };
            query.append_pair("tor[searchType]", search_type);
            let sort_type = match self.kind {
                Type::New => "dateDesc",
                _ => "",
            };
            if !sort_type.is_empty() {
                query.append_pair("tor[sort_type]", sort_type);
            }
            for cat in self
                .filter
                .edition
                .categories
                .audio
                .clone()
                .unwrap_or_else(AudiobookCategory::all)
                .into_iter()
                .map(|c: AudiobookCategory| c.to_id())
            {
                query.append_pair("tor[cat][]", &cat.to_string());
            }
            for cat in self
                .filter
                .edition
                .categories
                .ebook
                .clone()
                .unwrap_or_else(EbookCategory::all)
                .into_iter()
                .map(|c: EbookCategory| c.to_id())
            {
                query.append_pair("tor[cat][]", &cat.to_string());
            }
            for lang in &self.filter.edition.languages {
                query.append_pair("tor[browse_lang][]", &lang.to_id().to_string());
            }

            let (flags_is_hide, flags) = self.filter.edition.flags.as_search_bitfield();
            if !flags.is_empty() {
                query.append_pair(
                    "tor[browseFlagsHideVsShow]",
                    if flags_is_hide { "0" } else { "1" },
                );
            }
            for flag in flags {
                query.append_pair("tor[browseFlags][]", &flag.to_string());
            }

            if self.filter.edition.min_size.bytes() > 0 || self.filter.edition.max_size.bytes() > 0
            {
                query.append_pair("tor[unit]", "1");
            }
            if self.filter.edition.min_size.bytes() > 0 {
                query.append_pair(
                    "tor[minSize]",
                    &self.filter.edition.min_size.bytes().to_string(),
                );
            }
            if self.filter.edition.max_size.bytes() > 0 {
                query.append_pair(
                    "tor[maxSize]",
                    &self.filter.edition.max_size.bytes().to_string(),
                );
            }

            if let Some(uploaded_after) = self.filter.uploaded_after {
                query.append_pair(
                    "tor[startDate]",
                    &uploaded_after.format(&DATE_FORMAT).unwrap(),
                );
            }
            if let Some(uploaded_before) = self.filter.uploaded_before {
                query.append_pair(
                    "tor[endDate]",
                    &uploaded_before.format(&DATE_FORMAT).unwrap(),
                );
            }
            if let Some(min_seeders) = self.filter.min_seeders {
                query.append_pair("tor[minSeeders]", &min_seeders.to_string());
            }
            if let Some(max_seeders) = self.filter.max_seeders {
                query.append_pair("tor[maxSeeders]", &max_seeders.to_string());
            }
            if let Some(min_leechers) = self.filter.min_leechers {
                query.append_pair("tor[minLeechers]", &min_leechers.to_string());
            }
            if let Some(max_leechers) = self.filter.max_leechers {
                query.append_pair("tor[maxLeechers]", &max_leechers.to_string());
            }
            if let Some(min_snatched) = self.filter.min_snatched {
                query.append_pair("tor[minSnatched]", &min_snatched.to_string());
            }
            if let Some(max_snatched) = self.filter.max_snatched {
                query.append_pair("tor[maxSnatched]", &max_snatched.to_string());
            }
        }

        url.to_string()
    }
```
mam_search() drops supported search state.
This helper never encodes sort_by, edition.media_type, Type::Mine, Type::Uploader(_), or the musicology/radio category sets. A valid TorrentSearch can therefore generate a MaM URL for a materially different search than the rest of the code evaluates. Either map every supported variant here or make the method fail fast on unsupported combinations instead of silently broadening the query.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@mlm_core/src/config_impl.rs` around lines 177 - 299, mam_search() currently
omits several search parameters (sort_by, filter.edition.media_type, Type::Mine,
Type::Uploader(_), and the musicology/radio category sets) which causes it to
generate a broadened MaM URL; update TorrentSearch::mam_search to either encode
each of these supported variants into the URL query (map sort_by ->
tor[sort_type]/tor[sort_by] as appropriate, include edition.media_type in the
tor[cat] or tor[media] params, handle Type::Mine and Type::Uploader(_) by adding
the appropriate tor[owner] or tor[user] query keys, and include musicology/radio
categories when iterating categories) or make mam_search return an error/None
when encountering any unsupported variant so callers fail fast; locate and
modify mam_search, the sort_by enum/field, Type enum variants (Mine, Uploader),
and the edition.categories sets (musicology, radio) to add the missing mappings
or early-return behavior.
```rust
pub trait ContextExt {
    fn ssr(&self) -> &SsrBackend;
    fn db(&self) -> &std::sync::Arc<native_db::Database<'static>>;
    fn mam(&self) -> anyhow::Result<std::sync::Arc<mlm_mam::api::MaM<'static>>>;
    fn metadata(&self) -> &std::sync::Arc<metadata::MetadataService>;
}

impl ContextExt for Context {
    fn ssr(&self) -> &SsrBackend {
        self.backend
            .as_ref()
            .expect("Backend not initialized")
            .as_any()
            .downcast_ref::<SsrBackend>()
            .expect("Failed to downcast to SsrBackend")
    }
```
Make the SSR lookup fallible.
These public accessors hard-fail on both the missing-backend and wrong-backend cases. The Context::backend field is Option<Arc<dyn Backend>>, so None and non-SsrBackend implementations are type-safe, but the .expect() calls will panic. Since db(), mam(), and metadata() all depend on ssr(), any non-SSR Context will panic instead of returning a recoverable error. Return Result/Option from ssr() and let the call sites unwrap locally.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@mlm_core/src/lib.rs` around lines 40 - 55, Change the ContextExt::ssr
accessor to be fallible (return Option<&SsrBackend> or
anyhow::Result<&SsrBackend>) instead of unconditionally panicking: check
Context::backend for None and perform a downcast_ref::<SsrBackend>(), returning
a None/Err on failure rather than calling expect; then update the dependent
accessors db(), mam(), and metadata() to propagate or map that fallible result
(or accept Option) so they no longer call ssr() which panics—handle
unwrapping/errors at each call site where an SSR-specific backend is actually
required.
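A fallible version of the accessor might look like the following self-contained sketch. `Backend`, `SsrBackend`, and `Context` here are stand-ins modeled on the snippet above, not the crate's real types:

```rust
use std::any::Any;
use std::sync::Arc;

// Minimal trait with the as_any hook the real Backend trait appears to expose.
trait Backend {
    fn as_any(&self) -> &dyn Any;
}

struct SsrBackend;
impl Backend for SsrBackend {
    fn as_any(&self) -> &dyn Any { self }
}

struct OtherBackend;
impl Backend for OtherBackend {
    fn as_any(&self) -> &dyn Any { self }
}

struct Context {
    backend: Option<Arc<dyn Backend>>,
}

impl Context {
    // Returns None for both the missing-backend and wrong-backend cases
    // instead of panicking; callers decide where to unwrap.
    fn ssr(&self) -> Option<&SsrBackend> {
        self.backend.as_ref()?.as_any().downcast_ref::<SsrBackend>()
    }
}

fn main() {
    let ctx = Context { backend: Some(Arc::new(SsrBackend)) };
    assert!(ctx.ssr().is_some());
    let wrong = Context { backend: Some(Arc::new(OtherBackend)) };
    assert!(wrong.ssr().is_none());
}
```

`db()`, `mam()`, and `metadata()` would then propagate the `Option` (or map it into an `anyhow::Result`) rather than inheriting a panic from `ssr()`.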
```rust
now.duration_since(SystemTime::UNIX_EPOCH)
    .unwrap()
    .as_secs()
```
🧹 Nitpick | 🔵 Trivial
The unwrap() is practically safe, but the failure path could carry context.
duration_since(UNIX_EPOCH) only fails if the system clock is before January 1, 1970—virtually impossible in practice. If you prefer defensive coding, consider using expect() with a message:
🛡️ Suggested improvement
```diff
 println!(
     "cargo:rustc-env=DATE={}",
     now.duration_since(SystemTime::UNIX_EPOCH)
-        .unwrap()
+        .expect("system clock is before UNIX epoch")
         .as_secs()
 );
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@mlm_web_askama/build.rs` around lines 7 - 9, Replace the unsafe unwrap() call
on now.duration_since(SystemTime::UNIX_EPOCH) with a defensive expect() that
includes a clear message; locate the expression using
now.duration_since(SystemTime::UNIX_EPOCH).unwrap().as_secs() in build.rs and
change unwrap() to expect("system time is before UNIX_EPOCH when computing build
timestamp") so the code still returns as_secs() but yields a helpful error if
the impossible occurs.
```toml
native_db = { git = "https://github.com/StirlingMouse/native_db.git", branch = "0.8.x" }
once_cell = "1.21.3"
qbit = { git = "https://github.com/StirlingMouse/qbittorrent-webui-api.git" }
```
🧹 Nitpick | 🔵 Trivial
Same git dependency pinning concern applies here.
As noted in mlm_core/Cargo.toml, consider pinning native_db and qbit to specific commits for build reproducibility.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@mlm_web_askama/Cargo.toml` around lines 17 - 19, The Cargo.toml currently
references git dependencies by branch for native_db and qbit (native_db = { git
= "https://github.com/StirlingMouse/native_db.git", branch = "0.8.x" } and qbit
= { git = "https://github.com/StirlingMouse/qbittorrent-webui-api.git" });
change these to pin explicit commit SHAs using the rev (or tag) field to ensure
reproducible builds (e.g., replace branch = "0.8.x" with rev = "<commit-sha>"
and add rev = "<commit-sha>" for qbit), obtaining the correct commit SHAs from
the respective repositories and updating Cargo.toml accordingly.
```rust
for torrent in batch {
    match tag_filter.filter.matches_lib(torrent) {
        Ok(matches) => {
            if matches {
                matched_torrents.push(torrent);
            }
        }
        Err(err) => {
            let Some(mam_id) = torrent.mam_id else {
                continue;
            };
            info!("need to ask mam due to: {err}");

            let permit = mam_semaphore
                .clone()
                .acquire_owned()
                .await
                .map_err(anyhow::Error::new)?;

            let mam = context.mam()?;
            let Some(mam_torrent) = mam.get_torrent_info_by_id(mam_id).await?
            else {
                drop(permit);
                warn!("could not get torrent from mam");
                continue;
            };

            drop(permit);

            let new_meta = mam_torrent.as_meta()?;
            if new_meta != torrent.meta {
                update_torrent_meta(
                    &config,
                    context.db(),
                    context.db().rw_async().await?,
                    Some(&mam_torrent),
                    torrent.clone(),
                    new_meta,
                    false,
                    false,
                    &context.events,
                )
                .await?;
            }

            if tag_filter.filter.matches(&mam_torrent) {
                matched_torrents.push(torrent);
            }
        }
    }
}
```
MAM requests are processed sequentially, not concurrently.
The semaphore is acquired and released within the same loop iteration, which means MAM requests are still processed sequentially. The MAX_CONCURRENT_MAM_REQUESTS constant suggests concurrent processing was intended.
To achieve actual concurrency, consider spawning tasks and using join_all or similar:
🔧 Suggested approach for concurrent processing
```diff
-for torrent in batch {
-    match tag_filter.filter.matches_lib(torrent) {
-        Ok(matches) => {
-            if matches {
-                matched_torrents.push(torrent);
-            }
-        }
-        Err(err) => {
-            let Some(mam_id) = torrent.mam_id else {
-                continue;
-            };
-            info!("need to ask mam due to: {err}");
-
-            let permit = mam_semaphore
-                .clone()
-                .acquire_owned()
-                .await
-                .map_err(anyhow::Error::new)?;
-
-            let mam = context.mam()?;
-            let Some(mam_torrent) = mam.get_torrent_info_by_id(mam_id).await?
-            else {
-                drop(permit);
-                warn!("could not get torrent from mam");
-                continue;
-            };
-            // ... rest of processing
-        }
-    }
-}
+// Collect torrents needing MAM lookup
+let mut mam_futures = vec![];
+for torrent in batch {
+    match tag_filter.filter.matches_lib(torrent) {
+        Ok(matches) => {
+            if matches {
+                matched_torrents.push(torrent);
+            }
+        }
+        Err(err) => {
+            let Some(mam_id) = torrent.mam_id else {
+                continue;
+            };
+            info!("need to ask mam due to: {err}");
+            let permit = mam_semaphore.clone();
+            let ctx = context.clone();
+            let cfg = config.clone();
+            let t = torrent.clone();
+            mam_futures.push(async move {
+                let _permit = permit.acquire_owned().await?;
+                // Process MAM request...
+                Ok::<_, anyhow::Error>((t, /* result */))
+            });
+        }
+    }
+}
+// Process MAM lookups concurrently
+let results = futures::future::join_all(mam_futures).await;
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@mlm_web_askama/src/pages/config.rs` around lines 96 - 146, The loop currently
acquires mam_semaphore and does the MAM work inline, causing sequential MAM
requests; refactor the Err branch so it spawns a concurrent task/future (e.g.,
via tokio::spawn or collecting futures for join_all) that captures mam_id,
clones mam_semaphore, context, config, tag_filter (or necessary refs) and within
the task acquires permit = mam_semaphore.clone().acquire_owned().await, performs
mam.get_torrent_info_by_id(mam_id).await, calls update_torrent_meta(...) if
needed, evaluates tag_filter.filter.matches(&mam_torrent) and returns whether
the original torrent should be pushed; then collect task results and merge
matching torrents into matched_torrents after join_all; keep synchronous
matches_lib Ok branch as-is and preserve error handling/logging.
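The bounded-concurrency shape the review is asking for can be illustrated without any of the project's types. In the real code the workers would be tokio tasks gated by a `tokio::sync::Semaphore` and joined with `futures::future::join_all`; this std-threads sketch shows the same pattern — a shared job queue drained by a fixed number of concurrent workers (the "permits"), with results merged at the end:

```rust
use std::sync::mpsc;
use std::sync::{Arc, Mutex};
use std::thread;

// Process all ids with at most `max_concurrent` workers running at once.
// The doubling stands in for the per-torrent MAM lookup.
fn process_concurrently(ids: Vec<u64>, max_concurrent: usize) -> Vec<u64> {
    let jobs = Arc::new(Mutex::new(ids));
    let (tx, rx) = mpsc::channel();

    let mut workers = Vec::new();
    for _ in 0..max_concurrent {
        let jobs = Arc::clone(&jobs);
        let tx = tx.clone();
        workers.push(thread::spawn(move || loop {
            // Take the next job, or stop when the queue is empty.
            let id = match jobs.lock().unwrap().pop() {
                Some(id) => id,
                None => break,
            };
            tx.send(id * 2).unwrap();
        }));
    }
    drop(tx); // close the channel once all worker clones finish

    // Equivalent of join_all: gather every result, then join the workers.
    let mut results: Vec<u64> = rx.iter().collect();
    for w in workers {
        w.join().unwrap();
    }
    results.sort();
    results
}

fn main() {
    assert_eq!(process_concurrently(vec![1, 2, 3, 4], 2), vec![2, 4, 6, 8]);
}
```

The key difference from the original loop is that acquiring a "permit" no longer blocks the only execution path — up to `max_concurrent` lookups are in flight at once.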
```rust
action => {
    eprintln!("unknown action: {action}");
}
```
Unknown action is silently ignored.
Using eprintln! for an unknown action makes debugging difficult in production. Consider returning an error or at least using tracing::warn! for consistency with the rest of the codebase.
🔧 Proposed fix
```diff
 action => {
-    eprintln!("unknown action: {action}");
+    return Err(anyhow::Error::msg(format!("unknown action: {action}")).into());
 }
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@mlm_web_askama/src/pages/config.rs` around lines 189 - 192, The match arm
that currently does `eprintln!("unknown action: {action}");` should not silently
print to stderr; replace the `eprintln!` with a structured `tracing::warn!` call
and return an explicit error/BadRequest from the enclosing handler instead of
falling through. Locate the match arm handling `action` (the branch with `action
=> { ... }`) and change it to call `tracing::warn!("unknown action: {}",
action)` and then return an appropriate `Err(...)` or HTTP 400 response
(matching the function's return type) so callers/clients and logs consistently
surface the problem. Ensure the returned error type matches the function
signature (e.g., convert to the handler's Error/Result or return an Actix/warp
HTTP bad request).
```toml
serde-nested-json = "0.1.3"
sublime_fuzzy = "0.7.0"
thiserror = "2.0.17"
serde_json = "1.0"
```
🧹 Nitpick | 🔵 Trivial
Minor version inconsistency with other crates.
serde_json is specified as "1.0" here but "1.0.140" in mlm_core and mlm_web_askama. While Cargo resolves this correctly, using consistent version specs across the workspace improves maintainability.
♻️ Suggested fix
```diff
-serde_json = "1.0"
+serde_json = "1.0.140"
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@server/Cargo.toml` at line 27, The serde_json dependency is declared as "1.0"
which is inconsistent with the more specific "1.0.140" used in mlm_core and
mlm_web_askama; update the serde_json entry in this Cargo.toml to the exact same
version string ("1.0.140") so the workspace uses a consistent version spec
across crates (search for the serde_json line and replace its version to match
mlm_core/mlm_web_askama).
- Add Leptos UI
- Migrate homepage to leptos
- Improve leptos event page to match askama
- Add live events
- Add torrent_detail page to leptos
- Remove leptos