
Add folder linkers for libation and nextory #9

Open
StirlingMouse wants to merge 14 commits into main from folder-linkers

Conversation


@StirlingMouse StirlingMouse commented Mar 2, 2026

Summary by CodeRabbit

  • New Features

    • Split library linking into separate Torrent and Folder linkers; folder linking adds Libation/Nextory imports and relinking/metadata-refresh actions
    • Broader identifier support (ids map: MAM/ABS/ISBN/ASIN) and source-aware update events
  • Improvements

    • Edition-based filtering and richer library/search filters; improved duplicate ranking and selection
    • UI: conditional MaM/ABS links, separate Torrent/Folder linker controls, optional fields in editors
    • Enhanced title/series/name parsing and metadata normalization
  • Tests

    • Extensive new tests for linking, parsing, and metadata diffing

Stack


coderabbitai bot commented Mar 2, 2026

Note

Reviews paused

It looks like this branch is under active development. To avoid overwhelming you with review comments due to an influx of new commits, CodeRabbit has automatically paused this review. You can configure this behavior by changing the reviews.auto_review.auto_pause_after_reviewed_commits setting.

Use the following commands to manage reviews:

  • @coderabbitai resume to resume automatic reviews.
  • @coderabbitai review to trigger a single review.

Walkthrough

Add a v18 DB model with id-map support, migrate public type aliases and many conversions to v18, replace single linker with new torrent and folder linkers, nest edition-based filters in config, move identifier usage to ids map/methods, and add extensive linker, parser, metadata, and integration tests.

Changes

Cohort / File(s) Summary
DB: v18 model & exports
mlm_db/src/v18.rs, mlm_db/src/lib.rs
Introduce v18 data model (Torrent, TorrentMeta, ListItem, etc.), re-export Database, add ids constants, and switch public type aliases and migration wiring to v18.
DB: impls & meta adjustments
mlm_db/src/impls.rs, mlm_db/src/impls/lists.rs, mlm_db/src/impls/meta.rs, mlm_db/src/impls/series.rs
Add lists module; ListItemTorrent::id() helper; TorrentMeta exposes mam_id() and diffs now use ids map; TorrentMetaField gains Ids/Tags; Series gets Display impl.
Legacy conversion layers
mlm_db/src/v03.rs, mlm_db/src/v04.rs, mlm_db/src/v05.rs, mlm_db/src/v08.rs, mlm_db/src/v09.rs, mlm_db/src/v10.rs, mlm_db/src/v13.rs, mlm_db/src/v17.rs
Add numerous From<v18::...> impls and conversion mappings to support older versions, update derives/defaults, and map new ids/fields into legacy shapes.
DB tests
mlm_db/tests/meta_diff.rs
Add tests for TorrentMeta::diff verifying ids, size, title, series, and VIP expiry behavior.
MAM parsing & metadata
mlm_mam/src/meta.rs, mlm_mam/src/search.rs, mlm_mam/src/user_torrent.rs
Normalize names/series/title parsing; replace single mam_id with ids: BTreeMap (mam/ISBN/ASIN), add tags/description/uploaded_at propagation, and adapt mapping/lookup flows.
MAM enums, serde & deps
mlm_mam/src/enums.rs, mlm_mam/src/serde.rs, mlm_mam/Cargo.toml
UserClass now implements FromStr/TryFrom; add serde deserializer tests; enable reqwest json feature and add vendored openssl.
Parsing utilities & tests
mlm_parse/src/lib.rs, mlm_parse/tests/parse_tests.rs
Expose new regexes and parse_series_from_title, extend edition/title parsing and add related unit tests.
Linker split: torrent linker
server/src/linker/mod.rs, server/src/linker/torrent.rs
Introduce torrent linker APIs and MaM trait (MaMApi), relink/refresh flows, file-link planning, library discovery, and orchestration for torrent linking.
Linker: folder linker & utilities
server/src/linker/folder.rs, server/src/linker/common.rs, server/src/linker/duplicates.rs, server/src/linker/libation_cats.rs
Add folder linker (Libation/Nextory parsing), cross-platform file ops (hard_link/copy/symlink, map_path, library_dir), duplicate finding/ranking, and Audible-es category mapping utilities.
Server wiring & stats
server/src/lib.rs, server/src/main.rs, server/src/stats.rs
Expose new server modules, split linker channels into torrent_linker_* and folder_linker_*, update triggers/stats and startup wiring to use separate linkers.
Config / filtering refactor
server/src/config.rs, server/src/config_impl.rs
Introduce EditionFilter, LibraryOptions, LibraryTagFilters; change Library variants to ByRipDir/ByDownloadDir/ByCategory; move matching logic into edition-based filters and add matching helpers.
Autograbber / Snatchlist / Downloader
server/src/autograbber.rs, server/src/snatchlist.rs, server/src/torrent_downloader.rs
Adapt code to edition nesting, make mam_torrent optional in places, use mam_id() accessor, add event source tracking, and remove deprecated abs_id/goodreads_id/request_matadata_update fields.
Audiobookshelf & ABS integration
server/src/audiobookshelf.rs
ABS lookups/updates now use meta.ids[ids::ABS]; metadata payloads source description and ISBN/ASIN from meta; signatures simplified to accept only meta.
Cleaner & duplicate handling
server/src/cleaner.rs
Use rank_torrents for selection, clear ABS via meta.ids, use mam_id() and torrent.id in logs/events, and update removal/file cleanup paths.
Lists, web pages & templates
server/src/lists/*, server/src/web/pages/*, server/templates/*
ListItemTorrent gains torrent_id: Option<_>; templates/pages updated to use torrent.mam_id() or torrent.id(), conditional MaM links, use meta.ids.get(ids::ABS), rename RemovedFromMam → RemovedFromTracker, add relink action and Updated source display, and switch filters to edition.*.
QBittorrent abstraction
server/src/qbittorrent.rs
Add QbitApi trait (async methods) with impls for qbit::Api, &T, and Arc<T>; generalize caching/category helpers to accept the trait.
Tests & test utilities
server/tests/*, server/tests/common/mod.rs
Add many unit/integration tests and test utilities (TestDb, MockFs, MockTorrentBuilder), including linker-folder/torrent tests and cleaner tests.
Removed/replace: old linker
server/src/linker.rs (removed)
Remove monolithic linker.rs; replaced by modular server/src/linker/{common,duplicates,folder,torrent,libation_cats} implementation.

Sequence Diagram(s)

sequenceDiagram
  participant Qbit as Qbit (client)
  participant Linker as TorrentLinker
  participant MaM as MaM API
  participant DB as Database
  participant FS as Filesystem

  Qbit->>Linker: request torrents / files
  Linker->>Qbit: fetch torrents/files
  Linker->>MaM: get_torrent_info(id/hash)
  MaM-->>Linker: TorrentMeta (with ids)
  Linker->>DB: upsert/lookup Torrent & Meta
  DB-->>Linker: existing torrent(s)
  Linker->>FS: calculate link plan & perform hardlink/copy/symlink
  FS-->>Linker: file ops result
  Linker->>DB: record events (Linked/Updated/Removed)
  DB-->>Linker: ack

Estimated code review effort

🎯 5 (Critical) | ⏱️ ~120 minutes

"I hopped through code with ears held high,
New IDs and linkers beneath the sky,
Two paths to link, old versions made friends,
Tests aplenty—my burrow mends,
I thumped a drum for every file and line!" 🐇✨



@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 35

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (1)
mlm_db/src/impls/meta.rs (1)

40-107: ⚠️ Potential issue | 🟡 Minor

diff() is missing tags comparison support.

TorrentMetaField::Tags exists, but tag mutations are never pushed into the diff output.

🧩 Proposed fix
         if self.categories != other.categories {
             diff.push(TorrentMetaDiff {
                 field: TorrentMetaField::Categories,
                 from: self.categories.join(", "),
                 to: other.categories.join(", "),
             });
         }
+        if self.tags != other.tags {
+            diff.push(TorrentMetaDiff {
+                field: TorrentMetaField::Tags,
+                from: self.tags.join(", "),
+                to: other.tags.join(", "),
+            });
+        }
         if self.language != other.language {
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@mlm_db/src/impls/meta.rs` around lines 40 - 107, The diff() implementation in
TorrentMeta (function diff) never compares the tags field, so changes to tags
aren't emitted; add a tags comparison similar to the categories block: check if
self.tags != other.tags and, when different, push a TorrentMetaDiff with field
TorrentMetaField::Tags and use a human-readable from/to (e.g., self.tags.join(",
") and other.tags.join(", ") or map/unwrap if tags is Option) so tag mutations
are recorded; insert this new block alongside the other field checks (for
example before the language check).
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@mlm_db/src/impls/lists.rs`:
- Around line 4-8: The id() accessor currently returns a String and collapses
missing torrent_id/mam_id to "", causing identifier collisions; change its
signature from pub fn id(&self) -> String to pub fn id(&self) -> Option<String>,
remove the final unwrap_or_default(), and return the combined Option
(self.torrent_id.clone().or_else(|| self.mam_id.map(|id| id.to_string()))).
Update all call sites that assume a String to handle Option<String> (e.g.,
propagate the option, return None, or explicitly handle missing IDs) so missing
IDs are represented as None instead of an empty string.

In `@mlm_db/src/v03.rs`:
- Around line 231-235: The Default implementation for Timestamp currently
returns the current time via Timestamp::now(), which is non-deterministic; add a
brief doc comment on the impl (or on Timestamp::default()) clarifying that
Default::default() yields the current time (not epoch/zero) to make this
intentional behavior explicit and reference the analogous Uuid::default()
pattern for consistency.
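A minimal sketch of what that doc comment could look like. The `Timestamp` wrapper below is a stand-in for the crate's actual type; only the `now()` delegation and the explanatory comment mirror the suggestion:

```rust
use std::time::SystemTime;

pub struct Timestamp(pub SystemTime);

impl Timestamp {
    pub fn now() -> Self {
        Timestamp(SystemTime::now())
    }
}

/// Intentionally non-deterministic: `Timestamp::default()` yields the
/// *current* time, not epoch/zero, mirroring the analogous `Uuid::default()`
/// pattern used elsewhere in the crate.
impl Default for Timestamp {
    fn default() -> Self {
        Timestamp::now()
    }
}

fn main() {
    // A defaulted timestamp is "now", so it can never predate the Unix epoch.
    assert!(Timestamp::default().0 >= SystemTime::UNIX_EPOCH);
    println!("ok");
}
```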

In `@mlm_db/src/v04.rs`:
- Around line 294-301: The From<v18::ListItemTorrent> for ListItemTorrent impl
uses t.mam_id.unwrap() in the conversion; either make this downgrade explicit by
replacing the unwrap with unimplemented!() to indicate the conversion path is
not supported, or add a concise comment above the impl (referencing
From<v18::ListItemTorrent> for ListItemTorrent and the field mam_id) explaining
why mam_id is guaranteed Some in practice and why this code path is unreachable,
so readers understand the safety of the unwrap.

In `@mlm_db/src/v10.rs`:
- Around line 384-390: The From implementation for v18::MetadataSource in impl
From<v18::MetadataSource> for MetadataSource currently uses a wildcard arm that
silently maps any future v18 variants to Manual; change this to handle each
v18::MetadataSource variant explicitly (list out v18::MetadataSource::Mam,
::Manual and any other known variants) and either map them intentionally or use
a clear compatibility path (e.g., an explicit fallback arm that contains a
comment and/or an unreachable!/panic!/log to mark lossy downgrade) so that
conversion in the From::from for MetadataSource is explicit and future enum
additions won’t be silently misclassified. Ensure you reference
v18::MetadataSource and the impl From<v18::MetadataSource> for MetadataSource
while making the mapping explicit.
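An explicit mapping in the spirit of this suggestion could look like the sketch below. The enum shapes (including the `Folder` variant) are illustrative stand-ins, not the crate's real definitions; the point is that listing every variant turns a future v18 addition into a compile error rather than a silent misclassification:

```rust
// Illustrative stand-ins for the real v18 / v17 enums.
#[derive(Debug, PartialEq)]
enum MetadataSourceV18 {
    Mam,
    Manual,
    Folder, // hypothetical newer variant with no v17 equivalent
}

#[derive(Debug, PartialEq)]
enum MetadataSource {
    Mam,
    Manual,
}

impl From<MetadataSourceV18> for MetadataSource {
    fn from(s: MetadataSourceV18) -> Self {
        match s {
            MetadataSourceV18::Mam => MetadataSource::Mam,
            MetadataSourceV18::Manual => MetadataSource::Manual,
            // Explicit, documented lossy downgrade: v17 has no Folder variant.
            MetadataSourceV18::Folder => MetadataSource::Manual,
        }
    }
}

fn main() {
    assert_eq!(MetadataSource::from(MetadataSourceV18::Mam), MetadataSource::Mam);
    assert_eq!(MetadataSource::from(MetadataSourceV18::Folder), MetadataSource::Manual);
    println!("ok");
}
```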

In `@mlm_db/src/v17.rs`:
- Line 545: The match arm for v18::TorrentMetaField::Tags currently calls
unimplemented!(), causing a runtime panic if a Tags value is encountered;
replace it with a safe conversion or an explicit unreachable handler. Locate the
conversion match that handles v18::TorrentMetaField (the arm
`v18::TorrentMetaField::Tags`) and either implement a mapping into the
appropriate v17 representation (e.g., convert tags into the equivalent v17 field
type or a Vec<String> as required by the v17 struct) or, if Tags truly cannot
appear, replace unimplemented!() with unreachable!("v18::TorrentMetaField::Tags
is not expected in conversion: <reason>") and add a short comment/documentation
proving why this branch is unreachable. Ensure the chosen fix uses the exact
symbol v18::TorrentMetaField::Tags and updates any return types or constructors
used by the surrounding conversion function accordingly.

In `@mlm_mam/src/meta.rs`:
- Around line 96-161: Remove the large commented-out Category enum block (the
commented declaration and all listed variants of Category, including
Unknown(u8)) from meta.rs to eliminate dead-code noise; if preservation is
needed, move that entire block into project docs or a HISTORY/CHANGES file
instead of leaving it commented in the source.

In `@mlm_mam/src/search.rs`:
- Around line 417-427: parse_isbn can return empty/whitespace-only strings
(e.g., "   " or "ASIN:"), which then get inserted into the IDs map; update
parse_isbn (function parse_isbn and its use of MaMTorrent.isbn) to trim
isbn_raw, then only return Some(&str) for isbn or asin when the trimmed value is
non-empty — for isbn skip values that are empty or start with "ASIN:"; for asin
strip_prefix("ASIN:"), trim and return None if the result is empty. Ensure both
branches return Option<&str> that is None for empty post-trim values.
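A self-contained sketch of the hardened `parse_isbn` described above. The signature and return shape are assumptions for illustration (the real function's types may differ); the behavior shown is the one the comment asks for — trim first, then treat empty or prefix-only values as absent:

```rust
/// Returns (isbn, asin). Empty or whitespace-only values become None
/// instead of polluting the ids map. Sketch only; not the crate's signature.
fn parse_isbn(isbn_raw: &str) -> (Option<&str>, Option<&str>) {
    let trimmed = isbn_raw.trim();
    if trimmed.is_empty() {
        return (None, None);
    }
    if let Some(asin) = trimmed.strip_prefix("ASIN:") {
        let asin = asin.trim();
        // "ASIN:" with nothing after it yields no id at all.
        return (None, if asin.is_empty() { None } else { Some(asin) });
    }
    (Some(trimmed), None)
}

fn main() {
    assert_eq!(parse_isbn("   "), (None, None));
    assert_eq!(parse_isbn("ASIN:"), (None, None));
    assert_eq!(parse_isbn("ASIN: B00ABC123 "), (None, Some("B00ABC123")));
    assert_eq!(parse_isbn(" 9781234567897 "), (Some("9781234567897"), None));
    println!("ok");
}
```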

In `@mlm_mam/src/user_torrent.rs`:
- Around line 168-171: The current closure casts c.id with `c.id as u8` which
truncates out-of-range IDs; update the mapping in the iterator (the closure
using Category::from_id and MetaError::UnknownCat) to perform a checked
conversion using `u8::try_from(c.id)` and propagate or map the conversion error
into a suitable MetaError so out-of-range IDs fail explicitly (preserve the
original c.id value in the error). Replace the direct cast in the closure with
the try_from call, then call Category::from_id on the resulting u8 and map
errors to MetaError::UnknownCat containing the original ID.
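The checked-conversion pattern suggested here can be sketched as follows. `MetaError` is simplified to a stand-in enum; the key change is `u8::try_from` in place of `as u8`, with the original id preserved in the error:

```rust
// Stand-in for the crate's error type; UnknownCat keeps the original id.
#[derive(Debug, PartialEq)]
enum MetaError {
    UnknownCat(u64),
}

/// Checked cast: out-of-range category ids fail explicitly instead of
/// being silently truncated by `as u8`.
fn category_id_checked(id: u64) -> Result<u8, MetaError> {
    u8::try_from(id).map_err(|_| MetaError::UnknownCat(id))
}

fn main() {
    assert_eq!(category_id_checked(42), Ok(42));
    // 300 would truncate to 44 with `as u8`; here it errors with the real id.
    assert_eq!(category_id_checked(300), Err(MetaError::UnknownCat(300)));
    println!("ok");
}
```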

In `@mlm_parse/src/lib.rs`:
- Around line 435-448: The two test functions
test_parse_series_pattern5_standalone_novel and
test_parse_series_pattern6_collection are empty and must be turned into real
assertions: call the existing parsing function (e.g., parse_series or whatever
public parse function in lib.rs is used by other tests) with representative
inputs that exercise Pattern 5 (titles containing "Standalone" or "Novel" and a
preceding series name) and Pattern 6 (titles containing "collection") and assert
the returned series/flags match expected values; add both positive cases (where
a match is expected) and at least one negative case to ensure the greedy
behavior is handled, using the same assertion style as other tests in this file
so the tests fail on regressions.
- Around line 141-147: The code currently forces `series_number` to be present
by using `?` on the chained `captures.get(...)` call, which drops optional
series-number matches; change that binding to produce an Option (e.g., let
series_number = captures.get(2).or(...).or(...); without `?`) and then match on
that Option: if Some(s) parse/normalize to f32 (or map to None on parse failure)
and produce Some((title, Some(number))); if None produce Some((title, None)).
Apply the same change to the second occurrence that handles the other
series-number branch (the block around the later `series_number` binding at
165-168), ensuring both paths treat the capture as optional and only return None
for the whole function on unrecoverable errors, not when the series number is
simply absent.
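The optional-capture idea can be illustrated without the crate's actual regexes. The sketch below uses plain string splitting as a stand-in parser; what it demonstrates is the suggested control flow — a missing or unparsable series number yields `Some((title, None))` rather than failing the whole parse:

```rust
/// Illustrative stand-in for the real series parser: "#N" suffix is optional.
fn parse_series(input: &str) -> Option<(String, Option<f32>)> {
    match input.rsplit_once(" #") {
        Some((title, num)) => {
            // Parse failure maps to None for the number, not for the whole call.
            let number = num.trim().parse::<f32>().ok();
            Some((title.trim().to_string(), number))
        }
        // No number at all: still a successful parse, number is simply absent.
        None => Some((input.trim().to_string(), None)),
    }
}

fn main() {
    assert_eq!(parse_series("Dune #1"), Some(("Dune".to_string(), Some(1.0))));
    assert_eq!(parse_series("A Standalone Novel"), Some(("A Standalone Novel".to_string(), None)));
    println!("ok");
}
```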

In `@server/src/autograbber.rs`:
- Around line 784-794: The Updated event currently sets mam_id from mam_torrent
only, losing mam_id when mam_torrent is None; change the mam_id determination so
it first uses mam_torrent.id when present and otherwise falls back to the mam_id
available on the metadata (the meta variable) before calling
write_event/Event::new for EventType::Updated, ensuring mam_id remains populated
for mam_id-indexed retrieval.

In `@server/src/cleaner.rs`:
- Around line 47-52: The code can panic when an empty batch is passed to
run_library_cleaner because batch.remove(0) is called without guarding
emptiness; update the guard around the batch to return early if batch.is_empty()
(or use batch.len() <= 1) before calling rank_torrents and before using
batch.remove(0) so that the subsequent use of keep and the loop over batch never
executes on an empty vector; update the check near the top of the block that
currently only returns for len() == 1 to handle empty batches as well and ensure
rank_torrents and the remove(0) call are only reached when batch has at least
one element.
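The guard described above can be sketched like this. Sorting stands in for `rank_torrents`, and the function name is hypothetical; the point is that `remove(0)` is only reachable when the batch has at least two entries:

```rust
/// Pick the torrent to keep and the rest to clean up. Returns None for
/// batches of 0 or 1 items, so `remove(0)` can never panic on an empty vec.
fn pick_keeper(mut batch: Vec<String>) -> Option<(String, Vec<String>)> {
    if batch.len() <= 1 {
        // Empty and single-entry batches have nothing to deduplicate.
        return None;
    }
    batch.sort(); // stand-in for rank_torrents
    let keep = batch.remove(0);
    Some((keep, batch))
}

fn main() {
    assert_eq!(pick_keeper(vec![]), None); // empty batch no longer panics
    assert_eq!(pick_keeper(vec!["a".into()]), None);
    let (keep, rest) = pick_keeper(vec!["b".into(), "a".into()]).unwrap();
    assert_eq!(keep, "a");
    assert_eq!(rest, vec!["b".to_string()]);
    println!("ok");
}
```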

In `@server/src/config_impl.rs`:
- Around line 377-393: The allow_audio and allow_ebook helpers currently only
inspect edition.categories and can return true even when edition.media_type
explicitly forbids that format; update both functions to also check
g.edition.media_type inside the closure (in addition to categories.audio /
categories.ebook) and only count a grab as allowing the format when media_type
is None or explicitly permits that format. Concretely, modify the closure in
allow_audio and allow_ebook to short-circuit false if g.edition.media_type
explicitly excludes "audio" / "ebook" (or does not include the format, depending
on how media_type is represented), otherwise fall back to the existing
categories check.
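A hypothetical shape for the combined check, assuming `media_type` is an optional string (the real representation may differ). A grab only counts as allowing audio when `media_type` is absent or explicitly "audio", and the category flag still has the final say:

```rust
// Simplified stand-in for the edition config referenced in the comment.
struct Edition {
    media_type: Option<String>,
    audio_category: bool,
}

fn allow_audio(e: &Edition) -> bool {
    match e.media_type.as_deref() {
        // media_type explicitly names another format: audio is forbidden,
        // regardless of what the categories say.
        Some(mt) if mt != "audio" => false,
        // None or "audio": fall back to the existing categories check.
        _ => e.audio_category,
    }
}

fn main() {
    let forbids = Edition { media_type: Some("ebook".into()), audio_category: true };
    let unset = Edition { media_type: None, audio_category: true };
    assert!(!allow_audio(&forbids)); // categories alone no longer win
    assert!(allow_audio(&unset));
    println!("ok");
}
```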

In `@server/src/linker/folder.rs`:
- Around line 221-222: The num_files field is only using audio_files.len() which
undercounts when eBook files are present; update the assignment for num_files to
count all files (e.g., the combined collection or the primary files vector used
to compute size/filetypes rather than audio_files) so it reflects total files,
and apply the same fix at the other occurrence around the num_files assignment
near the later block (the second occurrence at the referenced location).
- Around line 414-435: The loop that creates links only iterates over
audio_files, so ebook files (ebook_files) are never linked into the library;
update the code to process ebook_files the same way as audio_files by adding an
equivalent loop (or merging them) that uses library.options().method and calls
hard_link, copy, or symlink as done for audio_files, and also pushes the ebook
PathBufs into library_files so they are tracked; reference the existing symbols
audio_files, ebook_files, library_files, LibraryLinkMethod, hard_link, copy, and
symlink in your change.
- Around line 34-37: The loop over entries from read_dir(&l.rip_dir) must skip
non-directories to avoid calling read_dir on a file; update the while let
Some(folder) = entries.next_entry().await? loop to check the entry’s file_type
(e.g., folder.file_type().await?.is_dir()) or metadata and continue if not a
directory before calling link_folder(&config, library, &db, folder). This
ensures only directory entries under rip_dir are descended into and prevents
read_dir(&folder.path()) from failing.
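The directory-skipping pattern can be shown with a synchronous sketch (the real code uses tokio's async `read_dir`, but the file-type check is the same idea). The function name is hypothetical:

```rust
use std::fs;
use std::io;
use std::path::{Path, PathBuf};

/// Collect only the subdirectories of rip_dir, skipping stray files so a
/// later read_dir on each entry cannot fail on a non-directory.
fn rip_dir_folders(rip_dir: &Path) -> io::Result<Vec<PathBuf>> {
    let mut folders = Vec::new();
    for entry in fs::read_dir(rip_dir)? {
        let entry = entry?;
        if !entry.file_type()?.is_dir() {
            continue; // e.g. a loose log or cover file directly under rip_dir
        }
        folders.push(entry.path());
    }
    Ok(folders)
}

fn main() -> io::Result<()> {
    let base = std::env::temp_dir().join("mlm_rip_dir_demo");
    let _ = fs::remove_dir_all(&base);
    fs::create_dir_all(base.join("Some Book (2024)"))?;
    fs::write(base.join("stray.txt"), b"not a folder")?;
    let folders = rip_dir_folders(&base)?;
    assert_eq!(folders.len(), 1); // the stray file was skipped
    println!("ok");
    Ok(())
}
```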

In `@server/src/linker/mod.rs`:
- Around line 1-8: The folder module exports useful public items that aren't
re-exported from the parent mod; add re-exports for the commonly used symbols to
match the pattern used for common/duplicates/torrent by updating the module's
pub use list to include folder items such as link_folders_to_library and the
metadata structs (e.g., Libation, NextoryRaw, and any other public
types/functions you expect to be used externally) so callers can import them via
the top-level linker module (use self::folder::{link_folders_to_library,
Libation, NextoryRaw, ...}).

In `@server/src/linker/torrent.rs`:
- Around line 87-90: The code currently calls components.pop().expect(...) on
torrent_path which will panic for empty/malformed paths; change this to handle
the missing component gracefully by matching the result of components.pop() (or
using components.pop().ok_or_else(...)?), and return or propagate a recoverable
error (e.g., Err with context) from the enclosing function (or log and skip
processing) instead of panicking; update references to file_name_component to
use the matched value and ensure the function signature and callers handle the
new Result/Option return appropriately so malformed torrent_path values no
longer crash the linker task.
- Around line 191-194: The RemovedFromTracker event is being created with mam_id
set to None; change the Event::new call that currently passes None as the second
argument to preserve the torrent's mam_id (e.g. pass qbit_torrent.mam_id.clone()
or the appropriate Option-wrapped mam_id field) so
Event::new(Some(qbit_torrent.hash.clone()), qbit_torrent.mam_id.clone(),
EventType::RemovedFromTracker, ...) retains the mam_id in the emitted event.
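The panic-free path handling suggested for the `components.pop().expect(...)` call above can be sketched as follows. The error type and function name are stand-ins; the real fix would thread the crate's own error type through the linker task:

```rust
use std::path::Path;

/// Extract the final component of a torrent path, returning a recoverable
/// error for empty/malformed paths instead of panicking the linker task.
fn file_name_component(torrent_path: &Path) -> Result<String, String> {
    torrent_path
        .file_name()
        .map(|n| n.to_string_lossy().into_owned())
        .ok_or_else(|| format!("malformed torrent path: {}", torrent_path.display()))
}

fn main() {
    assert_eq!(
        file_name_component(Path::new("downloads/book.m4b")),
        Ok("book.m4b".to_string())
    );
    // A root path has no final component: an Err, not a panic.
    assert!(file_name_component(Path::new("/")).is_err());
    println!("ok");
}
```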

In `@server/src/lists/mod.rs`:
- Around line 171-177: The early-return logic in the add/update path incorrectly
treats field.mam_id == torrent.mam_id as "same torrent" even when both are None;
update the condition that checks TorrentStatus::Existing to only return false
when mam_id values are both Some and equal (i.e., require a Some(...) ==
Some(...) match) before treating them as identical. Locate the check using
TorrentStatus, field.mam_id, and torrent.mam_id and update it so unrelated
torrents with None mam_id don't short-circuit; ensure ListItemTorrent::replace
still runs when mam_id is None on both sides.
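The corrected "same torrent" predicate is a one-liner; a sketch with a stand-in id type:

```rust
/// Two torrents are the same only when both mam_ids are present and equal.
/// (None, None) is no longer treated as a match.
fn same_mam_torrent(a: Option<u64>, b: Option<u64>) -> bool {
    matches!((a, b), (Some(x), Some(y)) if x == y)
}

fn main() {
    assert!(same_mam_torrent(Some(1), Some(1)));
    assert!(!same_mam_torrent(None, None)); // unrelated torrents don't collide
    assert!(!same_mam_torrent(Some(1), None));
    assert!(!same_mam_torrent(Some(1), Some(2)));
    println!("ok");
}
```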

In `@server/src/main.rs`:
- Around line 616-618: The context label for the folder-linker error is
incorrect: update the .context() call on the result of
link_folders_to_library(config.clone(), db.clone()).await so it uses
"link_folders_to_library" instead of "link_torrents_to_library" (i.e., change
the error context string associated with the link_folders_to_library call).

In `@server/src/snatchlist.rs`:
- Around line 303-313: The event currently sets mam_id from mam_torrent only,
dropping a possible mam association stored on torrent; change the mam_id binding
so it falls back to the torrent's mam_id when mam_torrent is None (e.g. replace
the current mam_torrent.map(|m| m.id) with an expression that returns
mam_torrent.map(|m| m.id).or(torrent.mam_id.clone())), so write_event /
Event::new / EventType::Updated still receive the preserved mam_id even if
mam_torrent is absent.
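The fallback the comment describes reduces to `Option::or`. A minimal sketch with hypothetical parameter names:

```rust
/// Prefer the fresh MaM torrent's id; fall back to the id already stored
/// on the local torrent, so Updated events stay mam_id-indexed.
fn event_mam_id(mam_torrent_id: Option<u64>, stored_mam_id: Option<u64>) -> Option<u64> {
    mam_torrent_id.or(stored_mam_id)
}

fn main() {
    assert_eq!(event_mam_id(Some(7), Some(3)), Some(7));
    assert_eq!(event_mam_id(None, Some(3)), Some(3)); // preserved when mam_torrent is absent
    assert_eq!(event_mam_id(None, None), None);
    println!("ok");
}
```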

In `@server/src/web/api/torrents.rs`:
- Around line 203-219: Replace the cryptic JSON keys and remove commented-out
fields in the Json(json!({...})) response: change "goomer" and "fs" to
descriptive keys (e.g., "goomer_uploads" and "fastsquash_uploads" or whatever
domain-accurate names map to the variables uploaded and uploaders) inside the
Json(json!({...})) expression and delete the commented lines (the commented
"torrents", "uploaded", "uploaders", "size", etc.) so the returned object is
self-documenting; keep the value bindings (uploaded, uploaders) unchanged and
only adjust the property names and cleanup the commented code.
- Around line 24-99: Remove the large block of commented-out
debug/alternate-implementation code and dead declarations (the commented
sections that define and use all_buddies, uploaded loop, size/categories/maps,
m4b/mp3/epub/mobi strings, and other commented-for loops), leaving only the
necessary active declarations (e.g., the existing let mut uploaded =
BTreeMap::new(); and let mut uploaders = BTreeMap::new();). Ensure you do not
remove any live logic that follows these declarations—just delete the
commented-out code to restore readability and rely on VCS history for retrieval.
- Around line 100-156: The loop in torrents.rs currently hardcodes linker names
("Goomer", "FastSquash") when updating uploaded/uploaders and leaves a large
commented-out block; change the handler to accept a configurable linker filter
(e.g., a query parameter like filter_linker or read from config) and replace
direct comparisons t.linker.as_deref() == Some("Goomer") / Some("FastSquash")
with checks against that configurable value (e.g., compare to the supplied
filter or skip when no filter), and remove the unused commented-out code between
the loop (previously lines ~112-155) to keep the function clean; update
references to uploaded/uploaders logic to use the new filter variable (e.g.,
t.linker, uploaded, uploaders) so behavior is driven by input rather than
hardcoded strings.
- Around line 1-15: Remove the unused imports flagged by the reviewer: delete
BTreeSet from the std::collections import, remove utc_datetime from the
time::macros import, and drop Category from the crate::data import list since
references to them are commented out; update the use blocks in
server/src/web/api/torrents.rs so only used symbols (BTreeMap, Arc, Json, State,
Database, serde_json::json, Config, Torrent, TorrentKey, AppError, MaMState)
remain.
- Around line 17-19: The function torrents_api extracts State((config, db, mam))
but never uses config or mam; update the State pattern to avoid unused-variable
warnings by either renaming them to underscore-prefixed names (e.g.,
State((_config, db, _mam))) or replace the pattern with wildcards (e.g.,
State((_, db, _))) while keeping the db binding as-is; modify the function
signature where State is deconstructed (the torrents_api parameter) to apply one
of these changes so only db remains an active variable.

In `@server/src/web/pages/replaced.rs`:
- Line 13: The import statement in replaced.rs includes an unused symbol `ids`
from mlm_db; remove `ids` from the use declaration (the line importing Language,
Torrent, TorrentKey, ids) so it becomes just Language, Torrent, TorrentKey, or
alternatively use `ids` where intended; update the `use mlm_db::{Language,
Torrent, TorrentKey, ids};` to drop `ids` to eliminate the unused import.

In `@server/src/web/pages/search.rs`:
- Line 50: The lookup currently uses the optional parsed metadata meta.mam_id(),
which can be missing/unparsable and cause missed DB matches; change the
selection to use the stable ID mam_torrent.id as the primary key (or at minimum
fall back to it) when querying Torrent via TorrentKey::mam_id so that code paths
referencing mam_torrent.id (e.g., in the selection flow that expects
mam_torrent.id) consistently find the existing row and prevent duplicate/“not
selected” states.

In `@server/src/web/pages/torrent_edit.rs`:
- Line 44: Remove the unnecessary MaM dependency from the manual edit POST
handler by eliminating the unconditional call to context.mam()?; specifically
remove or guard the line let _mam = context.mam()? so the handler no longer
fails when MaM is unavailable (the handler already passes None at the call site
on line 110), and only call context.mam() where MaM data is actually required.

In `@server/src/web/pages/torrent_goodreads.rs`:
- Around line 84-90: The Content-Type for the download is hardcoded to text/toml
in the headers array; change it to a binary MIME (e.g.,
"application/octet-stream") so arbitrary audiobook/ebook files are served
correctly. Update the headers construction in torrent_goodreads.rs (the headers
variable) to use header::CONTENT_TYPE => "application/octet-stream;
charset=utf-8" or just "application/octet-stream", keeping the existing
CONTENT_DISPOSITION formatting with filename so clients treat the response as a
downloadable binary file.
- Around line 424-433: The code builds torrent_tags as the set of tags to remove
but mistakenly passes form.tags to qbit.remove_tags; change the call to pass the
actual tags-to-remove (torrent_tags) instead. In the block around
qbit.remove_tags, replace form.tags.iter().map(Deref::deref).collect() with
torrent_tags.iter().map(|s| s.as_str()).collect::<Vec<&str>>() (or Vec<String>
if remove_tags expects owned strings) so qbit.remove_tags(Some(vec![&id]),
torrent_tags_iter) is invoked; keep the surrounding logic and await/propagate
the error as before.

In `@server/templates/pages/selected.html`:
- Line 171: The external MaM anchor (<a href="https://www.myanonamouse.net/t/{{
torrent.mam_id }}" target=_blank>) opens in a new tab without rel attributes —
add rel="noopener noreferrer" to that anchor to prevent the opened page from
getting access to window.opener and to mitigate potential security/privacy risks
while keeping the existing target=_blank behavior.

In `@server/templates/pages/torrents.html`:
- Line 230: The anchor created inside the template conditional `{% if let
Some(mam_id) = torrent.mam_id %}` currently uses `target=_blank` without a rel
attribute; update the link (`<a href="https://www.myanonamouse.net/t/{{ mam_id
}}" target=_blank>MaM</a>`) to include `rel="noopener noreferrer"` to prevent
window.opener attacks and referrer leakage when opening the external
MyAnonamouse URL.

In `@server/tests/linker_torrent_test.rs`:
- Around line 315-322: The test currently discards the result of
link_torrents_to_library, hiding failures; change the call to capture and assert
the result (e.g., let res = link_torrents_to_library(config.clone(),
db.db.clone(), (qbit_config, &mock_qbit), &mock_mam).await; assert!(res.is_ok(),
"link_torrents_to_library failed: {:?}", res.err()); or unwrap/expect the result
(res.expect("link_torrents_to_library failed"))—alternatively make the test
return Result and use ? on link_torrents_to_library to propagate errors;
reference link_torrents_to_library, mock_qbit, mock_mam to locate the call.

---

Outside diff comments:
In `@mlm_db/src/impls/meta.rs`:
- Around line 40-107: The diff() implementation in TorrentMeta (function diff)
never compares the tags field, so changes to tags aren't emitted; add a tags
comparison similar to the categories block: check if self.tags != other.tags
and, when different, push a TorrentMetaDiff with field TorrentMetaField::Tags
and use a human-readable from/to (e.g., self.tags.join(", ") and
other.tags.join(", ") or map/unwrap if tags is Option) so tag mutations are
recorded; insert this new block alongside the other field checks (for example
before the language check).

ℹ️ Review info

Configuration used: Organization UI

Review profile: ASSERTIVE

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 4a3fc24 and c0e3156.

⛔ Files ignored due to path filters (1)
  • Cargo.lock is excluded by !**/*.lock
📒 Files selected for processing (74)
  • mlm_db/src/impls.rs
  • mlm_db/src/impls/lists.rs
  • mlm_db/src/impls/meta.rs
  • mlm_db/src/impls/series.rs
  • mlm_db/src/lib.rs
  • mlm_db/src/v03.rs
  • mlm_db/src/v04.rs
  • mlm_db/src/v05.rs
  • mlm_db/src/v08.rs
  • mlm_db/src/v09.rs
  • mlm_db/src/v10.rs
  • mlm_db/src/v13.rs
  • mlm_db/src/v17.rs
  • mlm_db/src/v18.rs
  • mlm_db/tests/meta_diff.rs
  • mlm_mam/Cargo.toml
  • mlm_mam/src/enums.rs
  • mlm_mam/src/meta.rs
  • mlm_mam/src/search.rs
  • mlm_mam/src/serde.rs
  • mlm_mam/src/user_torrent.rs
  • mlm_parse/src/lib.rs
  • mlm_parse/tests/parse_tests.rs
  • server/Cargo.toml
  • server/src/audiobookshelf.rs
  • server/src/autograbber.rs
  • server/src/cleaner.rs
  • server/src/config.rs
  • server/src/config_impl.rs
  • server/src/lib.rs
  • server/src/linker.rs
  • server/src/linker/common.rs
  • server/src/linker/duplicates.rs
  • server/src/linker/folder.rs
  • server/src/linker/mod.rs
  • server/src/linker/torrent.rs
  • server/src/lists/goodreads.rs
  • server/src/lists/mod.rs
  • server/src/main.rs
  • server/src/qbittorrent.rs
  • server/src/snatchlist.rs
  • server/src/stats.rs
  • server/src/torrent_downloader.rs
  • server/src/web/api/torrents.rs
  • server/src/web/mod.rs
  • server/src/web/pages/config.rs
  • server/src/web/pages/duplicate.rs
  • server/src/web/pages/events.rs
  • server/src/web/pages/index.rs
  • server/src/web/pages/replaced.rs
  • server/src/web/pages/search.rs
  • server/src/web/pages/torrent.rs
  • server/src/web/pages/torrent_edit.rs
  • server/src/web/pages/torrent_goodreads.rs
  • server/src/web/pages/torrents.rs
  • server/templates/pages/config.html
  • server/templates/pages/duplicate.html
  • server/templates/pages/errors.html
  • server/templates/pages/events.html
  • server/templates/pages/index.html
  • server/templates/pages/list.html
  • server/templates/pages/replaced.html
  • server/templates/pages/selected.html
  • server/templates/pages/torrent.html
  • server/templates/pages/torrent_edit.html
  • server/templates/pages/torrent_mam.html
  • server/templates/pages/torrents.html
  • server/templates/partials/filter.html
  • server/templates/partials/mam_torrents.html
  • server/tests/cleaner_test.rs
  • server/tests/common/mod.rs
  • server/tests/config_impl.rs
  • server/tests/linker_folder_test.rs
  • server/tests/linker_torrent_test.rs
💤 Files with no reviewable changes (1)
  • server/src/linker.rs

Comment on lines +4 to +8
pub fn id(&self) -> String {
    self.torrent_id
        .clone()
        .or_else(|| self.mam_id.map(|id| id.to_string()))
        .unwrap_or_default()

⚠️ Potential issue | 🟠 Major

Do not collapse missing IDs to an empty string.

Returning "" for missing torrent_id/mam_id makes distinct items share the same identifier and can cause invalid links or dedupe collisions. Prefer returning Option<String> and handling absence at call sites.

Proposed change
 impl ListItemTorrent {
-    pub fn id(&self) -> String {
+    pub fn id(&self) -> Option<String> {
         self.torrent_id
             .clone()
-            .or_else(|| self.mam_id.map(|id| id.to_string()))
-            .unwrap_or_default()
+            .or_else(|| self.mam_id.map(|id| id.to_string()))
     }
 }
📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change
pub fn id(&self) -> String {
self.torrent_id
.clone()
.or_else(|| self.mam_id.map(|id| id.to_string()))
.unwrap_or_default()
pub fn id(&self) -> Option<String> {
self.torrent_id
.clone()
.or_else(|| self.mam_id.map(|id| id.to_string()))
}
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@mlm_db/src/impls/lists.rs` around lines 4 - 8, The id() accessor currently
returns a String and collapses missing torrent_id/mam_id to "", causing
identifier collisions; change its signature from pub fn id(&self) -> String to
pub fn id(&self) -> Option<String>, remove the final unwrap_or_default(), and
return the combined Option (self.torrent_id.clone().or_else(||
self.mam_id.map(|id| id.to_string()))). Update all call sites that assume a
String to handle Option<String> (e.g., propagate the option, return None, or
explicitly handle missing IDs) so missing IDs are represented as None instead of
an empty string.
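A minimal, self-contained sketch of the Option-returning accessor plus a call site handling absence explicitly; the struct fields are simplified stand-ins for the real `ListItemTorrent`:

```rust
// Illustrative stand-in for the real ListItemTorrent struct.
struct ListItemTorrent {
    torrent_id: Option<String>,
    mam_id: Option<u64>,
}

impl ListItemTorrent {
    /// Returns None when neither identifier is present, so callers
    /// cannot accidentally treat two id-less items as the same "".
    fn id(&self) -> Option<String> {
        self.torrent_id
            .clone()
            .or_else(|| self.mam_id.map(|id| id.to_string()))
    }
}

fn main() {
    let with_mam = ListItemTorrent { torrent_id: None, mam_id: Some(42) };
    let empty = ListItemTorrent { torrent_id: None, mam_id: None };
    // Call sites handle absence explicitly instead of comparing against "".
    assert_eq!(with_mam.id(), Some("42".to_string()));
    assert_eq!(empty.id(), None);
}
```

With `Option<String>`, two items that both lack identifiers no longer compare equal by id, which is the dedupe-collision risk the comment describes.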

Comment on lines +231 to +235
impl Default for Timestamp {
    fn default() -> Self {
        Self::now()
    }
}

🧹 Nitpick | 🔵 Trivial

Timestamp::default() returning the current time may be surprising.

Using Timestamp::now() as the default means the value is non-deterministic, which could lead to flaky tests or unexpected behavior when Default::default() is used to construct structs. However, this pattern is consistent with Uuid::default() (line 266-269) which also returns a new random value, so it aligns with existing conventions in this codebase.

If this is intentional for struct initialization convenience, consider adding a brief doc comment to clarify that default() returns the current time rather than a zero/epoch value.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@mlm_db/src/v03.rs` around lines 231 - 235, The Default implementation for
Timestamp currently returns the current time via Timestamp::now(), which is
non-deterministic; add a brief doc comment on the impl (or on
Timestamp::default()) clarifying that Default::default() yields the current time
(not epoch/zero) to make this intentional behavior explicit and reference the
analogous Uuid::default() pattern for consistency.
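A sketch of the doc-comment approach, with `Timestamp` reduced to a unix-seconds wrapper for illustration:

```rust
use std::time::{SystemTime, UNIX_EPOCH};

// Illustrative stand-in for the crate's Timestamp type.
struct Timestamp(u64);

impl Timestamp {
    fn now() -> Self {
        let secs = SystemTime::now()
            .duration_since(UNIX_EPOCH)
            .expect("system clock before UNIX epoch")
            .as_secs();
        Timestamp(secs)
    }
}

/// Note: `Default` yields the *current* time, not the epoch, matching
/// the convention of `Uuid::default()` returning a fresh value.
impl Default for Timestamp {
    fn default() -> Self {
        Self::now()
    }
}

fn main() {
    let t = Timestamp::default();
    assert!(t.0 > 0, "default() should be 'now', not epoch zero");
}
```

The doc comment makes the non-determinism discoverable in rustdoc and IDE hovers, which is where a surprised caller would look first.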

Comment on lines +384 to +390
impl From<v18::MetadataSource> for MetadataSource {
    fn from(t: v18::MetadataSource) -> Self {
        match t {
            v18::MetadataSource::Mam => Self::Mam,
            v18::MetadataSource::Manual => Self::Manual,
            _ => Self::Manual,
        }

🧹 Nitpick | 🔵 Trivial

Make lossy source downgrade explicit.

The wildcard arm silently maps every non-Mam/Manual v18 source to Manual. That can hide future enum additions and obscure provenance loss. Prefer explicit variant handling (or an explicit compatibility comment for intentionally unreachable downgrade paths).

Based on learnings: In the MLM codebase, migrations only progress forward (upgrades), and newer-to-older conversion behavior should be explicitly scoped/documented to avoid reachable runtime surprises.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@mlm_db/src/v10.rs` around lines 384 - 390, The From implementation for
v18::MetadataSource in impl From<v18::MetadataSource> for MetadataSource
currently uses a wildcard arm that silently maps any future v18 variants to
Manual; change this to handle each v18::MetadataSource variant explicitly (list
out v18::MetadataSource::Mam, ::Manual and any other known variants) and either
map them intentionally or use a clear compatibility path (e.g., an explicit
fallback arm that contains a comment and/or an unreachable!/panic!/log to mark
lossy downgrade) so that conversion in the From::from for MetadataSource is
explicit and future enum additions won’t be silently misclassified. Ensure you
reference v18::MetadataSource and the impl From<v18::MetadataSource> for
MetadataSource while making the mapping explicit.
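A compilable miniature of the explicit-variant approach; the `Abs` variant here is hypothetical, standing in for whatever extra sources v18 actually defines:

```rust
// Hypothetical miniature of the v18 -> v10 downgrade: listing every
// variant makes the compiler flag new v18 sources instead of silently
// mapping them to Manual.
mod v18 {
    pub enum MetadataSource { Mam, Manual, Abs }
}

#[derive(Debug, PartialEq)]
enum MetadataSource { Mam, Manual }

impl From<v18::MetadataSource> for MetadataSource {
    fn from(t: v18::MetadataSource) -> Self {
        match t {
            v18::MetadataSource::Mam => Self::Mam,
            v18::MetadataSource::Manual => Self::Manual,
            // Intentional lossy downgrade, scoped to a named variant so a
            // future v18 addition becomes a compile error here.
            v18::MetadataSource::Abs => Self::Manual,
        }
    }
}

fn main() {
    assert_eq!(MetadataSource::from(v18::MetadataSource::Abs), MetadataSource::Manual);
}
```

Adding a variant to the v18 enum now produces a non-exhaustive-match error at this conversion, forcing a conscious decision instead of a silent misclassification.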

v18::TorrentMetaField::Series => TorrentMetaField::Series,
v18::TorrentMetaField::Source => TorrentMetaField::Source,
v18::TorrentMetaField::Edition => TorrentMetaField::Edition,
v18::TorrentMetaField::Tags => unimplemented!(),

⚠️ Potential issue | 🟠 Major

unimplemented!() on Tags is a runtime panic trap.

If any converted diff contains Tags, this conversion panics. Either provide a safe mapping strategy or explicitly prove/document this branch is unreachable in production paths.

Based on learnings: reverse-version conversion branches may use unimplemented!() only when they are clearly non-executable and documented as such.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@mlm_db/src/v17.rs` at line 545, The match arm for v18::TorrentMetaField::Tags
currently calls unimplemented!(), causing a runtime panic if a Tags value is
encountered; replace it with a safe conversion or an explicit unreachable
handler. Locate the conversion match that handles v18::TorrentMetaField (the arm
`v18::TorrentMetaField::Tags`) and either implement a mapping into the
appropriate v17 representation (e.g., convert tags into the equivalent v17 field
type or a Vec<String> as required by the v17 struct) or, if Tags truly cannot
appear, replace unimplemented!() with unreachable!("v18::TorrentMetaField::Tags
is not expected in conversion: <reason>") and add a short comment/documentation
proving why this branch is unreachable. Ensure the chosen fix uses the exact
symbol v18::TorrentMetaField::Tags and updates any return types or constructors
used by the surrounding conversion function accordingly.
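One safe alternative is a fallible conversion; this sketch (with simplified enums, not the crate's actual types) surfaces the unrepresentable `Tags` field as an error instead of a panic:

```rust
// Sketch: a fallible downgrade instead of unimplemented!().
mod v18 {
    pub enum TorrentMetaField { Series, Source, Edition, Tags }
}

#[derive(Debug, PartialEq)]
enum TorrentMetaField { Series, Source, Edition }

impl TryFrom<v18::TorrentMetaField> for TorrentMetaField {
    type Error = String;

    fn try_from(f: v18::TorrentMetaField) -> Result<Self, Self::Error> {
        Ok(match f {
            v18::TorrentMetaField::Series => Self::Series,
            v18::TorrentMetaField::Source => Self::Source,
            v18::TorrentMetaField::Edition => Self::Edition,
            // Tags has no v17 equivalent; surface that as data, not a panic.
            v18::TorrentMetaField::Tags => {
                return Err("Tags cannot be represented in v17".into())
            }
        })
    }
}

fn main() {
    assert!(TorrentMetaField::try_from(v18::TorrentMetaField::Tags).is_err());
    assert_eq!(
        TorrentMetaField::try_from(v18::TorrentMetaField::Series),
        Ok(TorrentMetaField::Series)
    );
}
```

Callers then decide whether a `Tags` diff entry is dropped, logged, or treated as fatal, rather than the conversion deciding by panicking.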

Comment on lines +84 to +90
let headers = [
    (header::CONTENT_TYPE, "text/toml; charset=utf-8".to_string()),
    (
        header::CONTENT_DISPOSITION,
        format!("attachment; filename=\"{}\"", filename),
    ),
];

⚠️ Potential issue | 🟡 Minor

Use a binary MIME type for arbitrary file downloads

Line 85 hardcodes text/toml, but this route serves audiobook/ebook files. This can cause incorrect content handling by clients.

Proposed fix
-    let headers = [
-        (header::CONTENT_TYPE, "text/toml; charset=utf-8".to_string()),
+    let headers = [
+        (header::CONTENT_TYPE, "application/octet-stream".to_string()),
         (
             header::CONTENT_DISPOSITION,
             format!("attachment; filename=\"{}\"", filename),
         ),
     ];
📝 Committable suggestion


Suggested change
let headers = [
(header::CONTENT_TYPE, "text/toml; charset=utf-8".to_string()),
(
header::CONTENT_DISPOSITION,
format!("attachment; filename=\"{}\"", filename),
),
];
let headers = [
(header::CONTENT_TYPE, "application/octet-stream".to_string()),
(
header::CONTENT_DISPOSITION,
format!("attachment; filename=\"{}\"", filename),
),
];
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@server/src/web/pages/torrent_goodreads.rs` around lines 84 - 90, The
Content-Type for the download is hardcoded to text/toml in the headers array;
change it to a binary MIME (e.g., "application/octet-stream") so arbitrary
audiobook/ebook files are served correctly. Update the headers construction in
torrent_goodreads.rs (the headers variable) to use header::CONTENT_TYPE with
"application/octet-stream" (a charset parameter is meaningless for a binary type),
keeping the existing CONTENT_DISPOSITION formatting with filename so clients
treat the response as a downloadable binary file.
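If serving the precise type ever matters more than a generic download, a small extension-to-MIME map with an octet-stream fallback is one option; the extension list here is illustrative and not taken from the actual route:

```rust
// Pick a Content-Type from the file extension, falling back to
// application/octet-stream for anything unknown (or extensionless).
fn content_type_for(filename: &str) -> &'static str {
    match filename
        .rsplit('.')
        .next()
        .map(|e| e.to_ascii_lowercase())
        .as_deref()
    {
        Some("epub") => "application/epub+zip",
        Some("pdf") => "application/pdf",
        Some("m4b") | Some("m4a") => "audio/mp4",
        Some("mp3") => "audio/mpeg",
        _ => "application/octet-stream",
    }
}

fn main() {
    assert_eq!(content_type_for("book.epub"), "application/epub+zip");
    assert_eq!(content_type_for("mystery.bin"), "application/octet-stream");
}
```

Either way, `application/octet-stream` is the safe default: it tells browsers to download rather than attempt to render the bytes as text.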

Comment on lines +424 to +433
for tag in &form.tags {
    torrent_tags.remove(tag.as_str());
}
if !torrent_tags.is_empty() {
    println!("remove tags {torrent_tags:?}");
    qbit.remove_tags(
        Some(vec![&id]),
        form.tags.iter().map(Deref::deref).collect(),
    )
    .await?;

⚠️ Potential issue | 🟠 Major

Tag removal currently deletes the wrong tag set

torrent_tags is reduced to tags that should be removed, but Line 431 sends form.tags to remove_tags, which are the tags to keep.

Proposed fix
             for tag in &form.tags {
                 torrent_tags.remove(tag.as_str());
             }
             if !torrent_tags.is_empty() {
                 println!("remove tags {torrent_tags:?}");
-                qbit.remove_tags(
-                    Some(vec![&id]),
-                    form.tags.iter().map(Deref::deref).collect(),
-                )
-                .await?;
+                qbit.remove_tags(Some(vec![&id]), torrent_tags.into_iter().collect())
+                    .await?;
             }
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@server/src/web/pages/torrent_goodreads.rs` around lines 424 - 433, The code
builds torrent_tags as the set of tags to remove but mistakenly passes form.tags
to qbit.remove_tags; change the call to pass the actual tags-to-remove
(torrent_tags) instead. In the block around qbit.remove_tags, replace
form.tags.iter().map(Deref::deref).collect() with torrent_tags.iter().map(|s|
s.as_str()).collect::<Vec<&str>>() (or Vec<String> if remove_tags expects owned
strings) so qbit.remove_tags(Some(vec![&id]), torrent_tags_iter) is invoked;
keep the surrounding logic and await/propagate the error as before.
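The intended set arithmetic can be sketched independently of qBittorrent: start from the torrent's current tags, subtract the tags the form keeps, and what remains is exactly the argument for `remove_tags`:

```rust
use std::collections::HashSet;

// Minimal model of the fix: current tags minus kept tags = tags to remove.
fn tags_to_remove(current: &[&str], keep: &[&str]) -> HashSet<String> {
    let mut to_remove: HashSet<String> =
        current.iter().map(|s| s.to_string()).collect();
    for tag in keep {
        to_remove.remove(*tag);
    }
    to_remove
}

fn main() {
    let remove = tags_to_remove(&["mlm", "audio", "old"], &["mlm", "audio"]);
    // Only the stale tag is scheduled for removal; kept tags survive.
    assert_eq!(remove, HashSet::from(["old".to_string()]));
}
```

Passing `form.tags` (the keep set) to `remove_tags`, as the current code does, inverts this difference and strips exactly the tags the user asked to retain.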

</div>
{% endif %}
<div><a href="https://www.myanonamouse.net/t/{{ torrent.meta.mam_id }}" target=_blank>MaM</a></div>
<div><a href="https://www.myanonamouse.net/t/{{ torrent.mam_id }}" target=_blank>MaM</a></div>

🧹 Nitpick | 🔵 Trivial

Add rel="noopener noreferrer" on the external MaM link.

Line 171 opens a third-party page in a new tab (target=_blank) without rel, which is an avoidable security risk.

Suggested patch
-  <div><a href="https://www.myanonamouse.net/t/{{ torrent.mam_id }}" target=_blank>MaM</a></div>
+  <div><a href="https://www.myanonamouse.net/t/{{ torrent.mam_id }}" target=_blank rel="noopener noreferrer">MaM</a></div>
📝 Committable suggestion


Suggested change
<div><a href="https://www.myanonamouse.net/t/{{ torrent.mam_id }}" target=_blank>MaM</a></div>
<div><a href="https://www.myanonamouse.net/t/{{ torrent.mam_id }}" target=_blank rel="noopener noreferrer">MaM</a></div>
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@server/templates/pages/selected.html` at line 171, The external MaM anchor
(<a href="https://www.myanonamouse.net/t/{{ torrent.mam_id }}" target=_blank>)
opens in a new tab without rel attributes — add rel="noopener noreferrer" to
that anchor to prevent the opened page from getting access to window.opener and
to mitigate potential security/privacy risks while keeping the existing
target=_blank behavior.

<a href="/torrents/{{torrent.id}}">open</a>
<a href="https://www.myanonamouse.net/t/{{ torrent.meta.mam_id }}" target=_blank>MaM</a>
{% if let (Some(abs_url), Some(abs_id)) = (abs_url.as_ref(), torrent.abs_id.as_ref()) %}
{% if let Some(mam_id) = torrent.mam_id %}<a href="https://www.myanonamouse.net/t/{{ mam_id }}" target=_blank>MaM</a>{% endif %}

🧹 Nitpick | 🔵 Trivial

Harden external MaM link with rel="noopener noreferrer".

Line 230 opens an external URL with target=_blank without rel, which should be tightened.

Suggested patch
-    {% if let Some(mam_id) = torrent.mam_id %}<a href="https://www.myanonamouse.net/t/{{ mam_id }}" target=_blank>MaM</a>{% endif %}
+    {% if let Some(mam_id) = torrent.mam_id %}<a href="https://www.myanonamouse.net/t/{{ mam_id }}" target=_blank rel="noopener noreferrer">MaM</a>{% endif %}
📝 Committable suggestion


Suggested change
{% if let Some(mam_id) = torrent.mam_id %}<a href="https://www.myanonamouse.net/t/{{ mam_id }}" target=_blank>MaM</a>{% endif %}
{% if let Some(mam_id) = torrent.mam_id %}<a href="https://www.myanonamouse.net/t/{{ mam_id }}" target=_blank rel="noopener noreferrer">MaM</a>{% endif %}
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@server/templates/pages/torrents.html` at line 230, The anchor created inside
the template conditional `{% if let Some(mam_id) = torrent.mam_id %}` currently
uses `target=_blank` without a rel attribute; update the link (`<a
href="https://www.myanonamouse.net/t/{{ mam_id }}" target=_blank>MaM</a>`) to
include `rel="noopener noreferrer"` to prevent window.opener attacks and
referrer leakage when opening the external MyAnonamouse URL.

Repository owner deleted a comment from coderabbitai bot Mar 2, 2026
@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 14

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (2)
server/src/web/pages/config.rs (1)

176-182: ⚠️ Potential issue | 🔴 Critical

Bug: sort_type is computed but search_type is used in the query.

At line 181, the code appends search_type instead of sort_type to the query parameter tor[sort_type]. This appears to be a copy-paste error.

🐛 Proposed fix
             let sort_type = match self.kind {
                 Type::New => "dateDesc",
                 _ => "",
             };
             if !sort_type.is_empty() {
-                query.append_pair("tor[sort_type]", search_type);
+                query.append_pair("tor[sort_type]", sort_type);
             }
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@server/src/web/pages/config.rs` around lines 176 - 182, The code computes
sort_type based on self.kind (Type::New -> "dateDesc", otherwise ""), but then
mistakenly appends search_type to the query; update the call to
query.append_pair("tor[sort_type]", ...) to pass sort_type instead of
search_type (the relevant symbols: sort_type, self.kind / Type::New, and
query.append_pair).
server/src/qbittorrent.rs (1)

168-170: 🧹 Nitpick | 🔵 Trivial

Consider adding a clarifying comment for the retry logic.

The retry pattern handles race conditions where concurrent calls may both attempt to create the same category. A brief comment would help future readers understand the intent:

     if let Err(e) = qbit.create_category(category, "").await {
+        // Retry once to handle race conditions where another call created the category
         qbit.create_category(category, "").await.map_err(|_| e)?;
     }
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@server/src/qbittorrent.rs` around lines 168 - 170, The retry block around
qbit.create_category(category, "").await should include a brief clarifying
comment explaining that the first Err is likely due to a race where another task
created the same category, so the second call retries and maps the original
error on failure; add a one-line comment above this block referencing the
race-condition/retry intent and noting that the second call intentionally
swallows the new error and returns the original error via map_err, so future
readers understand why the duplicate create is retried rather than treated as a
hard failure.
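The pattern generalizes to a small retry-once helper; this synchronous sketch (the real call is async) shows why the original error is the one preserved when the second attempt also fails:

```rust
// Retry-once helper: the second attempt covers a race where a
// concurrent caller succeeded first; on a second failure, map_err
// discards the retry's error and returns the original one.
fn retry_once<T, E>(mut op: impl FnMut() -> Result<T, E>) -> Result<T, E> {
    match op() {
        Ok(v) => Ok(v),
        Err(first) => op().map_err(|_| first),
    }
}

fn main() {
    let mut calls = 0;
    let result: Result<&str, &str> = retry_once(|| {
        calls += 1;
        if calls < 2 { Err("transient") } else { Ok("created") }
    });
    // First call fails, second succeeds: the race resolved itself.
    assert_eq!(result, Ok("created"));
    assert_eq!(calls, 2);
}
```

Naming the helper (or at least commenting the inline version, as the review suggests) keeps the duplicate `create_category` call from reading like a copy-paste mistake.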
♻️ Duplicate comments (10)
mlm_parse/src/lib.rs (2)

141-146: ⚠️ Potential issue | 🟠 Major

Don't short-circuit when the series number is optional.

Line 141 still uses ? on the number-capture chain, so : A ... Standalone without , Book N and every : ... collection match return None instead of Some((series, None)).

Suggested fix
-        let series_number = captures
-            .get(2)
-            .or(captures.get(4))
-            .or(captures.get(6))
-            .or(captures.get(7))
-            .or(captures.get(10))?;
-        let series_number = match series_number.as_str().to_lowercase().as_str() {
-            "one" | "i" => 1.0,
-            "two" | "ii" => 2.0,
-            "three" | "iii" => 3.0,
-            "four" | "iv" => 4.0,
-            "five" | "v" => 5.0,
-            "six" | "vi" => 6.0,
-            "seven" | "vii" => 7.0,
-            "eight" | "viii" => 8.0,
-            "nine" | "ix" => 9.0,
-            "ten" | "x" => 10.0,
-            n => n.parse().unwrap_or(-1.0),
-        };
-        let series_number = if series_number >= 0.0 {
-            Some(series_number)
-        } else {
-            None
-        };
+        let series_number = captures
+            .get(2)
+            .or(captures.get(4))
+            .or(captures.get(6))
+            .or(captures.get(7))
+            .or(captures.get(10))
+            .and_then(|m| {
+                let n = match m.as_str().to_lowercase().as_str() {
+                    "one" | "i" => Some(1.0),
+                    "two" | "ii" => Some(2.0),
+                    "three" | "iii" => Some(3.0),
+                    "four" | "iv" => Some(4.0),
+                    "five" | "v" => Some(5.0),
+                    "six" | "vi" => Some(6.0),
+                    "seven" | "vii" => Some(7.0),
+                    "eight" | "viii" => Some(8.0),
+                    "nine" | "ix" => Some(9.0),
+                    "ten" | "x" => Some(10.0),
+                    other => other.parse::<f32>().ok(),
+                }?;
+                (n >= 0.0).then_some(n)
+            });
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@mlm_parse/src/lib.rs` around lines 141 - 146, The series_number extraction
currently uses the ? operator on the captures chain (let series_number =
captures.get(2).or(...).or(...)?) which causes the whole parse to return None
when the optional series number is absent; change it to produce an Option
instead by removing the ? and converting the optional Match to an Option<String>
(e.g., map the captures.get(...).or(... ) result via .map(|m|
m.as_str().to_string()) or similar) so the function returns Some((series, None))
for inputs without a book number; update any downstream usage of series_number
to handle Option accordingly (symbols: series_number, captures).
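The `and_then` rewrite can be exercised in isolation; this trimmed-down version (fewer word mappings than the real table) shows how a missing or negative number becomes `None` for the number alone, without failing the whole series parse:

```rust
// Word-to-number mapping as an Option chain: an absent capture or an
// unparsable/negative value yields None instead of aborting the match.
fn parse_series_number(raw: Option<&str>) -> Option<f32> {
    raw.and_then(|s| {
        let n = match s.to_lowercase().as_str() {
            "one" | "i" => Some(1.0),
            "two" | "ii" => Some(2.0),
            "three" | "iii" => Some(3.0),
            other => other.parse::<f32>().ok(),
        }?;
        (n >= 0.0).then_some(n)
    })
}

fn main() {
    assert_eq!(parse_series_number(Some("ii")), Some(2.0));
    assert_eq!(parse_series_number(Some("3.5")), Some(3.5));
    // Negative sentinel values are filtered out, not propagated.
    assert_eq!(parse_series_number(Some("-1")), None);
    // No capture at all: the series still parses, the number is simply absent.
    assert_eq!(parse_series_number(None), None);
}
```

The key difference from the current code is that `None` flows through `and_then` instead of short-circuiting the enclosing function via `?`.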

435-448: ⚠️ Potential issue | 🟡 Minor

Pattern 5 and 6 tests are still no-ops.

These functions only contain comments, so the optional-number/parser regressions above ship completely untested.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@mlm_parse/src/lib.rs` around lines 435 - 448, The two tests
test_parse_series_pattern5_standalone_novel and
test_parse_series_pattern6_collection are empty and provide no coverage; replace
their comment-only bodies with real assertions that exercise the parser (call
the crate's series parsing function used elsewhere, e.g. parse_series or the
public parse_title function) using representative input strings for each
pattern, then assert the returned capture/struct fields match expected values
(for pattern 5 ensure the parser captures the series name separately from the
"Standalone"/"Novel" suffix; for pattern 6 ensure the parser captures the
collection name in the expected capture/group index), or if these patterns are
intentionally unsupported, mark the tests #[ignore] or remove them—update the
test bodies to use assert_eq! or assert!(...) so the optional-number/parser
regressions are actually tested.
server/src/autograbber.rs (1)

783-794: ⚠️ Potential issue | 🟡 Minor

Keep mam_id on Updated events when mam_torrent is absent.

If mam_torrent is None, this writes an event with mam_id = None even though meta.mam_id() is still available. That breaks mam_id-indexed event lookup for library-only refreshes.

Suggested fix
-        let mam_id = mam_torrent.map(|m| m.id);
+        let mam_id = mam_torrent.map(|m| m.id).or_else(|| meta.mam_id());
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@server/src/autograbber.rs` around lines 783 - 794, The Updated event
currently sets mam_id from mam_torrent.map(|m| m.id) which becomes None for
library-only refreshes even though meta.mam_id() still has the correct id;
change the mam_id assignment used in the Event::new call to prefer
mam_torrent.map(|m| m.id) but fall back to meta.mam_id() (i.e.,
mam_torrent.map(|m| m.id).or_else(|| meta.mam_id())) so
write_event/Event::new/EventType::Updated receives the existing mam_id when
mam_torrent is absent.
server/src/config_impl.rs (1)

377-393: ⚠️ Potential issue | 🟠 Major

allow_audio / allow_ebook ignore explicit media-type restrictions.

These methods only check edition.categories.audio/ebook but don't verify edition.media_type. If media_type explicitly excludes Audiobook/Ebook, these methods could incorrectly return true.

Proposed fix
     pub fn allow_audio(&self) -> bool {
         self.grab
             .iter()
-            .any(|g| match g.edition.categories.audio.as_ref() {
-                None => true,
-                Some(c) => !c.is_empty(),
+            .any(|g| {
+                let media_ok = g.edition.media_type.is_empty()
+                    || g.edition.media_type.contains(&MediaType::Audiobook);
+                media_ok && g.edition.categories.audio.as_ref().is_none_or(|c| !c.is_empty())
             })
     }

     pub fn allow_ebook(&self) -> bool {
         self.grab
             .iter()
-            .any(|g| match g.edition.categories.ebook.as_ref() {
-                None => true,
-                Some(c) => !c.is_empty(),
+            .any(|g| {
+                let media_ok = g.edition.media_type.is_empty()
+                    || g.edition.media_type.contains(&MediaType::Ebook);
+                media_ok && g.edition.categories.ebook.as_ref().is_none_or(|c| !c.is_empty())
             })
     }
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@server/src/config_impl.rs` around lines 377 - 393, The allow_audio and
allow_ebook predicates currently only inspect edition.categories and therefore
can return true even when edition.media_type explicitly rules out the
corresponding format; update both functions (allow_audio, allow_ebook) to also
check g.edition.media_type and ensure it is not an explicit non-matching type
(e.g., treat editions whose media_type disallows Audiobook for allow_audio or
disallows Ebook for allow_ebook as false), combining that check with the
existing categories logic so the any(...) closure returns true only when both
the media_type permits the format and the categories allow it.
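A reduced model of the combined predicate, with `Edition` simplified to just the two fields involved; an empty `media_type` list is treated as unrestricted, mirroring the proposed fix:

```rust
// Both gates must pass: the media_type list (empty = unrestricted)
// and the category filter must each permit audio.
#[derive(PartialEq)]
enum MediaType { Audiobook, Ebook }

struct Edition {
    media_type: Vec<MediaType>,
    audio_categories: Option<Vec<String>>,
}

fn allows_audio(edition: &Edition) -> bool {
    let media_ok = edition.media_type.is_empty()
        || edition.media_type.contains(&MediaType::Audiobook);
    media_ok && edition.audio_categories.as_ref().map_or(true, |c| !c.is_empty())
}

fn main() {
    let ebook_only = Edition { media_type: vec![MediaType::Ebook], audio_categories: None };
    // Without the media_type guard this edition would incorrectly allow audio.
    assert!(!allows_audio(&ebook_only));
    let open = Edition { media_type: vec![], audio_categories: None };
    assert!(allows_audio(&open));
}
```

`allow_ebook` is symmetric: swap `Audiobook` for `Ebook` and check the ebook category list instead.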
server/src/main.rs (1)

616-618: ⚠️ Potential issue | 🟡 Minor

Fix the folder-linker error context label.

Line 618 uses "link_torrents_to_library" inside the folder-linker branch, causing folder-link failures to be misclassified in logs/stats.

Proposed fix
                 let result = link_folders_to_library(config.clone(), db.clone())
                     .await
-                    .context("link_torrents_to_library");
+                    .context("link_folders_to_library");
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@server/src/main.rs` around lines 616 - 618, The error context label is wrong
for the folder-linking call: in the call to link_folders_to_library(...) you
should replace the context string "link_torrents_to_library" with
"link_folders_to_library" so folder-link failures are classified correctly;
update the .context(...) argument on the link_folders_to_library invocation to
use the correct label.
server/src/linker/torrent.rs (2)

191-194: ⚠️ Potential issue | 🟡 Minor

Preserve mam_id on RemovedFromTracker events.

Emitting None here drops the MaM join key, so mam_id-scoped event views can't correlate the removal with the stored torrent.

Suggested fix
         events.push(Event::new(
             Some(qbit_torrent.hash.clone()),
-            None,
+            torrent.meta.mam_id(),
             EventType::RemovedFromTracker,
         ));
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@server/src/linker/torrent.rs` around lines 191 - 194, The RemovedFromTracker
event is being created with None for the mam_id, dropping the MaM join key;
change the Event::new call that currently passes None (alongside
qbit_torrent.hash and EventType::RemovedFromTracker) to pass the torrent's
mam_id (e.g., Some(qbit_torrent.mam_id.clone()) or the appropriate Some(...)
expression) so the mam_id is preserved on RemovedFromTracker events and
mam_id-scoped views can correlate removals.

87-90: ⚠️ Potential issue | 🟠 Major

Don't panic on malformed torrent file paths.

calculate_library_file_path operates on qBittorrent file names. expect turns bad external input into a linker crash instead of a recoverable fallback.

Suggested fix
-    let file_name_component = components
-        .pop()
-        .expect("torrent file path should not be empty");
+    let Some(file_name_component) = components.pop() else {
+        return PathBuf::from(torrent_file_path);
+    };
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@server/src/linker/torrent.rs` around lines 87 - 90, The code in
calculate_library_file_path currently calls components().collect() then
.pop().expect(...) on torrent_path which will panic on malformed/external input;
change this to handle the optional value safely by replacing the expect with a
non-panicking branch inside calculate_library_file_path (e.g., match or if let
Some(file_name_component) = components.pop() { ... } else { log a warning via
the existing logger and return a recoverable fallback or Result::Err indicating
a bad path }), and ensure callers of calculate_library_file_path handle the
Result or fallback value accordingly so malformed torrent_path values no longer
crash the linker.
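The let-else fallback can be shown on a standalone helper; the function name and signature here are illustrative, not the crate's actual API:

```rust
use std::path::{Path, PathBuf};

// Non-panicking version of the component pop: if the torrent path is
// somehow empty, fall back to the raw input instead of crashing.
fn library_file_name(torrent_file_path: &str) -> PathBuf {
    let mut components: Vec<_> = Path::new(torrent_file_path).components().collect();
    let Some(file_name_component) = components.pop() else {
        // Malformed external input degrades to a passthrough, not a panic.
        return PathBuf::from(torrent_file_path);
    };
    PathBuf::from(file_name_component.as_os_str())
}

fn main() {
    assert_eq!(library_file_name("a/b/track01.mp3"), PathBuf::from("track01.mp3"));
    // Empty path: the fallback branch runs instead of expect() aborting.
    assert_eq!(library_file_name(""), PathBuf::from(""));
}
```

Since the path originates from qBittorrent rather than this process, treating it as untrusted input and degrading gracefully is the safer contract.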
server/src/linker/folder.rs (3)

220-221: ⚠️ Potential issue | 🟡 Minor

Include ebook files in num_files.

size and filetypes already include ebook_files, but num_files still counts audio only. Mixed folders will persist inconsistent metadata.

Suggested fix
-        num_files: audio_files.len() as u64,
+        num_files: (audio_files.len() + ebook_files.len()) as u64,

Apply the same change in both builders.

Also applies to: 288-289

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@server/src/linker/folder.rs` around lines 220 - 221, The num_files field is
currently set to audio_files.len() as u64 which omits ebook_files; change
num_files to include both audio and ebook counts by using (audio_files.len() +
ebook_files.len()) as u64 (or equivalent) where the struct is built, and apply
the same fix in the other builder that initializes num_files so both builders
count mixed folders consistently.

34-36: ⚠️ Potential issue | 🟠 Major

Skip non-directory entries before calling link_folder.

link_folder immediately does read_dir(&folder.path()). A regular file under rip_dir will abort the whole folder-linker run instead of being ignored.

Suggested fix
             let mut entries = read_dir(&l.rip_dir).await?;
             while let Some(folder) = entries.next_entry().await? {
+                if !folder.file_type().await?.is_dir() {
+                    trace!("Skipping non-directory rip entry: {}", folder.path().display());
+                    continue;
+                }
                 link_folder(&config, library, &db, folder).await?;
             }
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@server/src/linker/folder.rs` around lines 34 - 36, The loop over entries
should skip non-directory DirEntry objects before calling link_folder: after
getting each DirEntry (the variable named folder in this loop), call
folder.file_type().await? (or folder.metadata().await?.is_dir()) and continue if
it's not a directory, then only call link_folder(&config, library, &db,
folder).await?; this prevents regular files under rip_dir from causing read_dir
inside link_folder to fail.
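A synchronous sketch of the guard (the real code uses tokio's async `read_dir`): only directories survive the filter, so a stray file under the rip directory is skipped rather than fatal:

```rust
use std::fs;
use std::path::{Path, PathBuf};

// Collect only the subdirectories of `path`; regular files are skipped
// instead of aborting the whole scan.
fn dir_entries(path: &Path) -> std::io::Result<Vec<PathBuf>> {
    let mut dirs = Vec::new();
    for entry in fs::read_dir(path)? {
        let entry = entry?;
        if !entry.file_type()?.is_dir() {
            continue; // a stray file under rip_dir is ignored, not fatal
        }
        dirs.push(entry.path());
    }
    Ok(dirs)
}

fn main() -> std::io::Result<()> {
    let tmp = std::env::temp_dir().join("rip_dir_demo");
    let _ = fs::remove_dir_all(&tmp);
    fs::create_dir_all(tmp.join("book_a"))?;
    fs::write(tmp.join("stray.txt"), b"not a folder")?;
    let dirs = dir_entries(&tmp)?;
    // Only book_a qualifies; stray.txt was filtered out.
    assert_eq!(dirs.len(), 1);
    Ok(())
}
```

In the async original, the equivalent check is `folder.file_type().await?.is_dir()` before the `link_folder` call, as the suggested fix shows.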

386-395: ⚠️ Potential issue | 🟠 Major

Honor the selected formats when creating folder links.

selected_audio_format/selected_ebook_format are computed but never enforced. The loop links every audio file, skips all ebook files, and still creates a library entry when both selections are None, so the persisted selected_* fields can disagree with library_files and on-disk contents.

Suggested fix
         let selected_ebook_format = select_format(
             &library.options().ebook_types,
             &config.ebook_types,
             &ebook_files,
         );
+        if selected_audio_format.is_none() && selected_ebook_format.is_none() {
+            bail!("Could not find any wanted formats in folder");
+        }
+
+        let matches_selected = |name: &std::ffi::OsStr, selected: Option<&str>| {
+            let name_lower = name.to_string_lossy().to_lowercase();
+            selected.is_some_and(|ext| name_lower.ends_with(ext))
+        };
 
         create_dir_all(&dir).await?;
-        for file in audio_files {
+        for file in audio_files
+            .into_iter()
+            .filter(|file| matches_selected(&file.file_name(), selected_audio_format.as_deref()))
+            .chain(
+                ebook_files
+                    .into_iter()
+                    .filter(|file| matches_selected(&file.file_name(), selected_ebook_format.as_deref())),
+            )
+        {
             let file_path = PathBuf::from(&file.file_name());
             let library_path = dir.join(&file_path);
             library_files.push(file_path.clone());
             let download_path = file.path();

Also applies to: 414-435

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@server/src/linker/folder.rs` around lines 386 - 395, The
selected_audio_format and selected_ebook_format returned by select_format are
calculated but never applied when building library_files and folder links;
update the folder link creation logic (the loop that iterates
audio_files/ebook_files and the code that assembles library_files and the
persisted selected_* fields) so that audio files are included only if they match
selected_audio_format, ebook files are included only if they match
selected_ebook_format, and if both selected_* are None or no files match the
chosen formats, skip creating the library entry (or set persisted
selected_audio_format/selected_ebook_format to None to reflect reality). Ensure
you change the code paths referenced by selected_audio_format,
selected_ebook_format, select_format, and library_files (also apply the same fix
to the analogous block around the other occurrence at ~414-435) so persisted
selected_* fields always agree with the actual files linked on disk.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@mlm_db/src/v03.rs`:
- Around line 143-146: The Language enum currently derives Default and marks
English as #[default], which causes Language::default() to silently yield
English; instead either remove the Default derive from Language so code must use
Option<Language> (as used on TorrentMeta) or add a neutral variant (e.g.,
Unknown or Unspecified) and mark that variant with #[default]; update the enum
declaration (remove Default from the derive list or add the new variant and move
the #[default] attribute) and adjust any code relying on Language::default() to
handle Option<Language> or the new neutral variant accordingly.

In `@mlm_mam/Cargo.toml`:
- Around line 16-17: Update the dependency versions in Cargo.toml for openssl
and reqwest to their latest stable releases: change openssl = "0.10.73" (with
features = ["vendored"]) to 0.10.75 and change reqwest = "0.12.20" (with
features = ["json"]) to 0.13.2; after updating, run cargo build and cargo test
and address any breaking changes from reqwest 0.12 -> 0.13 (API/feature name
adjustments) in any code using the reqwest client or async runtime to ensure
compatibility.

In `@mlm_mam/src/serde.rs`:
- Around line 197-271: Add tests that exercise the TryFrom<u64> overflow path by
deserializing numeric JSON values that are in-range for u64 but exceed the
target type so TryFrom fails; specifically, for the helper num_string_or_number
add a case attempting to deserialize a value like 4294967296 into a struct using
#[serde(deserialize_with = "num_string_or_number")] val: u32 and assert the
result is an Err, and for opt_num_string_or_number add a similar case (non-null)
into a struct with val: Option<u16> to ensure the conversion from u64 to the
smaller integer returns an error; reference the functions num_string_or_number
and opt_num_string_or_number to find where to add these failure tests.
- Around line 316-324: The test test_parse_opt_date currently only checks
ok.d.is_some(); update it to assert the exact parsed date value for DateWrap.d
(e.g., equal to 2023-01-02) so the parser can't silently return the wrong date;
keep the null and invalid-date checks but replace the is_some() check with an
equality assertion comparing ok.d to Some(expected_date) (construct
expected_date with the same Date/NaiveDate type used by your DateWrap).

In `@mlm_parse/src/lib.rs`:
- Around line 72-74: The EDITION_START_REGEX alternation is missing "fourth" and
incorrectly has "eight" instead of "eighth", causing prefixes like "Fourth
Anniversary Edition" and "Eighth Revised Edition" to fail; update the static
EDITION_START_REGEX (the Lazy<Regex> initializer) to include "fourth" and
"eighth" in the word-list alternation (the group currently holding
"first|second|third|fifth|sixth|seventh|eight|ninth|tenth|new|revised|updated")
so those ordinal words are matched by the regex branch.

In `@mlm_parse/tests/parse_tests.rs`:
- Around line 11-16: The test test_normalize_title_variants is too loose (uses
starts_with) and can miss regressions in volume/format cleanup; change it to
assert the exact normalized output from normalize_title(s) (e.g., assert_eq!(n,
"amazing and strange 2 light novel")) or otherwise explicitly assert that tokens
like "vol", "vol.", "(" or ")" are not present and that the number and format
are exactly as expected; update the assertion in test_normalize_title_variants
to compare the full string returned by normalize_title.

In `@server/src/autograbber.rs`:
- Around line 685-689: The current meta.ids.extend(torrent.meta.ids.clone())
call appends stored IDs on top of the freshly fetched meta.ids, causing stale
stored IDs to win; change this to merge without overwriting fresh IDs by
iterating over torrent.meta.ids and only pushing IDs that are not already
present in meta.ids (i.e., for each id in torrent.meta.ids, if meta.ids does not
contain a matching key then meta.ids.push(id.clone())); leave the meta.tags and
meta.description handling as-is.
- Around line 734-737: The current backfill sets torrent.linker =
Some(mam_torrent.owner_name.clone()) even when mam_torrent.owner_name is empty,
which can persist Some("") and block future fills; update the condition that
checks linker_is_owner && torrent.linker.is_none() && let Some(mam_torrent) =
mam_torrent to also require that mam_torrent.owner_name.trim() is non-empty (or
!mam_torrent.owner_name.is_empty() after trimming) before assigning to
torrent.linker so only non-blank owner names are stored.

In `@server/src/config.rs`:
- Around line 239-241: The TorrentFilter.edition field is being skipped during
deserialization due to #[serde(skip)], causing EditionFilter to be defaulted and
breaking logic that reads torrent_search.filter.edition in autograbber; remove
the #[serde(skip)] and instead allow the EditionFilter to be deserialized (e.g.,
via #[serde(flatten)] on EditionFilter or by ensuring the parent config flattens
into EditionFilter) so that TorrentFilter.edition receives the flattened
category/language/flag/size constraints at deserialization time; update the
TorrentFilter struct definition to accept the edition from the parent and re-run
tests that exercise server::autograbber to verify filters are applied.

In `@server/src/linker/common.rs`:
- Around line 58-79: The code currently calls unwrap() on dir.file_name() inside
the meta.edition and meta.narrators branches (see meta.edition, meta.narrators,
exclude_narrator_in_library_dir, and dir.set_file_name using
sanitize_filename::sanitize), which can panic if dir has no file-name component;
fix by defensively obtaining the base name via dir.file_name().map(|s|
s.to_string_lossy().into_owned()).unwrap_or_else(||
dir.to_string_lossy().into_owned()) (or another safe fallback like an empty
string) and use that safe base when building the formatted string before calling
dir.set_file_name, ensuring no unwrap() remains.

In `@server/src/linker/duplicates.rs`:
- Around line 174-203: The test test_find_matches currently creates and deletes
a tmp_dir manually using std::env::temp_dir(), fs::create_dir_all and
fs::remove_dir_all which can leak on panic; replace that with tempfile::TempDir
(e.g. let tmp = tempfile::tempdir()?; let db_path = tmp.path().join("test.db");)
so the directory is automatically removed when tmp is dropped; remove the
explicit fs::remove_dir_all(tmp_dir) cleanup and keep TempDir in scope for the
duration of the test so the temporary directory lives until the test ends.

In `@server/src/linker/folder.rs`:
- Around line 63-80: The extension checks in folder.rs compare raw
path.extension() values verbatim against config.audio_types and
config.ebook_types, causing mismatches for dotted or mixed-case entries;
normalize the extension string first (e.g., let ext_str =
ext.to_string_lossy().trim_start_matches('.').to_lowercase()) and use ext_str
for comparisons (replace ext == "json" with ext_str == "json" and use
config.audio_types/ebook_types comparisons against ext_str), updating the
branches that push into metadata_files, audio_files, and ebook_files so
mixed-case or dotted config entries match correctly.

In `@server/src/linker/torrent.rs`:
- Around line 76-77: DISK_PATTERN only matches capitalized "CD|Disc|Disk", so
lowercase names like "cd1" are missed; update the Regex used in the static
DISK_PATTERN (created via Lazy and Regex::new) to be case-insensitive (e.g. add
the (?i) flag or equivalent) while preserving the existing capture group for the
disc number so multi-disc directories are detected correctly.
- Around line 129-136: The filter in the for loop that checks
selected_audio_format and selected_ebook_format uses file.name verbatim, causing
mismatches for uppercase filenames; update the predicate in the loop (inside the
for file in files block) to perform a case-insensitive comparison by converting
the filename to lowercase before calling ends_with (e.g., use
file.name.to_lowercase().ends_with(ext) or otherwise lowercasing once into a
variable), keeping the existing selected_audio_format and selected_ebook_format
checks and symbols unchanged.
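Two of the findings above (raw `extension()` comparison in folder.rs and case-sensitive `ends_with` in torrent.rs) share one fix: normalize both sides before comparing. A minimal std-only sketch; the `matches_type` helper and the sample type lists are illustrative, not part of the codebase:

```rust
use std::path::Path;

// Normalize the on-disk extension and the configured type before comparing,
// so config entries like ".MP3" or "Epub" still match actual files.
fn matches_type(path: &Path, types: &[&str]) -> bool {
    let Some(ext) = path.extension() else {
        return false;
    };
    let ext = ext.to_string_lossy().to_lowercase();
    types
        .iter()
        .any(|t| t.trim_start_matches('.').to_lowercase() == ext)
}

fn main() {
    let audio_types = [".MP3", "m4b"];
    assert!(matches_type(Path::new("rip/track01.mp3"), &audio_types));
    assert!(matches_type(Path::new("rip/book.M4B"), &audio_types));
    assert!(!matches_type(Path::new("rip/cover.jpg"), &audio_types));
    assert!(!matches_type(Path::new("rip/noext"), &audio_types));
}
```

Lowercasing once per comparison keeps the config free-form (dotted, mixed-case) without touching stored values.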

---

Outside diff comments:
In `@server/src/qbittorrent.rs`:
- Around line 168-170: The retry block around qbit.create_category(category,
"").await should include a brief clarifying comment explaining that the first
Err is likely due to a race where another task created the same category, so the
second call retries and maps the original error on failure; add a one-line
comment above this block referencing the race-condition/retry intent and noting
that the second call intentionally swallows the new error and returns the
original error via map_err, so future readers understand why the duplicate
create is retried rather than treated as a hard failure.

In `@server/src/web/pages/config.rs`:
- Around line 176-182: The code computes sort_type based on self.kind (Type::New
-> "dateDesc", otherwise ""), but then mistakenly appends search_type to the
query; update the call to query.append_pair("tor[sort_type]", ...) to pass
sort_type instead of search_type (the relevant symbols: sort_type, self.kind /
Type::New, and query.append_pair).
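The retry-on-race intent described for create_category above can be sketched generically. This is a hedged illustration of the pattern, not the qbittorrent client code; `create_with_retry` is a hypothetical helper:

```rust
// On a first failure (likely a race where another task created the same
// category), try once more; if the retry also fails, surface the original
// error rather than the retry's error.
fn create_with_retry<E>(mut create: impl FnMut() -> Result<(), E>) -> Result<(), E> {
    match create() {
        Ok(()) => Ok(()),
        Err(first) => create().map_err(|_| first),
    }
}

fn main() {
    let mut calls = 0;
    let result = create_with_retry(|| {
        calls += 1;
        if calls == 1 { Err("duplicate category") } else { Ok(()) }
    });
    // The second attempt succeeds, so the first error is swallowed.
    assert_eq!(result, Ok(()));
    assert_eq!(calls, 2);
}
```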

---

Duplicate comments:
In `@mlm_parse/src/lib.rs`:
- Around line 141-146: The series_number extraction currently uses the ?
operator on the captures chain (let series_number =
captures.get(2).or(...).or(...)?) which causes the whole parse to return None
when the optional series number is absent; change it to produce an Option
instead by removing the ? and converting the optional Match to an Option<String>
(e.g., map the captures.get(...).or(... ) result via .map(|m|
m.as_str().to_string()) or similar) so the function returns Some((series, None))
for inputs without a book number; update any downstream usage of series_number
to handle Option accordingly (symbols: series_number, captures).
- Around line 435-448: The two tests test_parse_series_pattern5_standalone_novel
and test_parse_series_pattern6_collection are empty and provide no coverage;
replace their comment-only bodies with real assertions that exercise the parser
(call the crate's series parsing function used elsewhere, e.g. parse_series or
the public parse_title function) using representative input strings for each
pattern, then assert the returned capture/struct fields match expected values
(for pattern 5 ensure the parser captures the series name separately from the
"Standalone"/"Novel" suffix; for pattern 6 ensure the parser captures the
collection name in the expected capture/group index), or if these patterns are
intentionally unsupported, mark the tests #[ignore] or remove them—update the
test bodies to use assert_eq! or assert!(...) so the optional-number/parser
regressions are actually tested.

In `@server/src/autograbber.rs`:
- Around line 783-794: The Updated event currently sets mam_id from
mam_torrent.map(|m| m.id) which becomes None for library-only refreshes even
though meta.mam_id() still has the correct id; change the mam_id assignment used
in the Event::new call to prefer mam_torrent.map(|m| m.id) but fall back to
meta.mam_id() (i.e., mam_torrent.map(|m| m.id).or_else(|| meta.mam_id())) so
write_event/Event::new/EventType::Updated receives the existing mam_id when
mam_torrent is absent.

In `@server/src/config_impl.rs`:
- Around line 377-393: The allow_audio and allow_ebook predicates currently only
inspect edition.categories and therefore can return true even when
edition.media_type explicitly rules out the corresponding format; update both
functions (allow_audio, allow_ebook) to also check g.edition.media_type and
ensure it is not an explicit non-matching type (e.g., treat editions whose
media_type disallows Audiobook for allow_audio or disallows Ebook for
allow_ebook as false), combining that check with the existing categories logic
so the any(...) closure returns true only when both the media_type permits the
format and the categories allow it.

In `@server/src/linker/folder.rs`:
- Around line 220-221: The num_files field is currently set to audio_files.len()
as u64 which omits ebook_files; change num_files to include both audio and ebook
counts by using (audio_files.len() + ebook_files.len()) as u64 (or equivalent)
where the struct is built, and apply the same fix in the other builder that
initializes num_files so both builders count mixed folders consistently.
- Around line 34-36: The loop over entries should skip non-directory DirEntry
objects before calling link_folder: after getting each DirEntry (the variable
named folder in this loop), call folder.file_type().await? (or
folder.metadata().await?.is_dir()) and continue if it's not a directory, then
only call link_folder(&config, library, &db, folder).await?; this prevents
regular files under rip_dir from causing read_dir inside link_folder to fail.
- Around line 386-395: The selected_audio_format and selected_ebook_format
returned by select_format are calculated but never applied when building
library_files and folder links; update the folder link creation logic (the loop
that iterates audio_files/ebook_files and the code that assembles library_files
and the persisted selected_* fields) so that audio files are included only if
they match selected_audio_format, ebook files are included only if they match
selected_ebook_format, and if both selected_* are None or no files match the
chosen formats, skip creating the library entry (or set persisted
selected_audio_format/selected_ebook_format to None to reflect reality). Ensure
you change the code paths referenced by selected_audio_format,
selected_ebook_format, select_format, and library_files (also apply the same fix
to the analogous block around the other occurrence at ~414-435) so persisted
selected_* fields always agree with the actual files linked on disk.

In `@server/src/linker/torrent.rs`:
- Around line 191-194: The RemovedFromTracker event is being created with None
for the mam_id, dropping the MaM join key; change the Event::new call that
currently passes None (alongside qbit_torrent.hash and
EventType::RemovedFromTracker) to pass the torrent's mam_id (e.g.,
Some(qbit_torrent.mam_id.clone()) or the appropriate Some(...) expression) so
the mam_id is preserved on RemovedFromTracker events and mam_id-scoped views can
correlate removals.
- Around line 87-90: The code in calculate_library_file_path currently calls
components().collect() then .pop().expect(...) on torrent_path which will panic
on malformed/external input; change this to handle the optional value safely by
replacing the expect with a non-panicking branch inside
calculate_library_file_path (e.g., match or if let Some(file_name_component) =
components.pop() { ... } else { log a warning via the existing logger and return
a recoverable fallback or Result::Err indicating a bad path }), and ensure
callers of calculate_library_file_path handle the Result or fallback value
accordingly so malformed torrent_path values no longer crash the linker.

In `@server/src/main.rs`:
- Around line 616-618: The error context label is wrong for the folder-linking
call: in the call to link_folders_to_library(...) you should replace the context
string "link_torrents_to_library" with "link_folders_to_library" so folder-link
failures are classified correctly; update the .context(...) argument on the
link_folders_to_library invocation to use the correct label.
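The components().pop().expect(...) finding above reduces to handling a missing final path component without panicking. A std-only sketch with a hypothetical `final_component` helper standing in for the logic inside calculate_library_file_path:

```rust
use std::path::Path;

// Return the final path component if one exists, instead of panicking on
// inputs like ".." or "/" that have no file-name component.
fn final_component(path: &Path) -> Option<String> {
    path.file_name().map(|n| n.to_string_lossy().into_owned())
}

fn main() {
    assert_eq!(
        final_component(Path::new("downloads/book/file.m4b")),
        Some("file.m4b".to_string())
    );
    // Malformed or external input: no panic; the caller picks the fallback.
    assert_eq!(final_component(Path::new("..")), None);
    assert_eq!(final_component(Path::new("/")), None);
}
```

Returning Option (or Result) pushes the decision to the caller, where a warning can be logged instead of crashing the linker.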

ℹ️ Review info
⚙️ Run configuration

Configuration used: Organization UI

Review profile: ASSERTIVE

Plan: Pro

Run ID: 48eea1d7-460f-474c-a4ef-5c7bfe52171b

📥 Commits

Reviewing files that changed from the base of the PR and between c0e3156 and 2eb5b91.

⛔ Files ignored due to path filters (1)
  • Cargo.lock is excluded by !**/*.lock
📒 Files selected for processing (40)
  • mlm_db/src/impls/series.rs
  • mlm_db/src/lib.rs
  • mlm_db/src/v03.rs
  • mlm_db/src/v09.rs
  • mlm_db/src/v13.rs
  • mlm_db/src/v18.rs
  • mlm_db/tests/meta_diff.rs
  • mlm_mam/Cargo.toml
  • mlm_mam/src/serde.rs
  • mlm_parse/src/lib.rs
  • mlm_parse/tests/parse_tests.rs
  • server/Cargo.toml
  • server/src/autograbber.rs
  • server/src/cleaner.rs
  • server/src/config.rs
  • server/src/config_impl.rs
  • server/src/lib.rs
  • server/src/linker.rs
  • server/src/linker/common.rs
  • server/src/linker/duplicates.rs
  • server/src/linker/folder.rs
  • server/src/linker/mod.rs
  • server/src/linker/torrent.rs
  • server/src/lists/mod.rs
  • server/src/main.rs
  • server/src/qbittorrent.rs
  • server/src/snatchlist.rs
  • server/src/stats.rs
  • server/src/web/pages/config.rs
  • server/src/web/pages/index.rs
  • server/src/web/pages/torrent_edit.rs
  • server/templates/pages/config.html
  • server/templates/pages/index.html
  • server/templates/pages/torrent_edit.html
  • server/templates/partials/filter.html
  • server/tests/cleaner_test.rs
  • server/tests/common/mod.rs
  • server/tests/config_impl.rs
  • server/tests/linker_folder_test.rs
  • server/tests/linker_torrent_test.rs
💤 Files with no reviewable changes (1)
  • server/src/linker.rs

Comment on lines 143 to 146
#[derive(Clone, Copy, Debug, Default, Serialize, Deserialize, PartialEq, Eq, PartialOrd, Ord)]
pub enum Language {
    #[default]
    English,

🛠️ Refactor suggestion | 🟠 Major

Don't use a concrete language as the type's default.

Lines 143-146 make Language::default() return English, but this file already models “unknown/not provided” with Option<Language> on TorrentMeta. That makes the new Default an easy footgun for generic/default-based code to silently convert missing metadata into English, which can skew downstream filtering/linker behavior. If a Default bound is required elsewhere, use a neutral Unknown/Unspecified variant instead; otherwise, drop Default from Language.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@mlm_db/src/v03.rs` around lines 143 - 146, The Language enum currently
derives Default and marks English as #[default], which causes
Language::default() to silently yield English; instead either remove the Default
derive from Language so code must use Option<Language> (as used on TorrentMeta)
or add a neutral variant (e.g., Unknown or Unspecified) and mark that variant
with #[default]; update the enum declaration (remove Default from the derive
list or add the new variant and move the #[default] attribute) and adjust any
code relying on Language::default() to handle Option<Language> or the new
neutral variant accordingly.
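If a Default bound is genuinely needed, the neutral-variant option can be sketched as below; the `Unknown` variant and the trimmed-down variant list are assumptions for illustration, not the actual mlm_db enum:

```rust
// Hypothetical sketch: a neutral #[default] variant, so default-construction
// no longer silently claims English for missing metadata.
#[derive(Clone, Copy, Debug, Default, PartialEq, Eq)]
pub enum Language {
    #[default]
    Unknown,
    English,
    Swedish,
}

fn main() {
    assert_eq!(Language::default(), Language::Unknown);
    assert_ne!(Language::default(), Language::English);
}
```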

Comment on lines +16 to +17
openssl = { version = "0.10.73", features = ["vendored"] }
reqwest = { version = "0.12.20", features = ["json"] }

🧹 Nitpick | 🔵 Trivial

🧩 Analysis chain

🏁 Script executed:

#!/bin/bash
# Description: Verify openssl and reqwest crate versions exist and check for advisories

# Check if openssl 0.10.73 exists on crates.io
echo "=== Checking openssl 0.10.73 ==="
curl -s "https://crates.io/api/v1/crates/openssl/0.10.73" | jq '{name: .version.crate, version: .version.num, published: .version.created_at}'

# Check if reqwest 0.12.20 exists on crates.io
echo "=== Checking reqwest 0.12.20 ==="
curl -s "https://crates.io/api/v1/crates/reqwest/0.12.20" | jq '{name: .version.crate, version: .version.num, published: .version.created_at}'

# Check latest versions for comparison
echo "=== Latest openssl version ==="
curl -s "https://crates.io/api/v1/crates/openssl" | jq '.crate.max_version'

echo "=== Latest reqwest version ==="
curl -s "https://crates.io/api/v1/crates/reqwest" | jq '.crate.max_version'

Repository: StirlingMouse/MLM

Length of output: 395


Consider updating both dependencies to the latest versions.

The changes add the openssl crate with the vendored feature (for consistent cross-platform builds) and enable the json feature for reqwest (for JSON serialization). Both are appropriate for an HTTP client library.

Both versions exist and are valid on crates.io, but are outdated:

  • openssl: 0.10.73 (latest: 0.10.75)
  • reqwest: 0.12.20 (latest: 0.13.2)

Update to the latest stable versions, particularly reqwest 0.13.2, which is a major version bump and may require code adjustments.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@mlm_mam/Cargo.toml` around lines 16 - 17, Update the dependency versions in
Cargo.toml for openssl and reqwest to their latest stable releases: change
openssl = "0.10.73" (with features = ["vendored"]) to 0.10.75 and change reqwest
= "0.12.20" (with features = ["json"]) to 0.13.2; after updating, run cargo
build and cargo test and address any breaking changes from reqwest 0.12 -> 0.13
(API/feature name adjustments) in any code using the reqwest client or async
runtime to ensure compatibility.

Comment on lines +197 to +271
    #[derive(Deserialize)]
    struct NumWrap {
        #[serde(deserialize_with = "num_string_or_number")]
        val: u32,
    }

    #[test]
    fn test_num_string_or_number() {
        assert_eq!(
            serde_json::from_str::<NumWrap>(r#"{"val":"42"}"#)
                .unwrap()
                .val,
            42
        );
        assert_eq!(
            serde_json::from_str::<NumWrap>(r#"{"val":42}"#)
                .unwrap()
                .val,
            42
        );
        assert!(serde_json::from_str::<NumWrap>(r#"{"val":"notanumber"}"#).is_err());
    }

    #[derive(Deserialize)]
    struct OptNumWrap {
        #[serde(deserialize_with = "opt_num_string_or_number")]
        val: Option<u64>,
    }

    #[test]
    fn test_opt_num_and_string_or_number() {
        assert_eq!(
            serde_json::from_str::<OptNumWrap>(r#"{"val":"100"}"#)
                .unwrap()
                .val,
            Some(100)
        );
        assert_eq!(
            serde_json::from_str::<OptNumWrap>(r#"{"val":100}"#)
                .unwrap()
                .val,
            Some(100)
        );
        assert_eq!(
            serde_json::from_str::<OptNumWrap>(r#"{"val":null}"#)
                .unwrap()
                .val,
            None
        );

        #[derive(Deserialize)]
        struct OptStrWrap {
            #[serde(deserialize_with = "opt_string_or_number")]
            val: Option<String>,
        }

        assert_eq!(
            serde_json::from_str::<OptStrWrap>(r#"{"val":"abc"}"#)
                .unwrap()
                .val,
            Some("abc".to_string())
        );
        assert_eq!(
            serde_json::from_str::<OptStrWrap>(r#"{"val":123}"#)
                .unwrap()
                .val,
            Some("123".to_string())
        );
        assert_eq!(
            serde_json::from_str::<OptStrWrap>(r#"{"val":null}"#)
                .unwrap()
                .val,
            None
        );
    }

🧹 Nitpick | 🔵 Trivial

Add overflow coverage for the TryFrom<u64> branch.

Right now these tests only cover in-range values and parse failures. They do not exercise the separate error path where deserialization reaches u64 successfully but conversion into T fails, which is the critical branch in both generic helpers.

♻️ Suggested test additions
     #[test]
     fn test_num_string_or_number() {
         assert_eq!(
             serde_json::from_str::<NumWrap>(r#"{"val":"42"}"#)
                 .unwrap()
                 .val,
             42
         );
         assert_eq!(
             serde_json::from_str::<NumWrap>(r#"{"val":42}"#)
                 .unwrap()
                 .val,
             42
         );
         assert!(serde_json::from_str::<NumWrap>(r#"{"val":"notanumber"}"#).is_err());
+
+        #[derive(Deserialize)]
+        struct SmallNumWrap {
+            #[serde(deserialize_with = "num_string_or_number")]
+            val: u8,
+        }
+
+        assert!(serde_json::from_str::<SmallNumWrap>(r#"{"val":256}"#).is_err());
     }
@@
     fn test_opt_num_and_string_or_number() {
         assert_eq!(
             serde_json::from_str::<OptNumWrap>(r#"{"val":"100"}"#)
                 .unwrap()
                 .val,
             Some(100)
         );
@@
         assert_eq!(
             serde_json::from_str::<OptStrWrap>(r#"{"val":null}"#)
                 .unwrap()
                 .val,
             None
         );
+
+        #[derive(Deserialize)]
+        struct OptSmallNumWrap {
+            #[serde(deserialize_with = "opt_num_string_or_number")]
+            val: Option<u8>,
+        }
+
+        assert!(serde_json::from_str::<OptSmallNumWrap>(r#"{"val":256}"#).is_err());
     }
📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change
    #[derive(Deserialize)]
    struct NumWrap {
        #[serde(deserialize_with = "num_string_or_number")]
        val: u32,
    }

    #[test]
    fn test_num_string_or_number() {
        assert_eq!(
            serde_json::from_str::<NumWrap>(r#"{"val":"42"}"#)
                .unwrap()
                .val,
            42
        );
        assert_eq!(
            serde_json::from_str::<NumWrap>(r#"{"val":42}"#)
                .unwrap()
                .val,
            42
        );
        assert!(serde_json::from_str::<NumWrap>(r#"{"val":"notanumber"}"#).is_err());

        #[derive(Deserialize)]
        struct SmallNumWrap {
            #[serde(deserialize_with = "num_string_or_number")]
            val: u8,
        }

        assert!(serde_json::from_str::<SmallNumWrap>(r#"{"val":256}"#).is_err());
    }

    #[derive(Deserialize)]
    struct OptNumWrap {
        #[serde(deserialize_with = "opt_num_string_or_number")]
        val: Option<u64>,
    }

    #[test]
    fn test_opt_num_and_string_or_number() {
        assert_eq!(
            serde_json::from_str::<OptNumWrap>(r#"{"val":"100"}"#)
                .unwrap()
                .val,
            Some(100)
        );
        assert_eq!(
            serde_json::from_str::<OptNumWrap>(r#"{"val":100}"#)
                .unwrap()
                .val,
            Some(100)
        );
        assert_eq!(
            serde_json::from_str::<OptNumWrap>(r#"{"val":null}"#)
                .unwrap()
                .val,
            None
        );

        #[derive(Deserialize)]
        struct OptStrWrap {
            #[serde(deserialize_with = "opt_string_or_number")]
            val: Option<String>,
        }

        assert_eq!(
            serde_json::from_str::<OptStrWrap>(r#"{"val":"abc"}"#)
                .unwrap()
                .val,
            Some("abc".to_string())
        );
        assert_eq!(
            serde_json::from_str::<OptStrWrap>(r#"{"val":123}"#)
                .unwrap()
                .val,
            Some("123".to_string())
        );
        assert_eq!(
            serde_json::from_str::<OptStrWrap>(r#"{"val":null}"#)
                .unwrap()
                .val,
            None
        );

        #[derive(Deserialize)]
        struct OptSmallNumWrap {
            #[serde(deserialize_with = "opt_num_string_or_number")]
            val: Option<u8>,
        }

        assert!(serde_json::from_str::<OptSmallNumWrap>(r#"{"val":256}"#).is_err());
    }
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@mlm_mam/src/serde.rs` around lines 197 - 271, Add tests that exercise the
TryFrom<u64> overflow path by deserializing numeric JSON values that are
in-range for u64 but exceed the target type so TryFrom fails; specifically, for
the helper num_string_or_number add a case attempting to deserialize a value
like 4294967296 into a struct using #[serde(deserialize_with =
"num_string_or_number")] val: u32 and assert the result is an Err, and for
opt_num_string_or_number add a similar case (non-null) into a struct with val:
Option<u16> to ensure the conversion from u64 to the smaller integer returns an
error; reference the functions num_string_or_number and opt_num_string_or_number
to find where to add these failure tests.
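The overflow branch those tests should cover boils down to TryFrom<u64> failing for values that fit in u64 but exceed the target type. A minimal std-only illustration (the `narrow` helper is hypothetical, standing in for the conversion step inside the serde helpers):

```rust
// 4294967296 fits in u64 but is u32::MAX + 1, so the conversion fails --
// the same branch num_string_or_number maps into a serde error.
fn narrow(v: u64) -> Result<u32, std::num::TryFromIntError> {
    u32::try_from(v)
}

fn main() {
    assert_eq!(narrow(42).unwrap(), 42);
    assert!(narrow(u32::MAX as u64).is_ok());
    assert!(narrow(4_294_967_296).is_err());
}
```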

Comment on lines +316 to +324
#[test]
fn test_parse_opt_date() {
    let ok = serde_json::from_str::<DateWrap>(r#"{"d":"2023-01-02"}"#).unwrap();
    assert!(ok.d.is_some());
    let none = serde_json::from_str::<DateWrap>(r#"{"d":null}"#).unwrap();
    assert!(none.d.is_none());
    // invalid date should error
    assert!(serde_json::from_str::<DateWrap>(r#"{"d":"not-a-date"}"#).is_err());
}

⚠️ Potential issue | 🟡 Minor

Assert the parsed Date, not just Some.

is_some() will still pass if the parser starts returning the wrong date value. This helper is simple enough that the test should lock the exact result down.

💡 Suggested assertion upgrade
 mod tests {
     use super::*;
     use serde::Deserialize;
+    use time::Month;
@@
     #[test]
     fn test_parse_opt_date() {
         let ok = serde_json::from_str::<DateWrap>(r#"{"d":"2023-01-02"}"#).unwrap();
-        assert!(ok.d.is_some());
+        assert_eq!(
+            ok.d,
+            Some(Date::from_calendar_date(2023, Month::January, 2).unwrap())
+        );
         let none = serde_json::from_str::<DateWrap>(r#"{"d":null}"#).unwrap();
         assert!(none.d.is_none());
         // invalid date should error
         assert!(serde_json::from_str::<DateWrap>(r#"{"d":"not-a-date"}"#).is_err());
     }
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@mlm_mam/src/serde.rs` around lines 316 - 324, The test test_parse_opt_date
currently only checks ok.d.is_some(); update it to assert the exact parsed date
value for DateWrap.d (e.g., equal to 2023-01-02) so the parser can't silently
return the wrong date; keep the null and invalid-date checks but replace the
is_some() check with an equality assertion comparing ok.d to Some(expected_date)
(construct expected_date with the same Date/NaiveDate type used by your
DateWrap).
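The weakness of `is_some()` can be shown with plain std types, independent of the crate's date parser. This sketch uses a hypothetical `parse_day` helper over `str::parse` to stand in for the real deserializer; the principle — assert the exact value, not just presence — is the same:

```rust
// Hypothetical stand-in for the date deserializer: extracts the day
// component of a "YYYY-MM-DD" string.
fn parse_day(s: &str) -> Option<u8> {
    s.split('-').nth(2).and_then(|d| d.parse().ok())
}

fn main() {
    let d = parse_day("2023-01-02");
    assert!(d.is_some()); // loose: would also pass for Some(99)
    assert_eq!(d, Some(2)); // exact: locks the parsed value down
    assert_eq!(parse_day("not-a-date"), None); // malformed input yields None
}
```

The exact-value assertion is what turns the test into a regression guard: a parser that starts returning the wrong day still satisfies `is_some()`, but fails `assert_eq!`.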

Comment on lines 72 to 74
static EDITION_START_REGEX: Lazy<Regex> = Lazy::new(|| {
Regex::new(r"(?i)((\d+(?:st|nd|rd|th)|first|second|third|fifth|sixth|seventh|eight|ninth|tenth|new|revised|updated)\s+(?:[a-z']+\s+)*(?:Edition|ed\.)|(\w+?)\s+(?:Edition|ed\.))").unwrap()
Regex::new(r"(?i)((\d+(?:st|nd|rd|th)|first|second|third|fifth|sixth|seventh|eight|ninth|tenth|new|revised|updated)\s+(?:[a-z']+\s+)*(?:Edition|ed\.|utgåva)|(\w+?)\s+(?:Edition|ed\.|utgåva))").unwrap()
});

⚠️ Potential issue | 🟡 Minor

Restore fourth and eighth in EDITION_START_REGEX.

Line 73 skips fourth and spells eighth as eight, so multi-word prefixes like Fourth Anniversary Edition and Eighth Revised Edition no longer match this branch.

Suggested fix
 static EDITION_START_REGEX: Lazy<Regex> = Lazy::new(|| {
-    Regex::new(r"(?i)((\d+(?:st|nd|rd|th)|first|second|third|fifth|sixth|seventh|eight|ninth|tenth|new|revised|updated)\s+(?:[a-z']+\s+)*(?:Edition|ed\.|utgåva)|(\w+?)\s+(?:Edition|ed\.|utgåva))").unwrap()
+    Regex::new(r"(?i)((\d+(?:st|nd|rd|th)|first|second|third|fourth|fifth|sixth|seventh|eighth|ninth|tenth|new|revised|updated)\s+(?:[a-z']+\s+)*(?:Edition|ed\.|utgåva)|(\w+?)\s+(?:Edition|ed\.|utgåva))").unwrap()
 });
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@mlm_parse/src/lib.rs` around lines 72 - 74, The EDITION_START_REGEX
alternation is missing "fourth" and incorrectly has "eight" instead of "eighth",
causing prefixes like "Fourth Anniversary Edition" and "Eighth Revised Edition"
to fail; update the static EDITION_START_REGEX (the Lazy<Regex> initializer) to
include "fourth" and "eighth" in the word-list alternation (the group currently
holding
"first|second|third|fifth|sixth|seventh|eight|ninth|tenth|new|revised|updated")
so those ordinal words are matched by the regex branch.

Comment on lines +58 to +79
if let Some((edition, _)) = &meta.edition {
dir.set_file_name(
sanitize_filename::sanitize(format!(
"{}, {}",
dir.file_name().unwrap().to_string_lossy(),
edition
))
.to_string(),
);
}
if let Some(narrator) = meta.narrators.first()
&& !exclude_narrator_in_library_dir
{
dir.set_file_name(
sanitize_filename::sanitize(format!(
"{} {{{}}}",
dir.file_name().unwrap().to_string_lossy(),
narrator
))
.to_string(),
);
}

🧹 Nitpick | 🔵 Trivial

Potential panic if dir has no file name component.

The unwrap() calls on dir.file_name() at Lines 62 and 74 could panic if dir somehow ends up as an empty path or root. While sanitize_filename should prevent this in practice, consider defensive handling:

Suggested defensive check
     if let Some((edition, _)) = &meta.edition {
+        let Some(file_name) = dir.file_name() else {
+            return None;
+        };
         dir.set_file_name(
             sanitize_filename::sanitize(format!(
                 "{}, {}",
-                dir.file_name().unwrap().to_string_lossy(),
+                file_name.to_string_lossy(),
                 edition
             ))
             .to_string(),
         );
     }
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@server/src/linker/common.rs` around lines 58 - 79, The code currently calls
unwrap() on dir.file_name() inside the meta.edition and meta.narrators branches
(see meta.edition, meta.narrators, exclude_narrator_in_library_dir, and
dir.set_file_name using sanitize_filename::sanitize), which can panic if dir has
no file-name component; fix by defensively obtaining the base name via
dir.file_name().map(|s| s.to_string_lossy().into_owned()).unwrap_or_else(||
dir.to_string_lossy().into_owned()) (or another safe fallback like an empty
string) and use that safe base when building the formatted string before calling
dir.set_file_name, ensuring no unwrap() remains.

Comment on lines +174 to +203
#[tokio::test]
async fn test_find_matches() -> Result<()> {
let tmp_dir = std::env::temp_dir().join(format!("mlm_test_duplicates_{}", std::process::id()));
let _ = fs::remove_dir_all(&tmp_dir);
fs::create_dir_all(&tmp_dir)?;
let db_path = tmp_dir.join("test.db");

let db = native_db::Builder::new().create(&mlm_db::MODELS, &db_path)?;
mlm_db::migrate(&db)?;

let t1 = create_test_torrent("1", "My Book", vec!["m4b".to_string()], 100);
let t2 = create_test_torrent("2", "My Book", vec!["mp3".to_string()], 150);
let t3 = create_test_torrent("3", "Other Book", vec!["m4b".to_string()], 100);

{
let rw = db.rw_transaction()?;
rw.insert(t1.clone())?;
rw.insert(t2.clone())?;
rw.insert(t3.clone())?;
rw.commit()?;
}

let matches = find_matches(&db, &t1)?;
assert_eq!(matches.len(), 1);
assert_eq!(matches[0].id, "2");

drop(db);
let _ = fs::remove_dir_all(tmp_dir);
Ok(())
}

🧹 Nitpick | 🔵 Trivial

Consider using tempfile crate for test directory management.

The current approach with temp_dir().join(format!(...)) works but doesn't guarantee cleanup on panic. The tempfile crate provides TempDir that auto-cleans.

♻️ Suggested improvement
+    use tempfile::TempDir;
+    
     #[tokio::test]
     async fn test_find_matches() -> Result<()> {
-        let tmp_dir = std::env::temp_dir().join(format!("mlm_test_duplicates_{}", std::process::id()));
-        let _ = fs::remove_dir_all(&tmp_dir);
-        fs::create_dir_all(&tmp_dir)?;
-        let db_path = tmp_dir.join("test.db");
+        let tmp_dir = TempDir::new()?;
+        let db_path = tmp_dir.path().join("test.db");
         
         // ... test body ...
         
-        drop(db);
-        let _ = fs::remove_dir_all(tmp_dir);
+        drop(db);
+        // TempDir auto-cleans on drop
         Ok(())
     }
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@server/src/linker/duplicates.rs` around lines 174 - 203, The test
test_find_matches currently creates and deletes a tmp_dir manually using
std::env::temp_dir(), fs::create_dir_all and fs::remove_dir_all which can leak
on panic; replace that with tempfile::TempDir (e.g. let tmp =
tempfile::tempdir()?; let db_path = tmp.path().join("test.db");) so the
directory is automatically removed when tmp is dropped; remove the explicit
fs::remove_dir_all(tmp_dir) cleanup and keep TempDir in scope for the duration
of the test so the temporary directory lives until the test ends.

Comment on lines +76 to +77
pub static DISK_PATTERN: Lazy<Regex> =
Lazy::new(|| Regex::new(r"(?:CD|Disc|Disk)\s*(\d+)").unwrap());

⚠️ Potential issue | 🟠 Major

Handle lowercase disc directory names.

DISK_PATTERN only matches CD|Disc|Disk with that exact casing. Torrents using cd1 or disc 2 get flattened to the bare filename, which can collapse multiple discs into one directory.

Suggested fix
 pub static DISK_PATTERN: Lazy<Regex> =
-    Lazy::new(|| Regex::new(r"(?:CD|Disc|Disk)\s*(\d+)").unwrap());
+    Lazy::new(|| Regex::new(r"(?i)(?:CD|Disc|Disk)\s*(\d+)").unwrap());
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@server/src/linker/torrent.rs` around lines 76 - 77, DISK_PATTERN only matches
capitalized "CD|Disc|Disk", so lowercase names like "cd1" are missed; update the
Regex used in the static DISK_PATTERN (created via Lazy and Regex::new) to be
case-insensitive (e.g. add the (?i) flag or equivalent) while preserving the
existing capture group for the disc number so multi-disc directories are
detected correctly.

Comment on lines +129 to +136
for file in files {
if !(selected_audio_format
.as_ref()
.is_some_and(|ext| file.name.ends_with(*ext))
|| selected_ebook_format
.as_ref()
.is_some_and(|ext| file.name.ends_with(*ext)))
{

⚠️ Potential issue | 🟠 Major

Apply selected-format filtering case-insensitively.

select_format lowercases candidate names before picking .m4b/.epub, but this filter compares against file.name verbatim. TRACK.M4B can therefore select a format and still produce zero link plans.

Suggested fix
     let mut plans = vec![];
     for file in files {
+        let name_lower = file.name.to_lowercase();
         if !(selected_audio_format
             .as_ref()
-            .is_some_and(|ext| file.name.ends_with(*ext))
+            .is_some_and(|ext| name_lower.ends_with(ext))
             || selected_ebook_format
                 .as_ref()
-                .is_some_and(|ext| file.name.ends_with(*ext)))
+                .is_some_and(|ext| name_lower.ends_with(ext)))
         {
             continue;
         }
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@server/src/linker/torrent.rs` around lines 129 - 136, The filter in the for
loop that checks selected_audio_format and selected_ebook_format uses file.name
verbatim, causing mismatches for uppercase filenames; update the predicate in
the loop (inside the for file in files block) to perform a case-insensitive
comparison by converting the filename to lowercase before calling ends_with
(e.g., use file.name.to_lowercase().ends_with(ext) or otherwise lowercasing once
into a variable), keeping the existing selected_audio_format and
selected_ebook_format checks and symbols unchanged.
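The fix above can be reduced to a small sketch: lowercase the candidate name once, then compare suffixes. `matches_format` and the extension lists are illustrative, not the linker's actual API:

```rust
// Case-insensitive suffix filter: lowercase once, then compare.
// Extensions are assumed to already be lowercase (as select_format
// produces them).
fn matches_format(name: &str, exts: &[&str]) -> bool {
    let lower = name.to_lowercase();
    exts.iter().any(|ext| lower.ends_with(ext))
}

fn main() {
    // Uppercase filenames now match the selected format.
    assert!(matches_format("TRACK.M4B", &[".m4b"]));
    assert!(matches_format("book.epub", &[".m4b", ".epub"]));
    // Unrelated files are still filtered out.
    assert!(!matches_format("cover.jpg", &[".m4b", ".epub"]));
}
```

Lowercasing into a local once per file also avoids allocating twice when both the audio and ebook checks run.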

@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 8

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (4)
server/templates/pages/torrent_edit.html (2)

32-36: 🧹 Nitpick | 🔵 Trivial

Add selected attribute to the "-- None --" option when language is None.

The "-- None --" option should be explicitly marked as selected when torrent.meta.language is None. While browsers default to the first option when none are selected, explicitly setting selected improves semantic correctness and accessibility.

💡 Suggested improvement
-      <option value="">-- None --</option>
+      <option value="" {% if torrent.meta.language.is_none() %}selected{% endif %}>-- None --</option>
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@server/templates/pages/torrent_edit.html` around lines 32 - 36, The "-- None
--" option lacks an explicit selected attribute when torrent.meta.language is
None; update the first option element so it includes a conditional selected
attribute driven by torrent.meta.language (e.g., add a template conditional that
renders selected when torrent.meta.language == None) to mirror the existing
selected logic used for language options (see Language::all() loop and the {% if
torrent.meta.language == Some(*language) %} condition).

41-48: 🧹 Nitpick | 🔵 Trivial

Add selected attribute to the "-- None --" option when category is None.

Same consideration applies here—explicitly mark the empty option as selected when torrent.meta.cat is None.

💡 Suggested improvement
-      <option value="">-- None --</option>
+      <option value="" {% if torrent.meta.cat.is_none() %}selected{% endif %}>-- None --</option>
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@server/templates/pages/torrent_edit.html` around lines 41 - 48, The empty
option "-- None --" should be marked selected when the torrent has no category;
update the first <option value=""> to include a conditional selected attribute,
e.g. use the same expression style as the others and check torrent.meta.cat for
None (for example: {% if torrent.meta.cat.as_ref().map(OldCategory::as_id) ==
None %}selected{% endif %} or {% if torrent.meta.cat.is_none() %}selected{%
endif %}), so the <option value=""> is selected when torrent.meta.cat is None.
mlm_mam/src/search.rs (1)

372-393: ⚠️ Potential issue | 🔴 Critical

Build failure: Missing tags field in TorrentMeta initializer.

The pipeline fails because TorrentMeta requires a tags field that is not provided. Based on the test fixtures in server/tests/common/mod.rs, TorrentMeta includes tags: vec![].

🔧 Proposed fix
         Ok(clean_meta(
             TorrentMeta {
                 ids,
                 vip_status: Some(vip_status),
                 media_type,
                 main_cat,
                 categories,
+                tags: vec![],
                 cat: Some(cat),
                 language: Some(language),
                 flags: Some(FlagBits::new(self.browseflags)),
                 filetypes,
                 num_files: self.numfiles,
                 size,
                 title: self.title.clone(),
                 edition: None,
                 description: self.description.clone().unwrap_or_default(),
                 authors,
                 narrators,
                 series,
                 source: MetadataSource::Mam,
                 uploaded_at: Some(uploaded_at),
             },
             &self.tags,
         )?)
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@mlm_mam/src/search.rs` around lines 372 - 393, The TorrentMeta struct
initialization in the clean_meta(...) call is missing the required tags field;
update the TorrentMeta construction inside the function (the TorrentMeta passed
to clean_meta) to include a tags field (e.g., tags: vec![] or the appropriate
tags value taken from the context) so the struct matches the definition used in
tests and compilation; ensure the field name is exactly tags and its type
matches other usages of TorrentMeta.
server/src/config_impl.rs (1)

257-354: ⚠️ Potential issue | 🟠 Major

Handle missing stored subcategories for musicology and radio too.

When meta.cat is None, this branch only rejects or errors for audiobook/ebook filters. A library item with meta.media_type == Musicology or Radio can still return Ok(true) even when categories.musicology / categories.radio is constrained, so those filters silently no-op on stored metadata without a subcategory.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@server/src/config_impl.rs` around lines 257 - 354, In matches_meta, when
meta.cat is None add the same rejection and ensure logic for musicology and
radio that already exists for audiobook/ebook: if
categories.musicology.as_ref().is_some_and(|c| c.is_empty()) && meta.media_type
== MediaType::Musicology return Ok(false), and similarly for categories.radio
with MediaType::Radio; also add ensure! checks analogous to the audio/ebook
ensures (e.g. ensure!(self.categories.musicology.as_ref().is_none_or(|c|
c.is_empty()), "has advanced musicology selection and no stored category") and
the same for radio) so stored category constraints for categories.musicology and
categories.radio are honored.
♻️ Duplicate comments (23)
mlm_parse/tests/parse_tests.rs (1)

18-29: 🧹 Nitpick | 🔵 Trivial

Strengthen assertions for complete validation.

Similar to the title normalization test, using starts_with and contains is too loose for reliable regression detection. Consider asserting the exact expected output to catch subtle changes in the name cleaning algorithm.

♻️ Proposed fix
 #[test]
 fn test_clean_name_initials_and_case() {
     let mut name = "JRR TOLKIEN".to_string();
     clean_name(&mut name).unwrap();
-    // JRR should remain as-is (algorithm doesn't split 3-letter initials); TOLKIEN should become Title case
-    assert!(name.starts_with("JRR"));
-    assert!(name.contains("Tolkien"));
+    assert_eq!(name, "JRR Tolkien");
 
     let mut name2 = "john doe".to_string();
     clean_name(&mut name2).unwrap();
-    // short lowercase words should be capitalized at start
-    assert!(name2.contains("John"));
+    assert_eq!(name2, "John Doe");
 }
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@mlm_parse/tests/parse_tests.rs` around lines 18 - 29, The test uses loose
assertions; update test_clean_name_initials_and_case to assert exact normalized
outputs instead of starts_with/contains: after clean_name(&mut name) expect name
== "JRR Tolkien" (preserving the 3-letter initials and Title-casing surname) and
after clean_name(&mut name2) expect name2 == "John Doe" (capitalizing short
lowercase words). Replace the two assert! calls for each variable with
assert_eq! comparisons referencing the same variables (name and name2) and the
exact expected strings.
server/templates/pages/torrents.html (1)

229-232: ⚠️ Potential issue | 🟡 Minor

Add rel="noopener noreferrer" to the new-tab links.

The earlier MaM hardening is still missing here, and the new ABS link has the same issue.

Suggested patch
-    {% if let Some(mam_id) = torrent.mam_id %}<a href="https://www.myanonamouse.net/t/{{ mam_id }}" target=_blank>MaM</a>{% endif %}
+    {% if let Some(mam_id) = torrent.mam_id %}<a href="https://www.myanonamouse.net/t/{{ mam_id }}" target=_blank rel="noopener noreferrer">MaM</a>{% endif %}
     {% if let (Some(abs_url), Some(abs_id)) = (abs_url.as_ref(), torrent.meta.ids.get(ids::ABS)) %}
-      <a href="{{ abs_url }}/audiobookshelf/item/{{ abs_id }}" target=_blank>ABS</a>
+      <a href="{{ abs_url }}/audiobookshelf/item/{{ abs_id }}" target=_blank rel="noopener noreferrer">ABS</a>
     {% endif %}
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@server/templates/pages/torrents.html` around lines 229 - 232, The new
external links rendered for MaM (using torrent.mam_id) and ABS (using abs_url
and abs_id) open in a new tab but lack rel="noopener noreferrer"; update the
anchor elements that generate those links (the <a
href="https://www.myanonamouse.net/t/{{ mam_id }}" target=_blank>MaM</a> and the
<a href="{{ abs_url }}/audiobookshelf/item/{{ abs_id }}" target=_blank>ABS</a>
in the torrents.html template) to include rel="noopener noreferrer" alongside
target="_blank" to prevent opener access and noreferrer leaks.
mlm_mam/src/serde.rs (2)

316-324: ⚠️ Potential issue | 🟡 Minor

Assert the parsed date value, not just Some.

is_some() will still pass if the helper starts returning the wrong date.

Suggested patch
     #[test]
     fn test_parse_opt_date() {
         let ok = serde_json::from_str::<DateWrap>(r#"{"d":"2023-01-02"}"#).unwrap();
-        assert!(ok.d.is_some());
+        assert_eq!(
+            ok.d,
+            Some(Date::from_calendar_date(2023, time::Month::January, 2).unwrap())
+        );
         let none = serde_json::from_str::<DateWrap>(r#"{"d":null}"#).unwrap();
         assert!(none.d.is_none());
         // invalid date should error
         assert!(serde_json::from_str::<DateWrap>(r#"{"d":"not-a-date"}"#).is_err());
     }
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@mlm_mam/src/serde.rs` around lines 316 - 324, Update the test_parse_opt_date
test to assert the actual parsed Date value instead of only is_some/is_none:
deserialize into DateWrap (struct with field d) and for the ok case assert that
ok.d equals Some(parsed_date) where parsed_date is the expected chrono/Date type
for "2023-01-02" (use the same type/construction used elsewhere in serde.rs),
for the none case assert none.d.is_none(), and keep the invalid-case assertion
that deserialization errors; reference the test function name
test_parse_opt_date and the DateWrap.d field when making these assertions.

203-271: ⚠️ Potential issue | 🟡 Minor

Add coverage for the narrowing-conversion error path.

These tests still miss the branch where deserialization reaches u64 successfully but converting into T fails. Add one overflowing numeric case for num_string_or_number and one non-null overflowing case for opt_num_string_or_number.

Suggested patch
     fn test_num_string_or_number() {
         assert_eq!(
             serde_json::from_str::<NumWrap>(r#"{"val":"42"}"#)
                 .unwrap()
                 .val,
             42
         );
         assert_eq!(
             serde_json::from_str::<NumWrap>(r#"{"val":42}"#)
                 .unwrap()
                 .val,
             42
         );
         assert!(serde_json::from_str::<NumWrap>(r#"{"val":"notanumber"}"#).is_err());
+
+        #[derive(Deserialize)]
+        struct SmallNumWrap {
+            #[serde(deserialize_with = "num_string_or_number")]
+            val: u32,
+        }
+
+        assert!(serde_json::from_str::<SmallNumWrap>(r#"{"val":4294967296}"#).is_err());
     }
@@
     fn test_opt_num_and_string_or_number() {
         assert_eq!(
             serde_json::from_str::<OptNumWrap>(r#"{"val":"100"}"#)
                 .unwrap()
                 .val,
             Some(100)
         );
@@
         assert_eq!(
             serde_json::from_str::<OptStrWrap>(r#"{"val":null}"#)
                 .unwrap()
                 .val,
             None
         );
+
+        #[derive(Deserialize)]
+        struct OptSmallNumWrap {
+            #[serde(deserialize_with = "opt_num_string_or_number")]
+            val: Option<u16>,
+        }
+
+        assert!(serde_json::from_str::<OptSmallNumWrap>(r#"{"val":65536}"#).is_err());
     }
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@mlm_mam/src/serde.rs` around lines 203 - 271, Add tests that trigger the
narrowing-conversion failure paths by using a smaller target integer type so
converting a valid u64 value fails: create a local test struct (e.g., NarrowWrap
with #[serde(deserialize_with = "num_string_or_number")] val: u8) and assert
that deserializing {"val":"256"} and/or {"val":256} is_err; similarly add an
optional case (e.g., OptNarrowWrap with #[serde(deserialize_with =
"opt_num_string_or_number")] val: Option<u8>) and assert that deserializing a
non-null overflowing value like {"val":256} (and/or {"val":"256"}) yields an
error instead of Ok(Some(_)).
mlm_db/src/v10.rs (1)

384-390: ⚠️ Potential issue | 🟡 Minor

Remove the wildcard downgrade arm.

_ => Self::Manual will silently misclassify any future v18::MetadataSource variant. Since Mam and Manual are the only known variants here, keep this match exhaustive so the compiler forces an explicit downgrade decision when v18 evolves.

Suggested patch
 impl From<v18::MetadataSource> for MetadataSource {
     fn from(t: v18::MetadataSource) -> Self {
         match t {
             v18::MetadataSource::Mam => Self::Mam,
             v18::MetadataSource::Manual => Self::Manual,
-            _ => Self::Manual,
         }
     }
 }

Based on learnings: In the MLM codebase, migrations only progress forward (upgrades), and newer-to-older conversion behavior in versioned files should be explicitly scoped/documented rather than silently losing information.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@mlm_db/src/v10.rs` around lines 384 - 390, The From impl for converting
v18::MetadataSource to MetadataSource currently has a wildcard arm (`_ =>
Self::Manual`) that will silently misclassify future variants; remove the
wildcard arm and make the match explicitly handle only the known
v18::MetadataSource::Mam and v18::MetadataSource::Manual variants (i.e., match
on v18::MetadataSource::Mam => Self::Mam and v18::MetadataSource::Manual =>
Self::Manual) so the match is exhaustive today and the compiler will force you
to update this conversion if v18::MetadataSource gains new variants.
server/src/web/pages/torrent_edit.rs (1)

44-44: ⚠️ Potential issue | 🟠 Major

Remove unused MaM dependency that can block metadata edits.

context.mam()? is called but the result _mam is unused, and line 110 passes None to update_torrent_meta. This call can fail when MaM is unavailable, unnecessarily blocking manual metadata edits.

🔧 Proposed fix
-    let _mam = context.mam()?;
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@server/src/web/pages/torrent_edit.rs` at line 44, The call to context.mam()?
is unused and can fail, blocking metadata edits; remove the unused assignment
(let _mam = context.mam()?) so we no longer attempt to obtain MaM at page render
time, and ensure the code that calls update_torrent_meta continues to pass None
(or the existing optional Mam value) rather than relying on context.mam() here;
update references around context.mam() and the variable _mam so no
unused-binding or fallible call remains.
server/tests/linker_torrent_test.rs (1)

315-321: ⚠️ Potential issue | 🟠 Major

Do not ignore linker errors in this test.

Using let _ = discards the result, so the test passes even if link_torrents_to_library fails. This undermines the test's ability to catch regressions.

🔧 Proposed fix
-    let _ = link_torrents_to_library(
+    link_torrents_to_library(
         config.clone(),
         db.db.clone(),
         (qbit_config, &mock_qbit),
         &mock_mam,
     )
-    .await;
+    .await?;
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@server/tests/linker_torrent_test.rs` around lines 315 - 321, The test is
discarding errors by using "let _ =" on the call to link_torrents_to_library, so
failures are hidden; update the test to fail on error by capturing the result
and asserting success (e.g., let res = link_torrents_to_library(...).await;
assert!(res.is_ok()); or use .await.unwrap()), referencing the existing call to
link_torrents_to_library and the config/db/mock_qbit/mock_mam arguments so the
test will surface any returned Err instead of silently ignoring it.
mlm_mam/src/user_torrent.rs (1)

168-172: ⚠️ Potential issue | 🟠 Major

Lossy cast still present - use checked conversion.

The c.id as u8 cast truncates values > 255, silently misclassifying categories. The error at line 170 also loses the original ID when truncation occurs.

🔧 Proposed fix using checked conversion
         let mut categories = self
             .categories
             .iter()
-            .map(|c| {
-                Category::from_id(c.id as u8)
-                    .ok_or_else(|| MetaError::UnknownCat(c.id as u8))
-                    .map(|cat| cat.to_string())
-            })
+            .map(|c| {
+                let id = u8::try_from(c.id)
+                    .map_err(|_| MetaError::UnknownCat(c.id as u8))?;
+                Category::from_id(id)
+                    .ok_or_else(|| MetaError::UnknownCat(id))
+                    .map(|cat| cat.to_string())
+            })
             .collect::<Result<Vec<_>, _>>()?;

Note: Consider extending MetaError::UnknownCat to accept u64 to preserve the original ID in error messages.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@mlm_mam/src/user_torrent.rs` around lines 168 - 172, The code is casting c.id
to u8 with `c.id as u8` which can truncate values; change to a checked
conversion (e.g., use `u8::try_from(c.id)` or `TryInto` on `c.id`) and handle
the Result before calling `Category::from_id`, returning `MetaError::UnknownCat`
with the original `c.id` on conversion failure; update the call site that
currently uses `Category::from_id(c.id as u8)` to instead convert safely and
pass the safe u8 into `Category::from_id`, and consider updating
`MetaError::UnknownCat` to accept the original u64 `c.id` so the error preserves
the original ID.
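The difference between the lossy cast and the checked conversion is easy to demonstrate in isolation. `category_from_id` here is a hypothetical shape, not the crate's `Category::from_id`:

```rust
// Checked narrowing: u8::try_from rejects what `as u8` silently wraps.
fn category_from_id(id: u64) -> Result<u8, String> {
    u8::try_from(id).map_err(|_| format!("unknown category id {id}"))
}

fn main() {
    assert_eq!(category_from_id(42), Ok(42));
    // 300 as u8 would silently wrap to 44 and misclassify the category;
    // try_from surfaces an error carrying the original id instead.
    assert_eq!(300u64 as u8, 44);
    assert!(category_from_id(300).is_err());
}
```

Carrying the original `u64` into the error message is what preserves the diagnostic value when truncation would have occurred.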
mlm_db/src/v17.rs (1)

548-548: 🧹 Nitpick | 🔵 Trivial

Mark this downgrade-only arm as unreachable.

Given the repo's forward-only migration policy, a bare unimplemented!() here is more ambiguous than helpful. Prefer unreachable!() plus a short reason so accidental execution fails with context instead of a generic panic.

💡 Proposed fix
-            v18::TorrentMetaField::Tags => unimplemented!(),
+            // MLM migrations only go forward; this downgrade shim is not executed.
+            v18::TorrentMetaField::Tags => unreachable!(
+                "v18 -> v17 conversion is downgrade-only and `Tags` has no v17 equivalent"
+            ),
Based on learnings: in the MLM codebase, migrations only progress forward, and newer-to-older conversion branches are only safe to leave unimplemented when they are clearly non-executable and documented.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@mlm_db/src/v17.rs` at line 548, The match arm handling
v18::TorrentMetaField::Tags should be marked as unreachable rather than left as
unimplemented; replace the unimplemented!() in the v18::TorrentMetaField::Tags
arm with unreachable!("downgrade-only: forward-only migration policy —
v18::TorrentMetaField::Tags should never be reached") (or similar short message)
so any accidental execution fails with a clear reason; locate the arm in the
conversion/migration function that matches on v18::TorrentMetaField and update
that specific arm for clarity and debugging.
server/src/cleaner.rs (1)

47-52: ⚠️ Potential issue | 🔴 Critical

Guard the empty batch before remove(0).

run_library_cleaner() can still call process_batch() with an empty tail batch. With the current len() == 1 guard, Line 51 panics on batch.remove(0).

💡 Proposed fix
-    if batch.len() == 1 {
+    if batch.len() <= 1 {
         return Ok(());
     };
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@server/src/cleaner.rs` around lines 47 - 52, The code calls batch.remove(0)
without guarding against an empty batch; update the guard in the function that
contains this snippet (e.g., process_batch / the caller run_library_cleaner) so
that empty batches are handled safely — either return early when
batch.is_empty() or change the existing check to if batch.len() <= 1 { return
Ok(()); } before calling rank_torrents and batch.remove(0), ensuring
rank_torrents and the subsequent batch.remove(0) never operate on an empty
vector.
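A minimal illustration of why the `len() == 1` guard is insufficient, using a plain `Vec<u32>` in place of the real torrent batch: `remove(0)` on an empty `Vec` panics, so the early return must cover the empty case as well.

```rust
// Sketch of the guard shape; the real process_batch ranks torrents first.
fn process_batch(mut batch: Vec<u32>) -> Option<u32> {
    if batch.len() <= 1 {
        // Nothing to compare; covers both empty and single-item batches.
        return None;
    }
    // Safe: the guard guarantees at least two elements remain.
    Some(batch.remove(0))
}
```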
server/src/lists/mod.rs (1)

171-177: ⚠️ Potential issue | 🟠 Major

Don't treat None == None as the same torrent.

Line 171 still short-circuits when both mam_id values are None, so an unrelated library hit can keep a stale torrent_id/mam_id pair and skip the replacement path. Compare torrent_id first, or only trust mam_id when both sides are Some.

💡 Proposed fix
-        if field.status == TorrentStatus::Existing && field.mam_id == torrent.mam_id {
+        let same_torrent = field
+            .torrent_id
+            .as_ref()
+            .is_some_and(|id| id == &torrent.id)
+            || (field.torrent_id.is_none()
+                && field.mam_id.is_some()
+                && field.mam_id == torrent.mam_id);
+        if field.status == TorrentStatus::Existing && same_torrent {
             return false;
         }
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@server/src/lists/mod.rs` around lines 171 - 177, The current equality check
treats None == None for mam_id as a match and skips replacement; update the
matching logic in the block that checks field.status == TorrentStatus::Existing
to avoid treating two Nones as equal by either: (a) prefer comparing torrent_id
first (compare field.torrent_id to torrent.id and return false on equality), or
(b) only compare mam_id when both sides are Some (e.g., if
field.mam_id.is_some() && torrent.mam_id.is_some() && field.mam_id ==
torrent.mam_id then return false); adjust the condition around field, torrent,
TorrentStatus, mam_id and torrent_id accordingly so unrelated entries with None
mam_id don’t short-circuit the replace of ListItemTorrent.
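The underlying pitfall is generic to `Option` equality and easy to demonstrate standalone; the helper below (a sketch, not the project's code) only treats two ids as matching when both sides are `Some`:

```rust
// For Option<T>, None == None is true, so comparing two optional ids with
// plain == makes unrelated records with no MaM id "match" each other.
fn same_mam_id(a: Option<u64>, b: Option<u64>) -> bool {
    matches!((a, b), (Some(x), Some(y)) if x == y)
}
```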
server/src/snatchlist.rs (1)

303-313: ⚠️ Potential issue | 🟠 Major

Preserve mam_id in events when MaM context is unavailable.

At Line 303, mam_id is only sourced from mam_torrent. If that is None, the event loses its MaM association even when torrent.mam_id is present, which can break mam_id-based event views.

💡 Proposed fix
-        let mam_id = mam_torrent.map(|m| m.id);
+        let mam_id = mam_torrent.map(|m| m.id).or(torrent.mam_id);
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@server/src/snatchlist.rs` around lines 303 - 313, mam_id is currently derived
only from mam_torrent (mam_torrent.map(|m| m.id)) so when mam_torrent is None
but the Torrent still has a mam_id the event loses its MaM association; change
the logic that computes mam_id to fall back to torrent.mam_id when mam_torrent
is None (i.e., compute mam_id by checking mam_torrent first, then using
torrent.mam_id), then pass that mam_id into the call to write_event / Event::new
so events preserve MaM context.
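The fallback shape from the proposed fix, with plain structs standing in for the project's MaM payload and torrent record: prefer the fresh payload's id, but keep the stored id when no payload is available.

```rust
// Stand-ins for the real MaM/torrent types.
struct MamTorrent { id: u64 }
struct Torrent { mam_id: Option<u64> }

fn event_mam_id(mam_torrent: Option<&MamTorrent>, torrent: &Torrent) -> Option<u64> {
    // .or() only kicks in when the left-hand Option is None, so a present
    // payload always wins and the stored id is just a fallback.
    mam_torrent.map(|m| m.id).or(torrent.mam_id)
}
```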
mlm_parse/src/lib.rs (2)

72-74: ⚠️ Potential issue | 🟡 Minor

Restore fourth and eighth in EDITION_START_REGEX.

Line 73 skips fourth and spells eighth as eight, so multi-word prefixes like Fourth Anniversary Edition and Eighth Revised Edition won't match this branch.

Suggested fix
 static EDITION_START_REGEX: Lazy<Regex> = Lazy::new(|| {
-    Regex::new(r"(?i)((\d+(?:st|nd|rd|th)|first|second|third|fifth|sixth|seventh|eight|ninth|tenth|new|revised|updated)\s+(?:[a-z']+\s+)*(?:Edition|ed\.|utgåva)|(\w+?)\s+(?:Edition|ed\.|utgåva))").unwrap()
+    Regex::new(r"(?i)((\d+(?:st|nd|rd|th)|first|second|third|fourth|fifth|sixth|seventh|eighth|ninth|tenth|new|revised|updated)\s+(?:[a-z']+\s+)*(?:Edition|ed\.|utgåva)|(\w+?)\s+(?:Edition|ed\.|utgåva))").unwrap()
 });
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@mlm_parse/src/lib.rs` around lines 72 - 74, The EDITION_START_REGEX currently
omits "fourth" and misspells "eighth" as "eight", so phrases like "Fourth
Anniversary Edition" or "Eighth Revised Edition" won't match; update the
alternation inside the Lazy<Regex> (the group containing
first|second|third|...|tenth|new|revised|updated) to include "fourth" and
correctly spell "eighth" (e.g.,
first|second|third|fourth|fifth|sixth|seventh|eighth|ninth|tenth|new|revised|updated)
so the regex branch matches those prefixes.

141-147: ⚠️ Potential issue | 🟠 Major

Optional series-number patterns are currently dropped.

At Line 141, series_number uses the ? operator, which forces the entire function to return None whenever the series number is absent, even though the Option<f32> return type and the regex patterns suggest optional numbers should be supported.

💡 Proposed fix
-        let series_number = captures
-            .get(2)
-            .or(captures.get(4))
-            .or(captures.get(6))
-            .or(captures.get(7))
-            .or(captures.get(10))?;
-        let series_number = match series_number.as_str().to_lowercase().as_str() {
+        let series_number = captures
+            .get(2)
+            .or(captures.get(4))
+            .or(captures.get(6))
+            .or(captures.get(7))
+            .or(captures.get(10))
+            .and_then(|m| {
+                let n = match m.as_str().to_lowercase().as_str() {
+                    "one" | "i" => Some(1.0),
+                    // ... rest of number mappings
+                    other => other.parse::<f32>().ok(),
+                }?;
+                (n >= 0.0).then_some(n)
+            });
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@mlm_parse/src/lib.rs` around lines 141 - 147, The code uses the try-operator
(?) on the capture chain for series_number which causes the whole function to
return None immediately when that optional capture is absent; instead remove the
trailing ? and handle the Option explicitly (e.g., let series_number_opt =
captures.get(2).or(captures.get(4)).or(captures.get(6)).or(captures.get(7)).or(captures.get(10)).map(|m|
m.as_str().to_lowercase());) then match on series_number_opt (or use
.as_deref()) to parse the string into an f32 only if present and otherwise
return None (or a default) — update the match that references series_number and
the subsequent parsing logic to operate on this Option so optional patterns are
preserved.
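The fix shape can be sketched without the real regex captures: treat the number as an optional input so a missing capture degrades to `(title, None)` instead of discarding the whole parse. The word-number table below is a hypothetical fragment, not the project's full mapping.

```rust
// number_capture stands in for captures.get(2).or(...).map(|m| m.as_str()).
fn parse_series(title: &str, number_capture: Option<&str>) -> (String, Option<f32>) {
    let number = number_capture.and_then(|s| {
        match s.to_lowercase().as_str() {
            "one" | "i" => Some(1.0),
            "two" | "ii" => Some(2.0),
            other => other.parse::<f32>().ok(),
        }
        // Reject negative numbers, mirroring the (n >= 0.0) check in the fix.
        .filter(|n| *n >= 0.0)
    });
    (title.to_string(), number)
}
```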
server/src/main.rs (1)

616-618: ⚠️ Potential issue | 🟡 Minor

Use the folder-linker context label here.

This still records folder-link failures as link_torrents_to_library, which misclassifies the error path in logs and stats.

Suggested fix
                 let result = link_folders_to_library(config.clone(), db.clone())
                     .await
-                    .context("link_torrents_to_library");
+                    .context("link_folders_to_library");
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@server/src/main.rs` around lines 616 - 618, The error context string is
wrong: when calling link_folders_to_library(...) you need to change the
.context(...) call to use the folder-linker label (e.g.
"link_folders_to_library" or "folder_linker") instead of
"link_torrents_to_library" so failures are recorded under the correct path;
update the .context(...) on the Result returned by link_folders_to_library to
the appropriate folder-linker label.
server/src/linker/folder.rs (4)

221-222: ⚠️ Potential issue | 🟡 Minor

Count ebook files in num_files too.

size and filetypes already include ebook files, but num_files only counts audio files, so stored metadata underreports mixed-format folders.

Suggested fix
-        num_files: audio_files.len() as u64,
+        num_files: (audio_files.len() + ebook_files.len()) as u64,

Also applies to: 289-290

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@server/src/linker/folder.rs` around lines 221 - 222, The num_files field is
only counting audio_files (audio_files.len()) while size and filetypes include
ebooks, so update the logic that builds the struct containing num_files to count
both audio and ebook files; locate the places constructing the struct (the
occurrences around num_files: audio_files.len() and the duplicate at the other
block) and replace the audio-only count with a combined count (e.g.,
audio_files.len() + ebook_files.len()) or a computed total_files variable so
num_files correctly reflects all filetypes alongside Size::from_bytes(size) and
filetypes.

414-435: ⚠️ Potential issue | 🟠 Major

Link ebook files into the library path as well.

This loop only processes audio_files. selected_ebook_format is computed and stored, but the ebook files themselves are never copied/linked or tracked in library_files.

Suggested fix
-        for file in audio_files {
+        for file in audio_files.into_iter().chain(ebook_files.into_iter()) {
             let file_path = PathBuf::from(&file.file_name());
             let library_path = dir.join(&file_path);
             library_files.push(file_path.clone());
             let download_path = file.path();
             match library.options().method {
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@server/src/linker/folder.rs` around lines 414 - 435, The loop that processes
audio_files (for file in audio_files) never handles ebook files:
selected_ebook_format is computed elsewhere but ebook files are neither
linked/copied nor added to library_files. Update the linking logic to also
iterate over the ebook collection (e.g., ebook_files or however ebooks are
represented) and perform the same branching based on library.options().method
(calling hard_link, copy, or symlink as appropriate) and push the ebook PathBuf
into library_files; ensure you reference the same helpers used for audio
(hard_link, copy, symlink, library_path, download_path, file_path) so ebooks are
tracked and linked the same way as audio files.

63-80: ⚠️ Potential issue | 🟠 Major

Normalize extensions before classifying folder contents.

These checks compare raw path.extension() values against config.audio_types / config.ebook_types verbatim. server/src/linker/common.rs::select_format accepts dotted and mixed-case extensions, so the same config can work for torrent linking but silently miss files here.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@server/src/linker/folder.rs` around lines 63 - 80, The extension comparisons
in folder classification use raw path.extension() values and can miss matches
when config entries contain dots or different casing; compute a normalized
extension string (e.g.
ext.to_string_lossy().trim_start_matches('.').to_lowercase()) for the entry (use
a local variable in folder.rs before the match) and compare that against
normalized config lists (normalize config.audio_types and config.ebook_types
entries similarly or compare by lowercasing and trimming dots on the fly).
Update the checks that push into metadata_files, audio_files, and ebook_files
(and any other extension checks) to use this normalized_ext, and ensure
consistency with select_format's normalization behavior.
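One way to normalize both sides before comparing, assuming (as the finding describes) that config entries may arrive as "mp3", ".mp3", or "MP3" interchangeably:

```rust
use std::ffi::OsStr;

// Strip a leading dot and lowercase, so "MP3", ".mp3", and "mp3" all agree.
fn normalize_ext(ext: &str) -> String {
    ext.trim_start_matches('.').to_lowercase()
}

// Sketch of the classification check; audio_types stands in for
// config.audio_types, normalized on the fly on both sides.
fn is_audio(ext: &OsStr, audio_types: &[String]) -> bool {
    let ext = normalize_ext(&ext.to_string_lossy());
    audio_types.iter().any(|t| normalize_ext(t) == ext)
}
```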

34-37: ⚠️ Potential issue | 🟠 Major

Skip non-directory entries before descending into rip contents.

A regular file under rip_dir will make read_dir(&folder.path()) fail inside link_folder and abort the whole folder-link run.

Suggested fix
             let mut entries = read_dir(&l.rip_dir).await?;
             while let Some(folder) = entries.next_entry().await? {
+                if !folder.file_type().await?.is_dir() {
+                    trace!("Skipping non-directory rip entry: {}", folder.path().display());
+                    continue;
+                }
                 link_folder(&config, library, &db, folder).await?;
             }
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@server/src/linker/folder.rs` around lines 34 - 37, When iterating entries
from read_dir(&l.rip_dir) before calling link_folder, skip any non-directory
entries to avoid downstream read_dir failures; for each DirEntry (variable
folder) call its file type check (e.g., file_type().await? or metadata check)
and only await link_folder(&config, library, &db, folder).await? when it is a
directory, otherwise continue the loop (optionally logging or counting skipped
files). Ensure the check references the same symbols used here: entries, folder,
rip_dir, and link_folder.
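A synchronous sketch of the guard (the real code uses tokio's async `read_dir`, but the shape is the same): collect only directory entries so a stray file in the rip dir can't abort the run.

```rust
use std::fs;
use std::path::{Path, PathBuf};

fn rip_folders(rip_dir: &Path) -> std::io::Result<Vec<PathBuf>> {
    let mut dirs = Vec::new();
    for entry in fs::read_dir(rip_dir)? {
        let entry = entry?;
        // Non-directory entries are skipped instead of being handed to a
        // later read_dir call that would fail on them.
        if entry.file_type()?.is_dir() {
            dirs.push(entry.path());
        }
    }
    Ok(dirs)
}
```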
server/src/config_impl.rs (1)

377-393: ⚠️ Potential issue | 🟠 Major

Honor explicit media-type restrictions in these helpers.

allow_audio / allow_ebook still only inspect edition.categories, so they can return true even when edition.media_type explicitly excludes that format.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@server/src/config_impl.rs` around lines 377 - 393, allow_audio and
allow_ebook currently only check edition.categories and can incorrectly return
true when an edition explicitly excludes that media type; update both functions
(allow_audio and allow_ebook) to also inspect g.edition.media_type and treat
explicit media-type exclusions as disallowing that format (i.e., if media_type
is present and does not permit audio/ebook, skip that grab), while still falling
back to the existing categories logic when media_type is None/unspecified.
Ensure you reference g.edition.media_type in the match/condition and combine it
with the current categories check so the helper returns true only when the
edition actually allows that media type.
server/src/autograbber.rs (3)

784-794: ⚠️ Potential issue | 🟡 Minor

Keep mam_id on Updated events when the MaM payload is absent.

This path still drops mam_id whenever mam_torrent is None, even if the metadata already contains the MAM id. That weakens mam_id-indexed event lookup for non-MaM updates.

Suggested fix
-        let mam_id = mam_torrent.map(|m| m.id);
+        let mam_id = mam_torrent.map(|m| m.id).or_else(|| meta.mam_id());
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@server/src/autograbber.rs` around lines 784 - 794, The Updated event is
losing mam_id whenever mam_torrent is None even though the metadata may already
contain the MAM id; change the mam_id calculation before write_event so it falls
back to the metadata's MAM id when mam_torrent is None (e.g., compute mam_id =
mam_torrent.map(|m| m.id).or(meta.<mam_id_field>.clone()) and then pass that
mam_id into Event::new/ EventType::Updated), ensuring you reference mam_torrent,
mam_id, meta (the metadata struct), write_event, Event::new and
EventType::Updated to locate and update the code.

734-737: ⚠️ Potential issue | 🟡 Minor

Skip blank owner names when backfilling linker.

Persisting Some("") here blocks a real linker name from being filled later because the downstream check only tests is_none().

Suggested fix
     if linker_is_owner && torrent.linker.is_none()
         && let Some(mam_torrent) = mam_torrent
+        && !mam_torrent.owner_name.trim().is_empty()
     {
         torrent.linker = Some(mam_torrent.owner_name.clone());
     }
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@server/src/autograbber.rs` around lines 734 - 737, The current backfill
assigns Some(mam_torrent.owner_name.clone()) even when owner_name is an empty
string, which prevents later fills because the code only checks
torrent.linker.is_none(); update the condition that sets torrent.linker (the
block using linker_is_owner, torrent.linker, mam_torrent and owner_name) to only
assign when mam_torrent.owner_name.trim().is_empty() is false (i.e., ensure the
owner_name is non-empty after trimming) so you skip persisting empty owner names
and allow real linker names to be filled later.

685-689: ⚠️ Potential issue | 🟠 Major

Don't let stored IDs overwrite fresh metadata IDs.

extend applies torrent.meta.ids on top of the newly built meta.ids, so any corrected ID from the current source gets reverted by the stale stored value.

Suggested fix
-    meta.ids.extend(torrent.meta.ids.clone());
+    for (key, value) in torrent.meta.ids.clone() {
+        meta.ids.entry(key).or_insert(value);
+    }
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@server/src/autograbber.rs` around lines 685 - 689, The current merge uses
meta.ids.extend(torrent.meta.ids.clone()) which lets stale stored IDs from
torrent.meta.ids override or shadow freshly built IDs; instead, merge by adding
only those stored IDs whose identifier keys are not already present in meta.ids
(e.g., filter torrent.meta.ids by absence in meta.ids before extending) so
existing meta.ids values are preserved; keep the meta.tags and meta.description
logic as-is.
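The difference between the two merge strategies is easy to show with a plain `HashMap` (standing in for the project's ids map): `extend` lets the stored map overwrite existing keys, while `entry(..).or_insert(..)` keeps the fresh value and only backfills keys that are missing.

```rust
use std::collections::HashMap;

// Sketch of the proposed merge: stored ids never clobber fresh ones.
fn merge_keep_fresh(fresh: &mut HashMap<String, String>, stored: HashMap<String, String>) {
    for (key, value) in stored {
        fresh.entry(key).or_insert(value);
    }
}
```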
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@server/src/linker/common.rs`:
- Around line 109-117: The closure in the iterator chain (inside common.rs)
unnecessarily clones the owned String `ext` after calling `to_lowercase()`;
update the map closure so it returns `ext` directly instead of calling
`ext.clone()` (i.e., keep the `let ext = ext.to_lowercase(); if
ext.starts_with('.') { ext } else { format!(".{ext}") }` logic) so you avoid the
redundant allocation while preserving the existing behavior used by the
subsequent `find(|ext| files.iter().any(|f| f.name_lower().ends_with(ext)))`.

In `@server/src/main.rs`:
- Around line 175-194: The current startup loop is draining each *.edition with
mem::take (in the blocks iterating config.autograbs, config.snatchlist,
config.goodreads_lists, config.notion_lists, and config.tags), which clears the
live config and breaks readers like GoodreadsList::allow_audio / allow_ebook
that still access grab.edition; change the logic to copy the edition into
filter.edition instead of mem::take (or unify to a single source of truth) so
that filter.edition is populated but the original autograb.edition /
snatchlist.edition / grab.edition / tag.edition fields remain intact for later
use.

In `@server/src/torrent_downloader.rs`:
- Line 139: The code stores Torrent.mam_id from torrent.meta.ids (via
torrent.meta.mam_id()), which can be missing; instead use the guaranteed
SelectedTorrent.mam_id when constructing the persisted Torrent record. Replace
uses of torrent.meta.mam_id()/looking up ids::MAM with the SelectedTorrent
instance's mam_id (e.g., selected.mam_id or selected_torrent.mam_id) when
building the Torrent passed to the upsert paths (the same change applied in both
places where mam_id is assigned, so TorrentKey::mam_id lookups keep matching).

In `@server/src/web/pages/torrent.rs`:
- Around line 575-580: The code is using meta.mam_id() for the local Torrent
lookup which can fail if as_meta() yields a malformed MAM; instead use the
authoritative mam_torrent.id for the lookup. Change the
r.get().secondary::<Torrent>(TorrentKey::mam_id, meta.mam_id()) call to use
mam_torrent.id (or converted/wrapped to the expected key type) while still
calling mam_torrent.as_meta() to populate meta; i.e., keep as_meta() for meta
extraction but pass mam_torrent.id into TorrentKey::mam_id for the local lookup
so the known result always maps to its Torrent row.

In `@server/templates/pages/index.html`:
- Around line 62-77: The two new linker panels ("Torrent Linker" and "Folder
Linker") contain unclosed <p> tags and their buttons lack an explicit type,
which can break layout and form behavior; update each form in the "Torrent
Linker" and "Folder Linker" sections so the "Last run:" paragraph is properly
closed before the button, ensure the conditional "Result:" paragraph is also
closed after the {% match ... %}{% endmatch %} block, and add type="submit" to
the buttons with name=action value=run_torrent_linker and name=action
value=run_folder_linker so the form submissions behave correctly (refer to the
torrent_linker_result / folder_linker_result match blocks and the buttons in
those forms).

In `@server/templates/pages/replaced.html`:
- Around line 100-102: Add rel="noopener noreferrer" to the external anchor tags
that open in a new tab: update the MaM anchor generated when torrent.mam_id is
present (the tag created in the block referencing torrent.mam_id / mam_id) and
the ABS anchor created when (abs_url, abs_id) exist (the tag using abs_url and
abs_id / torrent.meta.ids.get(ids::ABS)); also apply the same rel attribute to
the other identical anchors referenced later in the template (the second
occurrence around the abs/mam blocks). Ensure both anchors keep target="_blank"
but include rel="noopener noreferrer" to prevent reverse tabnabbing.

In `@server/templates/pages/torrent.html`:
- Around line 82-84: The <button> inside the form (the element with
name="action" and value="relink") is missing an explicit type; update that
button element to include type="submit" so it conforms to HTMLHint and ensures
it submits the form as intended (i.e., change the <button name="action"
value="relink">relink</button> to include type="submit").
- Line 279: The template line rendering the change list item using the variables
field.field, field.from and field.to currently uses a literal "->" which
includes a raw ">" character; update that HTML template fragment (the <li> that
contains {{ field.field }}: {{ field.from }} -> {{ field.to }}) to replace the
greater-than character with the escaped entity &gt; (i.e. change "->" to
"-&gt;" or similar) so the output is valid HTML and passes HTMLHint.

---

Outside diff comments:
In `@mlm_mam/src/search.rs`:
- Around line 372-393: The TorrentMeta struct initialization in the
clean_meta(...) call is missing the required tags field; update the TorrentMeta
construction inside the function (the TorrentMeta passed to clean_meta) to
include a tags field (e.g., tags: vec![] or the appropriate tags value taken
from the context) so the struct matches the definition used in tests and
compilation; ensure the field name is exactly tags and its type matches other
usages of TorrentMeta.

In `@server/src/config_impl.rs`:
- Around line 257-354: In matches_meta, when meta.cat is None add the same
rejection and ensure logic for musicology and radio that already exists for
audiobook/ebook: if categories.musicology.as_ref().is_some_and(|c| c.is_empty())
&& meta.media_type == MediaType::Musicology return Ok(false), and similarly for
categories.radio with MediaType::Radio; also add ensure! checks analogous to the
audio/ebook ensures (e.g.
ensure!(self.categories.musicology.as_ref().is_none_or(|c| c.is_empty()), "has
advanced musicology selection and no stored category") and the same for radio)
so stored category constraints for categories.musicology and categories.radio
are honored.

In `@server/templates/pages/torrent_edit.html`:
- Around line 32-36: The "-- None --" option lacks an explicit selected
attribute when torrent.meta.language is None; update the first option element so
it includes a conditional selected attribute driven by torrent.meta.language
(e.g., add a template conditional that renders selected when
torrent.meta.language == None) to mirror the existing selected logic used for
language options (see Language::all() loop and the {% if torrent.meta.language
== Some(*language) %} condition).
- Around line 41-48: The empty option "-- None --" should be marked selected
when the torrent has no category; update the first <option value=""> to include
a conditional selected attribute, e.g. use the same expression style as the
others and check torrent.meta.cat for None (for example: {% if
torrent.meta.cat.as_ref().map(OldCategory::as_id) == None %}selected{% endif %}
or {% if torrent.meta.cat.is_none() %}selected{% endif %}), so the <option
value=""> is selected when torrent.meta.cat is None.

---

Duplicate comments:
In `@mlm_db/src/v10.rs`:
- Around line 384-390: The From impl for converting v18::MetadataSource to
MetadataSource currently has a wildcard arm (`_ => Self::Manual`) that will
silently misclassify future variants; remove the wildcard arm and make the match
explicitly handle only the known v18::MetadataSource::Mam and
v18::MetadataSource::Manual variants (i.e., match on v18::MetadataSource::Mam =>
Self::Mam and v18::MetadataSource::Manual => Self::Manual) so the match is
exhaustive today and the compiler will force you to update this conversion if
v18::MetadataSource gains new variants.

In `@mlm_db/src/v17.rs`:
- Line 548: The match arm handling v18::TorrentMetaField::Tags should be marked
as unreachable rather than left as unimplemented; replace the unimplemented!()
in the v18::TorrentMetaField::Tags arm with unreachable!("downgrade-only:
forward-only migration policy — v18::TorrentMetaField::Tags should never be
reached") (or similar short message) so any accidental execution fails with a
clear reason; locate the arm in the conversion/migration function that matches
on v18::TorrentMetaField and update that specific arm for clarity and debugging.

In `@mlm_mam/src/serde.rs`:
- Around line 316-324: Update the test_parse_opt_date test to assert the actual
parsed Date value instead of only is_some/is_none: deserialize into DateWrap
(struct with field d) and for the ok case assert that ok.d equals
Some(parsed_date) where parsed_date is the expected chrono/Date type for
"2023-01-02" (use the same type/construction used elsewhere in serde.rs), for
the none case assert none.d.is_none(), and keep the invalid-case assertion that
deserialization errors; reference the test function name test_parse_opt_date and
the DateWrap.d field when making these assertions.
- Around line 203-271: Add tests that trigger the narrowing-conversion failure
paths by using a smaller target integer type so converting a valid u64 value
fails: create a local test struct (e.g., NarrowWrap with
#[serde(deserialize_with = "num_string_or_number")] val: u8) and assert that
deserializing {"val":"256"} and/or {"val":256} is_err; similarly add an optional
case (e.g., OptNarrowWrap with #[serde(deserialize_with =
"opt_num_string_or_number")] val: Option<u8>) and assert that deserializing a
non-null overflowing value like {"val":256} (and/or {"val":"256"}) yields an
error instead of Ok(Some(_)).

In `@mlm_mam/src/user_torrent.rs`:
- Around line 168-172: The code is casting c.id to u8 with `c.id as u8` which
can truncate values; change to a checked conversion (e.g., use
`u8::try_from(c.id)` or `TryInto` on `c.id`) and handle the Result before
calling `Category::from_id`, returning `MetaError::UnknownCat` with the original
`c.id` on conversion failure; update the call site that currently uses
`Category::from_id(c.id as u8)` to instead convert safely and pass the safe u8
into `Category::from_id`, and consider updating `MetaError::UnknownCat` to
accept the original u64 `c.id` so the error preserves the original ID.

In `@mlm_parse/src/lib.rs`:
- Around line 72-74: The EDITION_START_REGEX currently omits "fourth" and
misspells "eighth" as "eight", so phrases like "Fourth Anniversary Edition" or
"Eighth Revised Edition" won't match; update the alternation inside the
Lazy<Regex> (the group containing
first|second|third|...|tenth|new|revised|updated) to include "fourth" and
correctly spell "eighth" (e.g.,
first|second|third|fourth|fifth|sixth|seventh|eighth|ninth|tenth|new|revised|updated)
so the regex branch matches those prefixes.
- Around line 141-147: The code uses the try-operator (?) on the capture chain
for series_number which causes the whole function to return None immediately
when that optional capture is absent; instead remove the trailing ? and handle
the Option explicitly (e.g., let series_number_opt =
captures.get(2).or(captures.get(4)).or(captures.get(6)).or(captures.get(7)).or(captures.get(10)).map(|m|
m.as_str().to_lowercase());) then match on series_number_opt (or use
.as_deref()) to parse the string into an f32 only if present and otherwise
return None (or a default) — update the match that references series_number and
the subsequent parsing logic to operate on this Option so optional patterns are
preserved.

In `@mlm_parse/tests/parse_tests.rs`:
- Around line 18-29: The test uses loose assertions; update
test_clean_name_initials_and_case to assert exact normalized outputs instead of
starts_with/contains: after clean_name(&mut name) expect name == "JRR Tolkien"
(preserving the 3-letter initials and Title-casing surname) and after
clean_name(&mut name2) expect name2 == "John Doe" (capitalizing short lowercase
words). Replace the two assert! calls for each variable with assert_eq!
comparisons referencing the same variables (name and name2) and the exact
expected strings.

In `@server/src/autograbber.rs`:
- Around line 784-794: The Updated event is losing mam_id whenever mam_torrent
is None even though the metadata may already contain the MAM id; change the
mam_id calculation before write_event so it falls back to the metadata's MAM id
when mam_torrent is None (e.g., compute mam_id = mam_torrent.map(|m|
m.id).or(meta.<mam_id_field>.clone()) and then pass that mam_id into Event::new/
EventType::Updated), ensuring you reference mam_torrent, mam_id, meta (the
metadata struct), write_event, Event::new and EventType::Updated to locate and
update the code.
- Around line 734-737: The current backfill assigns
Some(mam_torrent.owner_name.clone()) even when owner_name is an empty string,
which prevents later fills because the code only checks
torrent.linker.is_none(); update the condition that sets torrent.linker (the
block using linker_is_owner, torrent.linker, mam_torrent and owner_name) to only
assign when mam_torrent.owner_name.trim().is_empty() is false (i.e., ensure the
owner_name is non-empty after trimming) so you skip persisting empty owner names
and allow real linker names to be filled later.
- Around line 685-689: The current merge uses
meta.ids.extend(torrent.meta.ids.clone()) which lets stale stored IDs from
torrent.meta.ids override or shadow freshly built IDs; instead, merge by adding
only those stored IDs whose identifier keys are not already present in meta.ids
(e.g., filter torrent.meta.ids by absence in meta.ids before extending) so
existing meta.ids values are preserved; keep the meta.tags and meta.description
logic as-is.

In `@server/src/cleaner.rs`:
- Around line 47-52: The code calls batch.remove(0) without guarding against an
empty batch; update the guard in the function that contains this snippet (e.g.,
process_batch / the caller run_library_cleaner) so that empty batches are
handled safely — either return early when batch.is_empty() or change the
existing check to if batch.len() <= 1 { return Ok(()); } before calling
rank_torrents and batch.remove(0), ensuring rank_torrents and the subsequent
batch.remove(0) never operate on an empty vector.

In `@server/src/config_impl.rs`:
- Around line 377-393: allow_audio and allow_ebook currently only check
edition.categories and can incorrectly return true when an edition explicitly
excludes that media type; update both functions (allow_audio and allow_ebook) to
also inspect g.edition.media_type and treat explicit media-type exclusions as
disallowing that format (i.e., if media_type is present and does not permit
audio/ebook, skip that grab), while still falling back to the existing
categories logic when media_type is None/unspecified. Ensure you reference
g.edition.media_type in the match/condition and combine it with the current
categories check so the helper returns true only when the edition actually
allows that media type.
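A minimal sketch of the combined check, with hypothetical `MediaType`/`Edition` types standing in for the real config structs (the real `categories` check is reduced to a boolean here):

```rust
#[derive(PartialEq)]
enum MediaType { Audio, Ebook }

struct Edition {
    media_type: Option<MediaType>,
    audio_categories: bool, // stand-in for "categories permit audio"
}

// An explicit media_type wins; categories are only the fallback.
fn allow_audio(e: &Edition) -> bool {
    match &e.media_type {
        Some(mt) => *mt == MediaType::Audio, // explicit exclusion disallows audio
        None => e.audio_categories,          // fall back to categories logic
    }
}

fn main() {
    let explicit_ebook = Edition { media_type: Some(MediaType::Ebook), audio_categories: true };
    assert!(!allow_audio(&explicit_ebook)); // categories alone no longer leak audio through
    let unspecified = Edition { media_type: None, audio_categories: true };
    assert!(allow_audio(&unspecified));
}
```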

In `@server/src/linker/folder.rs`:
- Around line 221-222: The num_files field is only counting audio_files
(audio_files.len()) while size and filetypes include ebooks, so update the logic
that builds the struct containing num_files to count both audio and ebook files;
locate the places constructing the struct (the occurrences around num_files:
audio_files.len() and the duplicate at the other block) and replace the
audio-only count with a combined count (e.g., audio_files.len() +
ebook_files.len()) or a computed total_files variable so num_files correctly
reflects all filetypes alongside Size::from_bytes(size) and filetypes.
- Around line 414-435: The loop that processes audio_files (for file in
audio_files) never handles ebook files: selected_ebook_format is computed
elsewhere but ebook files are neither linked/copied nor added to library_files.
Update the linking logic to also iterate over the ebook collection (e.g.,
ebook_files or however ebooks are represented) and perform the same branching
based on library.options().method (calling hard_link, copy, or symlink as
appropriate) and push the ebook PathBuf into library_files; ensure you reference
the same helpers used for audio (hard_link, copy, symlink, library_path,
download_path, file_path) so ebooks are tracked and linked the same way as audio
files.
- Around line 63-80: The extension comparisons in folder classification use raw
path.extension() values and can miss matches when config entries contain dots or
different casing; compute a normalized extension string (e.g.
ext.to_string_lossy().trim_start_matches('.').to_lowercase()) for the entry (use
a local variable in folder.rs before the match) and compare that against
normalized config lists (normalize config.audio_types and config.ebook_types
entries similarly or compare by lowercasing and trimming dots on the fly).
Update the checks that push into metadata_files, audio_files, and ebook_files
(and any other extension checks) to use this normalized_ext, and ensure
consistency with select_format's normalization behavior.
- Around line 34-37: When iterating entries from read_dir(&l.rip_dir) before
calling link_folder, skip any non-directory entries to avoid downstream read_dir
failures; for each DirEntry (variable folder) call its file type check (e.g.,
file_type().await? or metadata check) and only await link_folder(&config,
library, &db, folder).await? when it is a directory, otherwise continue the loop
(optionally logging or counting skipped files). Ensure the check references the
same symbols used here: entries, folder, rip_dir, and link_folder.
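The extension normalization suggested for the classification checks can be sketched like this; `classify` and `audio_types` are hypothetical stand-ins for the real folder.rs logic and config lists:

```rust
use std::path::Path;

// Normalize both sides so "MP3", ".mp3", and "mp3" all compare equal.
fn normalize_ext(ext: &str) -> String {
    ext.trim_start_matches('.').to_lowercase()
}

fn classify(file: &str, audio_types: &[&str]) -> bool {
    let ext = Path::new(file)
        .extension()
        .map(|e| normalize_ext(&e.to_string_lossy()))
        .unwrap_or_default();
    audio_types.iter().any(|t| normalize_ext(t) == ext)
}

fn main() {
    let audio = ["MP3", ".m4b"]; // mixed casing and leading dots, as config may contain
    assert!(classify("book/Chapter 1.mp3", &audio));
    assert!(classify("book/Chapter 1.M4B", &audio));
    assert!(!classify("cover.jpg", &audio));
}
```

Normalizing the config entries on the fly keeps the comparison consistent with whatever `select_format` does, at the cost of a small allocation per check; precomputing normalized lists avoids that if it matters.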

In `@server/src/lists/mod.rs`:
- Around line 171-177: The current equality check treats None == None for mam_id
as a match and skips replacement; update the matching logic in the block that
checks field.status == TorrentStatus::Existing to avoid treating two Nones as
equal by either: (a) prefer comparing torrent_id first (compare field.torrent_id
to torrent.id and return false on equality), or (b) only compare mam_id when
both sides are Some (e.g., if field.mam_id.is_some() && torrent.mam_id.is_some()
&& field.mam_id == torrent.mam_id then return false); adjust the condition
around field, torrent, TorrentStatus, mam_id and torrent_id accordingly so
unrelated entries with None mam_id don’t short-circuit the replace of
ListItemTorrent.
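Option (b) above, requiring both sides to be `Some`, is a one-liner with `matches!`:

```rust
// Naive `a == b` on Options treats None == None as a match; requiring
// both sides to be Some avoids linking unrelated entries.
fn same_mam_id(a: Option<u64>, b: Option<u64>) -> bool {
    matches!((a, b), (Some(x), Some(y)) if x == y)
}

fn main() {
    assert!(same_mam_id(Some(42), Some(42)));
    assert!(!same_mam_id(Some(42), Some(7)));
    assert!(!same_mam_id(None, None)); // the case the naive `==` gets wrong
    assert!(!same_mam_id(Some(42), None));
}
```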

In `@server/src/main.rs`:
- Around line 616-618: The error context string is wrong: when calling
link_folders_to_library(...) you need to change the .context(...) call to use
the folder-linker label (e.g. "link_folders_to_library" or "folder_linker")
instead of "link_torrents_to_library" so failures are recorded under the correct
path; update the .context(...) on the Result returned by link_folders_to_library
to the appropriate folder-linker label.

In `@server/src/snatchlist.rs`:
- Around line 303-313: mam_id is currently derived only from mam_torrent
(mam_torrent.map(|m| m.id)) so when mam_torrent is None but the Torrent still
has a mam_id the event loses its MaM association; change the logic that computes
mam_id to fall back to torrent.mam_id when mam_torrent is None (i.e., compute
mam_id by checking mam_torrent first, then using torrent.mam_id), then pass that
mam_id into the call to write_event / Event::new so events preserve MaM context.
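The fallback described above is essentially `Option::or`; `MamTorrent` and `Torrent` here are minimal stand-ins for the real types:

```rust
struct MamTorrent { id: u64 }
struct Torrent { mam_id: Option<u64> }

// Prefer the fresh MaM result's id, fall back to the id stored on the torrent.
fn event_mam_id(mam_torrent: Option<&MamTorrent>, torrent: &Torrent) -> Option<u64> {
    mam_torrent.map(|m| m.id).or(torrent.mam_id)
}

fn main() {
    let stored = Torrent { mam_id: Some(7) };
    assert_eq!(event_mam_id(Some(&MamTorrent { id: 42 }), &stored), Some(42));
    assert_eq!(event_mam_id(None, &stored), Some(7)); // MaM association preserved
    assert_eq!(event_mam_id(None, &Torrent { mam_id: None }), None);
}
```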

In `@server/src/web/pages/torrent_edit.rs`:
- Line 44: The call to context.mam()? is unused and can fail, blocking metadata
edits; remove the unused assignment (let _mam = context.mam()?) so we no longer
attempt to obtain MaM at page render time, and ensure the code that calls
update_torrent_meta continues to pass None (or the existing optional Mam value)
rather than relying on context.mam() here; update references around
context.mam() and the variable _mam so no unused-binding or fallible call
remains.

In `@server/templates/pages/torrents.html`:
- Around line 229-232: The new external links rendered for MaM (using
torrent.mam_id) and ABS (using abs_url and abs_id) open in a new tab but lack
rel="noopener noreferrer"; update the anchor elements that generate those links
(the <a href="https://www.myanonamouse.net/t/{{ mam_id }}" target=_blank>MaM</a>
and the <a href="{{ abs_url }}/audiobookshelf/item/{{ abs_id }}"
target=_blank>ABS</a> in the torrents.html template) to include rel="noopener
noreferrer" alongside target="_blank" to prevent opener access and noreferrer
leaks.

In `@server/tests/linker_torrent_test.rs`:
- Around line 315-321: The test is discarding errors by using "let _ =" on the
call to link_torrents_to_library, so failures are hidden; update the test to
fail on error by capturing the result and asserting success (e.g., let res =
link_torrents_to_library(...).await; assert!(res.is_ok()); or use
.await.unwrap()), referencing the existing call to link_torrents_to_library and
the config/db/mock_qbit/mock_mam arguments so the test will surface any returned
Err instead of silently ignoring it.

Comment on lines +109 to +117
.map(|ext| {
let ext = ext.to_lowercase();
if ext.starts_with('.') {
ext.clone()
} else {
format!(".{ext}")
}
})
.find(|ext| files.iter().any(|f| f.name_lower().ends_with(ext)))

🧹 Nitpick | 🔵 Trivial

Redundant clone on line 112.

ext is already an owned String after to_lowercase(), so ext.clone() is unnecessary.

♻️ Suggested fix
         .map(|ext| {
             let ext = ext.to_lowercase();
             if ext.starts_with('.') {
-                ext.clone()
+                ext
             } else {
                 format!(".{ext}")
             }
         })
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@server/src/linker/common.rs` around lines 109 - 117, The closure in the
iterator chain (inside common.rs) unnecessarily clones the owned String `ext`
after calling `to_lowercase()`; update the map closure so it returns `ext`
directly instead of calling `ext.clone()` (i.e., keep the `let ext =
ext.to_lowercase(); if ext.starts_with('.') { ext } else { format!(".{ext}") }`
logic) so you avoid the redundant allocation while preserving the existing
behavior used by the subsequent `find(|ext| files.iter().any(|f|
f.name_lower().ends_with(ext)))`.

mam_id: torrent.meta.mam_id,
abs_id: None,
goodreads_id: torrent.goodreads_id,
mam_id: torrent.meta.mam_id(),

⚠️ Potential issue | 🟠 Major

Store the selected torrent's MaM ID, not the optional parsed copy.

Both upserts already have a guaranteed SelectedTorrent.mam_id, but Lines 139 and 256 derive the stored Torrent.mam_id from torrent.meta.ids. If that map is stale or missing ids::MAM, the download is persisted with mam_id = None, and later TorrentKey::mam_id lookups stop matching an otherwise valid torrent.

💡 Proposed fix
-                    mam_id: torrent.meta.mam_id(),
+                    mam_id: Some(torrent.mam_id),
...
-            mam_id: torrent.meta.mam_id(),
+            mam_id: Some(torrent.mam_id),

Also applies to: 256-256

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@server/src/torrent_downloader.rs` at line 139, The code stores Torrent.mam_id
from torrent.meta.ids (via torrent.meta.mam_id()), which can be missing; instead
use the guaranteed SelectedTorrent.mam_id when constructing the persisted
Torrent record. Replace uses of torrent.meta.mam_id()/looking up ids::MAM with
the SelectedTorrent instance's mam_id (e.g., selected.mam_id or
selected_torrent.mam_id) when building the Torrent passed to the upsert paths
(the same change applied in both places where mam_id is assigned, so
TorrentKey::mam_id lookups keep matching).

Comment on lines +575 to +580
.filter(|t| Some(t.id) != meta.mam_id())
.map(|mam_torrent| {
let meta = mam_torrent.as_meta()?;
let torrent = r
.get()
.secondary::<Torrent>(TorrentKey::mam_id, meta.mam_id)?;
.secondary::<Torrent>(TorrentKey::mam_id, meta.mam_id())?;

🛠️ Refactor suggestion | 🟠 Major

Use mam_torrent.id directly for the local lookup.

The MaM result already carries the authoritative ID, but Line 580 reparses it through meta.mam_id(). If as_meta() ever yields a missing or invalid ids::MAM, this page stops associating a known result with its local Torrent row.

💡 Proposed fix
-        .filter(|t| Some(t.id) != meta.mam_id())
+        .filter(|t| meta.mam_id() != Some(t.id))
         .map(|mam_torrent| {
             let meta = mam_torrent.as_meta()?;
             let torrent = r
                 .get()
-                .secondary::<Torrent>(TorrentKey::mam_id, meta.mam_id())?;
+                .secondary::<Torrent>(TorrentKey::mam_id, mam_torrent.id)?;
             let selected_torrent = r.get().primary(mam_torrent.id)?;
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@server/src/web/pages/torrent.rs` around lines 575 - 580, The code is using
meta.mam_id() for the local Torrent lookup which can fail if as_meta() yields a
malformed MAM; instead use the authoritative mam_torrent.id for the lookup.
Change the r.get().secondary::<Torrent>(TorrentKey::mam_id, meta.mam_id()) call
to use mam_torrent.id (or converted/wrapped to the expected key type) while
still calling mam_torrent.as_meta() to populate meta; i.e., keep as_meta() for
meta extraction but pass mam_torrent.id into TorrentKey::mam_id for the local
lookup so the known result always maps to its Torrent row.

Comment on lines 62 to 77
<form method=post class="infobox" inline>
<h2>Linker</h2>
<p>Last run: {% match linker_run_at %}{% when Some(run_at) %}{{ self::time(run_at) }}{% when None %}never{% endmatch %}
<button name=action value=run_linker>run now</button>
{% if linker_run_at.is_some() %}
<p>Result: {% match linker_result %}{% when Some(Ok(())) %}success{% when Some(Err(err)) %}{{ err }}{% when None %}running{% endmatch %}
<h2>Torrent Linker</h2>
<p>Last run: {% match torrent_linker_run_at %}{% when Some(run_at) %}{{ self::time(run_at) }}{% when None %}never{% endmatch %}
<button name=action value=run_torrent_linker>run now</button>
{% if torrent_linker_run_at.is_some() %}
<p>Result: {% match torrent_linker_result %}{% when Some(Ok(())) %}success{% when Some(Err(err)) %}{{ err }}{% when None %}running{% endmatch %}
{% endif %}
</form>

<form method=post class="infobox" inline>
<h2>Folder Linker</h2>
<p>Last run: {% match folder_linker_run_at %}{% when Some(run_at) %}{{ self::time(run_at) }}{% when None %}never{% endmatch %}
<button name=action value=run_folder_linker>run now</button>
{% if folder_linker_run_at.is_some() %}
<p>Result: {% match folder_linker_result %}{% when Some(Ok(())) %}success{% when Some(Err(err)) %}{{ err }}{% when None %}running{% endmatch %}
{% endif %}

⚠️ Potential issue | 🟡 Minor

Fix the invalid markup in the new linker cards.

Both new forms open a Result: paragraph without closing it, and the new action buttons should declare type="submit" explicitly.

Suggested patch
 <form method=post class="infobox" inline>
   <h2>Torrent Linker</h2>
   <p>Last run: {% match torrent_linker_run_at %}{% when Some(run_at) %}{{ self::time(run_at) }}{% when None %}never{% endmatch %}
-  <button name=action value=run_torrent_linker>run now</button>
+  <button type=submit name=action value=run_torrent_linker>run now</button>
   {% if torrent_linker_run_at.is_some() %}
-  <p>Result: {% match torrent_linker_result %}{% when Some(Ok(())) %}success{% when Some(Err(err)) %}{{ err }}{% when None %}running{% endmatch %}
+  <p>Result: {% match torrent_linker_result %}{% when Some(Ok(())) %}success{% when Some(Err(err)) %}{{ err }}{% when None %}running{% endmatch %}</p>
   {% endif %}
 </form>
 
 <form method=post class="infobox" inline>
   <h2>Folder Linker</h2>
   <p>Last run: {% match folder_linker_run_at %}{% when Some(run_at) %}{{ self::time(run_at) }}{% when None %}never{% endmatch %}
-  <button name=action value=run_folder_linker>run now</button>
+  <button type=submit name=action value=run_folder_linker>run now</button>
   {% if folder_linker_run_at.is_some() %}
-  <p>Result: {% match folder_linker_result %}{% when Some(Ok(())) %}success{% when Some(Err(err)) %}{{ err }}{% when None %}running{% endmatch %}
+  <p>Result: {% match folder_linker_result %}{% when Some(Ok(())) %}success{% when Some(Err(err)) %}{{ err }}{% when None %}running{% endmatch %}</p>
   {% endif %}
 </form>
🧰 Tools
🪛 HTMLHint (1.9.2)

[warning] 65-65: The type attribute must be present on `<button>` elements.

(button-type-require)


[error] 67-67: Tag must be paired, missing: [ </p> ], start tag match failed [ <p> ] on line 67.

(tag-pair)


[warning] 74-74: The type attribute must be present on `<button>` elements.

(button-type-require)


[error] 76-76: Tag must be paired, missing: [ </p> ], start tag match failed [ <p> ] on line 76.

(tag-pair)

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@server/templates/pages/index.html` around lines 62 - 77, The two new linker
panels ("Torrent Linker" and "Folder Linker") contain unclosed <p> tags and
their buttons lack an explicit type, which can break layout and form behavior;
update each form in the "Torrent Linker" and "Folder Linker" sections so the
"Last run:" paragraph is properly closed before the button, ensure the
conditional "Result:" paragraph is also closed after the {% match ... %}{%
endmatch %} block, and add type="submit" to the buttons with name=action
value=run_torrent_linker and name=action value=run_folder_linker so the form
submissions behave correctly (refer to the torrent_linker_result /
folder_linker_result match blocks and the buttons in those forms).

Comment on lines +100 to 102
{% if let Some(mam_id) = torrent.mam_id %}<a href="https://www.myanonamouse.net/t/{{ mam_id }}" target=_blank>MaM</a>{% endif %}
{% if let (Some(abs_url), Some(abs_id)) = (abs_url.as_ref(), torrent.meta.ids.get(ids::ABS)) %}
<a href="{{ abs_url }}/audiobookshelf/item/{{ abs_id }}" target=_blank>ABS</a>

⚠️ Potential issue | 🟡 Minor

Harden the new external links opened with target=_blank.

Both the MaM and ABS anchors should add rel="noopener noreferrer" now that these blocks are being touched.

Suggested patch
-    {% if let Some(mam_id) = torrent.mam_id %}<a href="https://www.myanonamouse.net/t/{{ mam_id }}" target=_blank>MaM</a>{% endif %}
+    {% if let Some(mam_id) = torrent.mam_id %}<a href="https://www.myanonamouse.net/t/{{ mam_id }}" target=_blank rel="noopener noreferrer">MaM</a>{% endif %}
     {% if let (Some(abs_url), Some(abs_id)) = (abs_url.as_ref(), torrent.meta.ids.get(ids::ABS)) %}
-      <a href="{{ abs_url }}/audiobookshelf/item/{{ abs_id }}" target=_blank>ABS</a>
+      <a href="{{ abs_url }}/audiobookshelf/item/{{ abs_id }}" target=_blank rel="noopener noreferrer">ABS</a>
     {% endif %}
@@
-    {% if let Some(mam_id) = replacement.mam_id %}<a href="https://www.myanonamouse.net/t/{{ mam_id }}" target=_blank>MaM</a>{% endif %}
+    {% if let Some(mam_id) = replacement.mam_id %}<a href="https://www.myanonamouse.net/t/{{ mam_id }}" target=_blank rel="noopener noreferrer">MaM</a>{% endif %}
     {% if let (Some(abs_url), Some(abs_id)) = (abs_url.as_ref(), replacement.meta.ids.get(ids::ABS)) %}
-      <a href="{{ abs_url }}/audiobookshelf/item/{{ abs_id }}" target=_blank>ABS</a>
+      <a href="{{ abs_url }}/audiobookshelf/item/{{ abs_id }}" target=_blank rel="noopener noreferrer">ABS</a>
     {% endif %}

Also applies to: 133-135

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@server/templates/pages/replaced.html` around lines 100 - 102, Add
rel="noopener noreferrer" to the external anchor tags that open in a new tab:
update the MaM anchor generated when torrent.mam_id is present (the tag created
in the block referencing torrent.mam_id / mam_id) and the ABS anchor created
when (abs_url, abs_id) exist (the tag using abs_url and abs_id /
torrent.meta.ids.get(ids::ABS)); also apply the same rel attribute to the other
identical anchors referenced later in the template (the second occurrence around
the abs/mam blocks). Ensure both anchors keep target="_blank" but include
rel="noopener noreferrer" to prevent reverse tabnabbing.

Comment on lines +82 to +84
<form method=post>
<button name=action value=relink>relink</button>
</form>

⚠️ Potential issue | 🟡 Minor

Add type attribute to the button element.

Per HTMLHint, <button> elements should have an explicit type attribute. Since this is in a form, use type="submit".

🔧 Proposed fix
       <form method=post>
-        <button name=action value=relink>relink</button>
+        <button type=submit name=action value=relink>relink</button>
       </form>
🧰 Tools
🪛 HTMLHint (1.9.2)

[warning] 83-83: The type attribute must be present on `<button>` elements.

(button-type-require)

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@server/templates/pages/torrent.html` around lines 82 - 84, The <button>
inside the form (the element with name="action" and value="relink") is missing
an explicit type; update that button element to include type="submit" so it
conforms to HTMLHint and ensures it submits the form as intended (i.e., change
the <button name="action" value="relink">relink</button> to include
type="submit").

Updated Torrent from {{ source.0 }} {{ source.1 }}<br />
<ul>
{% for field in fields %}
<li>{{ field.field }}: {{ field.from }} -> {{ field.to }}</li>

⚠️ Potential issue | 🟡 Minor

Escape the > character in HTML output.

HTMLHint flags that special characters like > should be escaped in HTML content. Use &gt; instead of -> to produce valid HTML.

🔧 Proposed fix
-        <li>{{ field.field }}: {{ field.from }} -> {{ field.to }}</li>
+        <li>{{ field.field }}: {{ field.from }} &gt; {{ field.to }}</li>
🧰 Tools
🪛 HTMLHint (1.9.2)

[error] 279-279: Special characters must be escaped : [ > ].

(spec-char-escape)

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@server/templates/pages/torrent.html` at line 279, The template line rendering
the change list item using the variables field.field, field.from and field.to
currently uses a literal "->" which includes a raw ">" character; update that
HTML template fragment (the <li> that contains {{ field.field }}: {{ field.from
}} -> {{ field.to }}) to replace the greater-than character with the escaped
entity &gt; (i.e. change "->" to " - &gt; " or similar) so the output is valid
HTML and passes HTMLHint.


@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 7

♻️ Duplicate comments (4)
server/templates/pages/torrents.html (1)

234-237: 🧹 Nitpick | 🔵 Trivial

Add rel="noopener noreferrer" to external links.

Both the MaM link (Line 234) and ABS link (Line 236) use target=_blank without a rel attribute, exposing the application to window.opener attacks.

Suggested fix
-    {% if let Some(mam_id) = torrent.mam_id %}<a href="https://www.myanonamouse.net/t/{{ mam_id }}" target=_blank>MaM</a>{% endif %}
+    {% if let Some(mam_id) = torrent.mam_id %}<a href="https://www.myanonamouse.net/t/{{ mam_id }}" target=_blank rel="noopener noreferrer">MaM</a>{% endif %}
     {% if let (Some(abs_url), Some(abs_id)) = (abs_url.as_ref(), torrent.meta.ids.get(ids::ABS)) %}
-      <a href="{{ abs_url }}/audiobookshelf/item/{{ abs_id }}" target=_blank>ABS</a>
+      <a href="{{ abs_url }}/audiobookshelf/item/{{ abs_id }}" target=_blank rel="noopener noreferrer">ABS</a>
     {% endif %}
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@server/templates/pages/torrents.html` around lines 234 - 237, The external
anchor elements rendering MaM and ABS links (the tag using torrent.mam_id /
mam_id and the tag using abs_url, abs_id from abs_url.as_ref() and
torrent.meta.ids.get(ids::ABS)) open in a new tab with target=_blank but lack a
rel attribute; update both anchor tags to include rel="noopener noreferrer"
alongside target=_blank to mitigate window.opener attacks.
server/src/linker/mod.rs (1)

1-9: 🧹 Nitpick | 🔵 Trivial

Consider re-exporting commonly used items from folder and libation_cats modules.

The folder module declares public items (e.g., link_folders_to_library, Libation) and libation_cats has public items (MappingDepth, three_plus_override_candidates) that are used by other parts of the codebase (e.g., libation_unmapped_categories.rs), but unlike common, duplicates, and torrent, nothing is re-exported from them.

For API consistency and discoverability, consider adding re-exports for commonly used public items:

 pub use self::common::{copy, file_size, hard_link, library_dir, map_path, select_format, symlink};
 pub use self::duplicates::{find_matches, rank_torrents};
 pub use self::torrent::{find_library, refresh_mam_metadata, refresh_metadata_relink, relink};
+pub use self::folder::link_folders_to_library;
+pub use self::libation_cats::{MappingDepth, three_plus_override_candidates};
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@server/src/linker/mod.rs` around lines 1 - 9, The mod does not re-export
commonly used symbols from folder and libation_cats, reducing discoverability;
add re-exports for the public items used elsewhere by updating mod.rs to
publicly re-export folder::link_folders_to_library and folder::Libation (or
other commonly used folder symbols) and libation_cats::MappingDepth and
libation_cats::three_plus_override_candidates so callers can import them from
linker::... the same way duplicates and torrent symbols are re-exported;
reference the module exports via the symbols link_folders_to_library, Libation,
MappingDepth, and three_plus_override_candidates when adding the pub use
statements.
server/src/web/pages/torrent.rs (1)

579-584: 🧹 Nitpick | 🔵 Trivial

Consider using mam_torrent.id directly for the secondary lookup.

Line 584 uses meta.mam_id() for the secondary lookup, but meta is derived from mam_torrent.as_meta(). If as_meta() produces a TorrentMeta where the ids map is malformed or missing the MAM entry, the lookup will fail even though mam_torrent.id is known and valid.

Using the authoritative mam_torrent.id directly would be more robust:

Suggested fix
         .map(|mam_torrent| {
             let meta = mam_torrent.as_meta()?;
             let torrent = r
                 .get()
-                .secondary::<Torrent>(TorrentKey::mam_id, meta.mam_id())?;
+                .secondary::<Torrent>(TorrentKey::mam_id, Some(mam_torrent.id))?;
             let selected_torrent = r.get().primary(mam_torrent.id)?;
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@server/src/web/pages/torrent.rs` around lines 579 - 584, The secondary DB
lookup should use the authoritative mam_torrent.id rather than meta.mam_id(): in
the closure that maps mam_torrent (where you call mam_torrent.as_meta() -> meta
and then r.get().secondary::<Torrent>(TorrentKey::mam_id, meta.mam_id())),
change the secondary key argument to use mam_torrent.id so the lookup does not
depend on potentially-malformed meta.ids; keep as_meta() if you still need meta
elsewhere but pass mam_torrent.id to
r.get().secondary::<Torrent>(TorrentKey::mam_id, mam_torrent.id).
server/templates/pages/torrent.html (1)

281-281: ⚠️ Potential issue | 🟡 Minor

Escape the > character in HTML output.

HTMLHint flags that special characters like > should be escaped in HTML content. Use &gt; or an arrow entity/symbol.

Suggested fix
-        <li>{{ field.field }}: {{ field.from }} -> {{ field.to }}</li>
+        <li>{{ field.field }}: {{ field.from }} → {{ field.to }}</li>
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@server/templates/pages/torrent.html` at line 281, Replace the raw "->" arrow
in the HTML template with an escaped entity to avoid unescaped special
characters; update the line that renders field.field, field.from and field.to
(the template containing {{ field.field }}, {{ field.from }}, {{ field.to }} and
the literal "->") to use a safe HTML entity such as &gt; or &rarr; instead of
the raw ">"/dash arrow so the output becomes escaped in the rendered <li>.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@mlm_db/src/impls/categories.rs`:
- Around line 838-1003: The current Category::from_str (impl FromStr for
Category) repeatedly iterates ~100 enum variants comparing
Category::normalize(category.as_str()) which is O(n) per parse; replace this
with a static LazyLock<HashMap<String, Category>> built once (key =
Category::normalize(category.as_str()), value = category) and use map.get(&key)
to return Ok(cloned_category) or fall back to Category::from_legacy_label(value)
as before; ensure the static is initialized with every variant used in the loop
and that FromStr still returns Err = String when neither the map nor
from_legacy_label succeed.
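A sketch of the suggested `LazyLock` lookup table, assuming `std::sync::LazyLock` (stable since Rust 1.80); `Category`, `ALL`, `label`, and `normalize` are simplified stand-ins for the real ~100-variant enum:

```rust
use std::collections::HashMap;
use std::str::FromStr;
use std::sync::LazyLock;

#[derive(Clone, Debug, PartialEq)]
enum Category { AudioFiction, EbookFiction }

impl Category {
    const ALL: [Category; 2] = [Category::AudioFiction, Category::EbookFiction];
    fn label(&self) -> &'static str {
        match self {
            Category::AudioFiction => "Audiobooks - Fiction",
            Category::EbookFiction => "E-books - Fiction",
        }
    }
    fn normalize(s: &str) -> String {
        s.chars().filter(|c| c.is_alphanumeric()).collect::<String>().to_lowercase()
    }
}

// Built once on first access; every parse afterwards is a single hash lookup
// instead of an O(n) scan over all variants.
static LOOKUP: LazyLock<HashMap<String, Category>> = LazyLock::new(|| {
    Category::ALL
        .iter()
        .map(|c| (Category::normalize(c.label()), c.clone()))
        .collect()
});

impl FromStr for Category {
    type Err = String;
    fn from_str(s: &str) -> Result<Self, Self::Err> {
        LOOKUP
            .get(&Category::normalize(s))
            .cloned()
            .ok_or_else(|| s.to_string()) // the real code would try from_legacy_label first
    }
}

fn main() {
    assert_eq!("audiobooks-fiction".parse::<Category>(), Ok(Category::AudioFiction));
    assert!("unknown".parse::<Category>().is_err());
}
```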

In `@mlm_db/src/impls/meta.rs`:
- Around line 42-47: The diff currently records ids using debug formatting
("{:?}") which yields a compact map representation; change TorrentMetaDiff
creation for TorrentMetaField::Ids so the from and to strings are a more
readable key-value list (e.g., sort entries of self.ids and other.ids and join
as "key: value" lines or use a pretty-printed JSON string) before assigning to
from/to; update the block that compares self.ids and other.ids (referencing ids,
TorrentMetaDiff, TorrentMetaField::Ids, self.ids, other.ids) to produce the
formatted strings for clearer diffs.
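The sorted key-value rendering suggested above can be sketched like this; `format_ids` is a hypothetical helper name:

```rust
use std::collections::HashMap;

// Sorting entries and joining them as "key: value" gives a stable,
// human-readable rendering for the diff instead of `{:?}` output.
fn format_ids(ids: &HashMap<String, String>) -> String {
    let mut entries: Vec<String> = ids.iter().map(|(k, v)| format!("{k}: {v}")).collect();
    entries.sort();
    entries.join(", ")
}

fn main() {
    let ids = HashMap::from([
        ("mam".to_string(), "123".to_string()),
        ("asin".to_string(), "B00X".to_string()),
    ]);
    // Deterministic despite HashMap's arbitrary iteration order.
    assert_eq!(format_ids(&ids), "asin: B00X, mam: 123");
}
```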

In `@mlm_db/src/v18.rs`:
- Around line 579-609: The conversion implementation From<v17::TorrentMeta> for
TorrentMeta is re-parsing edition from the title via parse_edition(&t.title, "")
and discarding the already-populated t.edition; change the mapping to preserve
the existing parsed edition by using t.edition (and only fall back to parsing if
t.edition is empty/null if desired). Update the tuple assignment that currently
does let (title, edition) = parse_edition(&t.title, ""); to instead obtain title
from t.title and edition from t.edition (or conditionally call parse_edition
only when t.edition is empty), leaving all other fields in the constructor
(e.g., ids, vip_status, categories, tags, authors, uploaded_at) unchanged.

In `@mlm_mam/src/user_torrent.rs`:
- Around line 222-223: Move or update the misleading TODO so it sits next to the
num_files initialization rather than above description: find the struct/instance
initialization where num_files and description are set (symbols: num_files,
description in user_torrent.rs) and either relocate the comment to directly
precede num_files or change the comment text to accurately describe description;
ensure the TODO correctly references num_files if it remains about that field,
or remove/adjust it if it pertains to description.

In `@server/src/bin/libation_unmapped_categories.rs`:
- Around line 70-82: collect_json_files currently descends into directories
without checking for symbolic links, risking infinite recursion on symlink
cycles; update collect_json_files to skip symlinks by using fs::symlink_metadata
(or path.is_symlink()) and test the file_type().is_symlink() before recursing,
so directories where metadata indicates a symlink are not traversed, and ensure
errors from symlink_metadata are handled with context similar to existing
read_dir calls.

In `@server/templates/pages/torrent_mam.html`:
- Around line 23-24: The external anchor that opens MaM in a new tab (the <a>
element with href "https://www.myanonamouse.net/t/{{ mam_torrent.id }}" and
class "btn") uses target=_blank but lacks a rel attribute, so update that anchor
to include rel="noopener noreferrer" to prevent window.opener attacks and
referrer leakage while preserving target=_blank behavior.

In `@server/tests/linker_folder_test.rs`:
- Line 354: The call to link_folders_to_library in the test is passing an extra
third argument &Events::new(); remove the third argument so the call is
link_folders_to_library(config.clone(), test_db.db.clone()). Update the
invocation at the location that uses link_folders_to_library(...) so it matches
the function signature (config, db) used elsewhere in the file and in the other
test calls.

---

Duplicate comments:
In `@server/src/linker/mod.rs`:
- Around line 1-9: The mod does not re-export commonly used symbols from folder
and libation_cats, reducing discoverability; add re-exports for the public items
used elsewhere by updating mod.rs to publicly re-export
folder::link_folders_to_library and folder::Libation (or other commonly used
folder symbols) and libation_cats::MappingDepth and
libation_cats::three_plus_override_candidates so callers can import them from
linker::... the same way duplicates and torrent symbols are re-exported;
reference the module exports via the symbols link_folders_to_library, Libation,
MappingDepth, and three_plus_override_candidates when adding the pub use
statements.

In `@server/src/web/pages/torrent.rs`:
- Around line 579-584: The secondary DB lookup should use the authoritative
mam_torrent.id rather than meta.mam_id(): in the closure that maps mam_torrent
(where you call mam_torrent.as_meta() -> meta and then
r.get().secondary::<Torrent>(TorrentKey::mam_id, meta.mam_id())), change the
secondary key argument to use mam_torrent.id so the lookup does not depend on
potentially-malformed meta.ids; keep as_meta() if you still need meta elsewhere
but pass mam_torrent.id to r.get().secondary::<Torrent>(TorrentKey::mam_id,
mam_torrent.id).

In `@server/templates/pages/torrent.html`:
- Line 281: Replace the raw "->" arrow in the HTML template with an escaped
entity to avoid unescaped special characters; update the line that renders
field.field, field.from and field.to (the template containing {{ field.field }},
{{ field.from }}, {{ field.to }} and the literal "->") to use a safe HTML entity
such as &gt; or &rarr; instead of the raw ">"/dash arrow so the output becomes
escaped in the rendered <li>.

In `@server/templates/pages/torrents.html`:
- Around line 234-237: The external anchor elements rendering MaM and ABS links
(the tag using torrent.mam_id / mam_id and the tag using abs_url, abs_id from
abs_url.as_ref() and torrent.meta.ids.get(ids::ABS)) open in a new tab with
target=_blank but lack a rel attribute; update both anchor tags to include
rel="noopener noreferrer" alongside target=_blank to mitigate window.opener
attacks.

ℹ️ Review info
⚙️ Run configuration

Configuration used: Organization UI

Review profile: ASSERTIVE

Plan: Pro

Run ID: 8876cd37-ea3b-409d-9349-fa6abf286e3c

📥 Commits

Reviewing files that changed from the base of the PR and between 4e5aefe and d0f0ed6.

📒 Files selected for processing (16)
  • mlm_db/src/impls/categories.rs
  • mlm_db/src/impls/meta.rs
  • mlm_db/src/lib.rs
  • mlm_db/src/v18.rs
  • mlm_db/tests/meta_diff.rs
  • mlm_mam/src/search.rs
  • mlm_mam/src/user_torrent.rs
  • server/src/bin/libation_unmapped_categories.rs
  • server/src/linker/folder.rs
  • server/src/linker/libation_cats.rs
  • server/src/linker/mod.rs
  • server/src/web/pages/torrent.rs
  • server/templates/pages/torrent.html
  • server/templates/pages/torrent_mam.html
  • server/templates/pages/torrents.html
  • server/tests/linker_folder_test.rs

Comment on lines +838 to 1003
impl FromStr for Category {
type Err = String;

fn from_str(value: &str) -> Result<Self, Self::Err> {
let key = Category::normalize(value);

for category in [
Category::Fantasy,
Category::ScienceFiction,
Category::Romance,
Category::Historical,
Category::ContemporaryRealist,
Category::Mystery,
Category::Thriller,
Category::Crime,
Category::Horror,
Category::ActionAdventure,
Category::Dystopian,
Category::PostApocalyptic,
Category::MagicalRealism,
Category::Western,
Category::Military,
Category::EpicFantasy,
Category::UrbanFantasy,
Category::SwordAndSorcery,
Category::HardSciFi,
Category::SpaceOpera,
Category::Cyberpunk,
Category::TimeTravel,
Category::AlternateHistory,
Category::ProgressionFantasy,
Category::RomanticComedy,
Category::RomanticSuspense,
Category::ParanormalRomance,
Category::DarkRomance,
Category::WhyChoose,
Category::Erotica,
Category::Detective,
Category::Noir,
Category::LegalThriller,
Category::PsychologicalThriller,
Category::CozyMystery,
Category::BodyHorror,
Category::GothicHorror,
Category::CosmicHorror,
Category::ParanormalHorror,
Category::Lgbtqia,
Category::TransRepresentation,
Category::DisabilityRepresentation,
Category::NeurodivergentRepresentation,
Category::PocRepresentation,
Category::ComingOfAge,
Category::FoundFamily,
Category::EnemiesToLovers,
Category::FriendsToLovers,
Category::FakeDating,
Category::SecondChance,
Category::SlowBurn,
Category::PoliticalIntrigue,
Category::Revenge,
Category::Redemption,
Category::Survival,
Category::Retelling,
Category::Ancient,
Category::Medieval,
Category::EarlyModern,
Category::NineteenthCentury,
Category::TwentiethCentury,
Category::Contemporary,
Category::Future,
Category::AlternateTimeline,
Category::AlternateUniverse,
Category::SmallTown,
Category::Urban,
Category::Rural,
Category::AcademySchool,
Category::Space,
Category::Africa,
Category::EastAsia,
Category::SouthAsia,
Category::SoutheastAsia,
Category::MiddleEast,
Category::Europe,
Category::NorthAmerica,
Category::LatinAmerica,
Category::Caribbean,
Category::Oceania,
Category::Cozy,
Category::Dark,
Category::Gritty,
Category::Wholesome,
Category::Funny,
Category::Satire,
Category::Emotional,
Category::CharacterDriven,
Category::Children,
Category::MiddleGrade,
Category::YoungAdult,
Category::NewAdult,
Category::Adult,
Category::Audiobook,
Category::Ebook,
Category::GraphicNovelsComics,
Category::Manga,
Category::Novella,
Category::LightNovel,
Category::ShortStories,
Category::Anthology,
Category::Poetry,
Category::Essays,
Category::Epistolary,
Category::DramaPlays,
Category::FullCast,
Category::DualNarration,
Category::DuetNarration,
Category::DramatizedAdaptation,
Category::AuthorNarrated,
Category::Abridged,
Category::Biography,
Category::Memoir,
Category::History,
Category::TrueCrime,
Category::Philosophy,
Category::ReligionSpirituality,
Category::MythologyFolklore,
Category::OccultEsotericism,
Category::PoliticsSociety,
Category::Business,
Category::PersonalFinance,
Category::ParentingFamily,
Category::SelfHelp,
Category::Psychology,
Category::HealthWellness,
Category::Science,
Category::Technology,
Category::Travel,
Category::Mathematics,
Category::ComputerScience,
Category::DataAi,
Category::Medicine,
Category::NatureEnvironment,
Category::Engineering,
Category::ArtPhotography,
Category::Music,
Category::SheetMusicScores,
Category::FilmTelevision,
Category::PopCulture,
Category::Humor,
Category::LiteraryCriticism,
Category::CookingFood,
Category::HomeGarden,
Category::CraftsDiy,
Category::SportsOutdoors,
Category::Textbook,
Category::Reference,
Category::Workbook,
Category::GuideManual,
Category::LanguageLinguistics,
] {
if Category::normalize(category.as_str()) == key {
return Ok(category);
}
}

Category::from_legacy_label(value).ok_or_else(|| format!("Unknown category: {value}"))
}

🧹 Nitpick | 🔵 Trivial

Consider using a static lookup map for category parsing.

The FromStr implementation iterates through all ~100 categories on each parse. For performance-critical paths, a LazyLock<HashMap<String, Category>> would provide O(1) lookup.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@mlm_db/src/impls/categories.rs` around lines 838 - 1003, The current
Category::from_str (impl FromStr for Category) repeatedly iterates ~100 enum
variants comparing Category::normalize(category.as_str()) which is O(n) per
parse; replace this with a static LazyLock<HashMap<String, Category>> built once
(key = Category::normalize(category.as_str()), value = category) and use
map.get(&key) to return Ok(cloned_category) or fall back to
Category::from_legacy_label(value) as before; ensure the static is initialized
with every variant used in the loop and that FromStr still returns Err = String
when neither the map nor from_legacy_label succeed.

Comment on lines +70 to +82
fn collect_json_files(dir: &Path, out: &mut Vec<PathBuf>) -> Result<()> {
for entry in fs::read_dir(dir).with_context(|| format!("failed to read {}", dir.display()))? {
let entry = entry?;
let path = entry.path();
if path.is_dir() {
collect_json_files(&path, out)?;
} else if path.extension().is_some_and(|ext| ext == "json") {
out.push(path);
}
}

Ok(())
}

🧹 Nitpick | 🔵 Trivial

Consider adding symlink loop protection.

The recursive collect_json_files function follows directories without checking for symbolic links, which could cause infinite recursion if symlinks create cycles.

Consider using path.is_symlink() check or fs::symlink_metadata to detect and skip symlinks, or document that symlinks are not supported.

Suggested fix
 fn collect_json_files(dir: &Path, out: &mut Vec<PathBuf>) -> Result<()> {
     for entry in fs::read_dir(dir).with_context(|| format!("failed to read {}", dir.display()))? {
         let entry = entry?;
         let path = entry.path();
-        if path.is_dir() {
+        if path.is_symlink() {
+            continue; // Skip symlinks to avoid potential cycles
+        } else if path.is_dir() {
             collect_json_files(&path, out)?;
         } else if path.extension().is_some_and(|ext| ext == "json") {
             out.push(path);
         }
     }

     Ok(())
 }
📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change
fn collect_json_files(dir: &Path, out: &mut Vec<PathBuf>) -> Result<()> {
for entry in fs::read_dir(dir).with_context(|| format!("failed to read {}", dir.display()))? {
let entry = entry?;
let path = entry.path();
if path.is_dir() {
collect_json_files(&path, out)?;
} else if path.extension().is_some_and(|ext| ext == "json") {
out.push(path);
}
}
Ok(())
}
fn collect_json_files(dir: &Path, out: &mut Vec<PathBuf>) -> Result<()> {
for entry in fs::read_dir(dir).with_context(|| format!("failed to read {}", dir.display()))? {
let entry = entry?;
let path = entry.path();
if path.is_symlink() {
continue; // Skip symlinks to avoid potential cycles
} else if path.is_dir() {
collect_json_files(&path, out)?;
} else if path.extension().is_some_and(|ext| ext == "json") {
out.push(path);
}
}
Ok(())
}
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@server/src/bin/libation_unmapped_categories.rs` around lines 70 - 82,
collect_json_files currently descends into directories without checking for
symbolic links, risking infinite recursion on symlink cycles; update
collect_json_files to skip symlinks by using fs::symlink_metadata (or
path.is_symlink()) and test the file_type().is_symlink() before recursing, so
directories where metadata indicates a symlink are not traversed, and ensure
errors from symlink_metadata are handled with context similar to existing
read_dir calls.

Comment on lines +23 to 24
id: {{ mam_torrent.id }} {% if let Some(vip_status) = meta.vip_status %}{{ vip_status }}{% endif %}
<a href="https://www.myanonamouse.net/t/{{ mam_torrent.id }}" target=_blank class=btn>Open in MaM</a>

🧹 Nitpick | 🔵 Trivial

Harden external MaM link with rel="noopener noreferrer".

Line 24 opens an external URL with target=_blank without a rel attribute, which can expose the opener to window.opener attacks and referrer leakage.

Suggested fix
-    <a href="https://www.myanonamouse.net/t/{{ mam_torrent.id }}" target=_blank class=btn>Open in MaM</a>
+    <a href="https://www.myanonamouse.net/t/{{ mam_torrent.id }}" target=_blank rel="noopener noreferrer" class=btn>Open in MaM</a>
📝 Committable suggestion


Suggested change
id: {{ mam_torrent.id }} {% if let Some(vip_status) = meta.vip_status %}{{ vip_status }}{% endif %}
<a href="https://www.myanonamouse.net/t/{{ mam_torrent.id }}" target=_blank class=btn>Open in MaM</a>
id: {{ mam_torrent.id }} {% if let Some(vip_status) = meta.vip_status %}{{ vip_status }}{% endif %}
<a href="https://www.myanonamouse.net/t/{{ mam_torrent.id }}" target=_blank rel="noopener noreferrer" class=btn>Open in MaM</a>
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@server/templates/pages/torrent_mam.html` around lines 23 - 24, The external
anchor that opens MaM in a new tab (the <a> element with href
"https://www.myanonamouse.net/t/{{ mam_torrent.id }}" and class "btn") uses
target=_blank but lacks a rel attribute, so update that anchor to include
rel="noopener noreferrer" to prevent window.opener attacks and referrer leakage
while preserving target=_blank behavior.

@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 14

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (4)
server/src/web/pages/config.rs (1)

183-237: ⚠️ Potential issue | 🟠 Major

Edition filter fields accessed here will always be empty.

Due to TorrentFilter.edition being #[serde(skip)] in config.rs, all accesses to self.filter.edition.categories, self.filter.edition.languages, self.filter.edition.flags, and self.filter.edition.min_size/max_size will return default (empty) values. This appears to be an incomplete migration - should these access self.edition instead of self.filter.edition?

Suggested fix
             for cat in self
-                .filter
-                .edition
+                .edition
                 .categories
                 .audio

Apply similar changes throughout the function to use self.edition instead of self.filter.edition.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@server/src/web/pages/config.rs` around lines 183 - 237, The code is reading
edition-related fields from self.filter.edition which is #[serde(skip)] and thus
always empty; update every access in this block to use self.edition instead of
self.filter.edition (replace accesses to self.filter.edition.categories.audio,
self.filter.edition.categories.ebook, self.filter.edition.languages,
self.filter.edition.flags.as_search_bitfield(), and self.filter.edition.min_size
/ self.filter.edition.max_size with the corresponding self.edition.* references)
so the query parameters are built from the actual Edition state.
server/src/qbittorrent.rs (1)

168-169: ⚠️ Potential issue | 🟡 Minor

Retry logic masks the retry's error.

If the first create_category call fails, the retry's result is passed through map_err(|_| e)?, so a second failure returns the original error e and discards the retry's error context. If the retry succeeds, map_err is never invoked and the call proceeds, but the pattern obscures that the first error is intentionally swallowed.

Suggested fix
-    if let Err(e) = qbit.create_category(category, "").await {
-        qbit.create_category(category, "").await.map_err(|_| e)?;
-    }
+    if qbit.create_category(category, "").await.is_err() {
+        // Retry once; if the retry also fails, surface its error
+        qbit.create_category(category, "").await?;
+    }
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@server/src/qbittorrent.rs` around lines 168 - 169, The current retry of
qbit.create_category(category, "").await discards the retry outcome and always
returns the original error `e`; remove the `map_err(|_| e)?` pattern and
explicitly handle the retry result: call qbit.create_category(category,
"").await a second time, and if it succeeds proceed normally, but if it fails
return the retry error (or wrap/combine both errors for context). Update the
logic around qbit.create_category and `category` so success from the retry is
honored and failure returns the actual retry error instead of the original `e`.
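The intended retry semantics can be modeled with a synchronous stand-in. The real call is async and talks to qBittorrent; `Client`, `fail_times`, and `create_category_with_retry` here are hypothetical names used only to show that success on either attempt is honored and a double failure reports the second error.

```rust
use std::cell::Cell;

// Hypothetical stand-in for the qBittorrent API wrapper: fails the
// first `fail_times` calls, then succeeds.
struct Client {
    fail_times: Cell<u32>,
}

impl Client {
    fn create_category(&self, _name: &str) -> Result<(), String> {
        if self.fail_times.get() > 0 {
            self.fail_times.set(self.fail_times.get() - 1);
            Err("transient failure".to_string())
        } else {
            Ok(())
        }
    }
}

// Retry once; a success on either attempt is honored, and a second
// failure surfaces the retry's error instead of masking it with the first.
fn create_category_with_retry(client: &Client, name: &str) -> Result<(), String> {
    match client.create_category(name) {
        Ok(()) => Ok(()),
        Err(_first) => client.create_category(name),
    }
}

fn main() {
    let flaky = Client { fail_times: Cell::new(1) };
    assert!(create_category_with_retry(&flaky, "audiobooks").is_ok());

    let broken = Client { fail_times: Cell::new(2) };
    assert!(create_category_with_retry(&broken, "audiobooks").is_err());
}
```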
server/src/autograbber.rs (1)

430-443: ⚠️ Potential issue | 🟠 Major

Handle dry_run before unwrapping the write transaction.

rw_opt is None when dry_run is enabled, so add_metadata_only_torrent(rw_opt.unwrap(), ...) will panic for Cost::MetadataOnlyAdd.

Proposed fix
         if cost == Cost::MetadataOnlyAdd {
+            if dry_run {
+                continue 'torrent;
+            }
             let mam_id = torrent.id;
             add_metadata_only_torrent(rw_opt.unwrap(), torrent, meta)
                 .await
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@server/src/autograbber.rs` around lines 430 - 443, The code unwraps rw_opt
during Cost::MetadataOnlyAdd which will panic when dry_run makes rw_opt == None;
update the logic in the Cost::MetadataOnlyAdd branch (where
add_metadata_only_torrent is called) to check dry_run / rw_opt before unwrapping
and either skip the DB write when dry_run is true or handle the None case
explicitly: if rw_opt.is_none() then avoid calling add_metadata_only_torrent (or
call a dry-run-safe variant), otherwise call
add_metadata_only_torrent(rw_opt.unwrap(), torrent, meta). Keep the existing
error handling around add_metadata_only_torrent and the continue 'torrent; flow.
server/src/config_impl.rs (1)

991-1013: ⚠️ Potential issue | 🔴 Critical

Wrap uploaded_at in Some() to match the Option<Timestamp> field type.

TorrentMeta.uploaded_at is Option<Timestamp>, but default_meta() assigns Timestamp::now() directly. This is a type mismatch that would cause a compilation error.

Fix
-                uploaded_at: Timestamp::now(),
+                uploaded_at: Some(Timestamp::now()),
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@server/src/config_impl.rs` around lines 991 - 1013, default_meta() is
constructing a TorrentMeta where the uploaded_at field is Option<Timestamp> but
it's assigned a raw Timestamp::now(); change the assignment to
Some(Timestamp::now()) so the types match; update the uploaded_at expression in
the default_meta function (referencing TorrentMeta and default_meta) to wrap
Timestamp::now() in Some(...) and keep the rest of the struct unchanged.
♻️ Duplicate comments (38)
server/templates/pages/selected.html (1)

171-171: ⚠️ Potential issue | 🟡 Minor

Add rel="noopener noreferrer" on external link.

The external MaM link opens in a new tab without rel attribute, which can expose the opener to window.opener attacks.

Suggested fix
-  <div><a href="https://www.myanonamouse.net/t/{{ torrent.mam_id }}" target=_blank>MaM</a></div>
+  <div><a href="https://www.myanonamouse.net/t/{{ torrent.mam_id }}" target=_blank rel="noopener noreferrer">MaM</a></div>
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@server/templates/pages/selected.html` at line 171, Update the external MaM
anchor that uses the torrent.mam_id template variable to include rel="noopener
noreferrer" so the link (<a href="https://www.myanonamouse.net/t/{{
torrent.mam_id }}" target=_blank>) does not expose window.opener; locate the
anchor in the selected.html template and add the rel attribute alongside the
existing target attribute.
server/templates/pages/torrent_mam.html (1)

23-24: ⚠️ Potential issue | 🟡 Minor

Add rel="noopener noreferrer" on external MaM link.

Line 24 opens an external URL with target=_blank without a rel attribute, which can expose the opener to window.opener attacks.

Suggested fix
-    <a href="https://www.myanonamouse.net/t/{{ mam_torrent.id }}" target=_blank class=btn>Open in MaM</a>
+    <a href="https://www.myanonamouse.net/t/{{ mam_torrent.id }}" target=_blank rel="noopener noreferrer" class=btn>Open in MaM</a>
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@server/templates/pages/torrent_mam.html` around lines 23 - 24, The external
MaM anchor that opens in a new tab (the <a
href="https://www.myanonamouse.net/t/{{ mam_torrent.id }}" target=_blank
class=btn> element) must include rel="noopener noreferrer" to prevent
window.opener attacks; update the anchor in the torrent_mam.html template to add
rel="noopener noreferrer" alongside the existing target attribute.
server/src/web/pages/search.rs (1)

48-51: ⚠️ Potential issue | 🟠 Major

Use stable mam_torrent.id for torrent lookup.

meta.mam_id() parses from the ids map and may return None if parsing fails, causing the secondary lookup to miss an existing DB row. Since mam_torrent.id (line 51) is already the authoritative MAM ID, use it consistently for both lookups.

Suggested fix
             let meta = mam_torrent.as_meta()?;
             let torrent = r
                 .get()
-                .secondary::<Torrent>(TorrentKey::mam_id, meta.mam_id())?;
+                .secondary::<Torrent>(TorrentKey::mam_id, Some(mam_torrent.id))?;
             let selected_torrent = r.get().primary(mam_torrent.id)?;
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@server/src/web/pages/search.rs` around lines 48 - 51, The secondary lookup
uses meta.mam_id() which can be None and miss existing rows; change the
secondary lookup to use the authoritative mam_torrent.id instead. Specifically,
update the call that sets torrent (the variable obtained via
r.get().secondary::<Torrent>(TorrentKey::mam_id, meta.mam_id())) to pass
mam_torrent.id (the same stable ID used by selected_torrent) so both lookups use
the same stable MAM ID.
mlm_parse/src/lib.rs (3)

72-74: ⚠️ Potential issue | 🟡 Minor

Restore fourth and fix eighth in EDITION_START_REGEX.

The regex at line 73 is missing fourth and spells eighth as eight. Multi-word prefixes like "Fourth Anniversary Edition" and "Eighth Revised Edition" won't match this branch.

Suggested fix
 static EDITION_START_REGEX: Lazy<Regex> = Lazy::new(|| {
-    Regex::new(r"(?i)((\d+(?:st|nd|rd|th)|first|second|third|fifth|sixth|seventh|eight|ninth|tenth|new|revised|updated)\s+(?:[a-z']+\s+)*(?:Edition|ed\.|utgåva)|(\w+?)\s+(?:Edition|ed\.|utgåva))").unwrap()
+    Regex::new(r"(?i)((\d+(?:st|nd|rd|th)|first|second|third|fourth|fifth|sixth|seventh|eighth|ninth|tenth|new|revised|updated)\s+(?:[a-z']+\s+)*(?:Edition|ed\.|utgåva)|(\w+?)\s+(?:Edition|ed\.|utgåva))").unwrap()
 });
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@mlm_parse/src/lib.rs` around lines 72 - 74, The EDITION_START_REGEX is
missing "fourth" and mistakenly uses "eight" instead of "eighth" in the
ordinal-word alternation; update the alternation inside the Regex::new(...) for
static EDITION_START_REGEX to include "fourth" and correct "eight" to "eighth"
(so entries read
"...first|second|third|fourth|fifth|sixth|seventh|eighth|ninth|tenth...") so
multi-word prefixes like "Fourth Anniversary Edition" and "Eighth Revised
Edition" will match.

435-448: 🧹 Nitpick | 🔵 Trivial

Empty test functions won't catch regressions.

test_parse_series_pattern5_standalone_novel and test_parse_series_pattern6_collection contain only comments. While the comments explain limited practical use, consider either adding minimal assertions or removing the test stubs entirely to avoid false confidence in coverage.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@mlm_parse/src/lib.rs` around lines 435 - 448, These two test stubs
(test_parse_series_pattern5_standalone_novel and
test_parse_series_pattern6_collection) are empty and should either be removed or
turned into real tests: replace each stub with a minimal assertion that
exercises the parser under test (call the existing parsing function used by
other tests and assert expected output or None), or delete the test functions
altogether to avoid false coverage; ensure the updated tests follow the same
patterns as other tests in this file so they compile and actually validate
behavior.

141-147: ⚠️ Potential issue | 🟠 Major

Optional series-number patterns return None instead of Some((title, None)).

The ? operator on line 146 causes the function to return None when no series number is captured, even though the return type is Option<(&str, Option<f32>)>. Titles with a series name but no number (matching patterns 5, 6, or 11) will return None instead of Some((title, None)).

Suggested fix
-        let series_number = captures
-            .get(2)
-            .or(captures.get(4))
-            .or(captures.get(6))
-            .or(captures.get(7))
-            .or(captures.get(10))?;
-        let series_number = match series_number.as_str().to_lowercase().as_str() {
+        let series_number = captures
+            .get(2)
+            .or(captures.get(4))
+            .or(captures.get(6))
+            .or(captures.get(7))
+            .or(captures.get(10))
+            .and_then(|m| {
+                let n = match m.as_str().to_lowercase().as_str() {
+                    "one" | "i" => Some(1.0),
+                    "two" | "ii" => Some(2.0),
+                    "three" | "iii" => Some(3.0),
+                    "four" | "iv" => Some(4.0),
+                    "five" | "v" => Some(5.0),
+                    "six" | "vi" => Some(6.0),
+                    "seven" | "vii" => Some(7.0),
+                    "eight" | "viii" => Some(8.0),
+                    "nine" | "ix" => Some(9.0),
+                    "ten" | "x" => Some(10.0),
+                    other => other.parse::<f32>().ok(),
+                }?;
+                (n >= 0.0).then_some(n)
+            });
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@mlm_parse/src/lib.rs` around lines 141 - 147, The current use of the ? on the
series_number capture causes early return when no series-number group is
present; change the logic around the captures.get(2).or(...).or(...) sequence so
it does not use ? but instead matches the Option value—if the capture is None,
return Some((title, None)) (i.e., preserve a present title with no numeric
series), otherwise proceed to parse series_number.as_str() as before; update the
code around the series_number variable assignment and the following match to
handle the Some(capture) / None case explicitly in the function that returns
Option<(&str, Option<f32>)>.
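The `and_then`-based fix above can be reduced to a minimal model without the regex machinery. `parse_series` and its `capture` parameter are hypothetical stand-ins for the capture-group lookup: a missing series number must yield `Some((title, None))`, never abort the whole parse the way `?` does.

```rust
// A missing series number is an independently optional field: the title
// survives either way. `capture` models the regex capture group.
fn parse_series<'a>(title: &'a str, capture: Option<&str>) -> Option<(&'a str, Option<f32>)> {
    let series_number = capture.and_then(|s| match s.to_lowercase().as_str() {
        // Word and Roman-numeral forms, abbreviated to two entries here.
        "one" | "i" => Some(1.0),
        "two" | "ii" => Some(2.0),
        // Numeric forms; negative numbers are rejected.
        other => other.parse::<f32>().ok().filter(|n| *n >= 0.0),
    });
    Some((title, series_number))
}

fn main() {
    assert_eq!(parse_series("Dune", Some("2")), Some(("Dune", Some(2.0))));
    assert_eq!(parse_series("Dune", Some("ii")), Some(("Dune", Some(2.0))));
    // No captured number: keep the title instead of returning None.
    assert_eq!(parse_series("Dune", None), Some(("Dune", None)));
}
```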
mlm_parse/tests/parse_tests.rs (1)

11-16: ⚠️ Potential issue | 🟡 Minor

Make this normalization check exact.

starts_with("amazing and strange") won't catch regressions in the volume/format cleanup. Assert the full normalized string to ensure the test fails if normalization changes.

Suggested fix
 #[test]
 fn test_normalize_title_variants() {
     let s = "The Amazing & Strange Vol. 2 (light novel)";
     let n = normalize_title(s);
-    // Expect articles removed, ampersand -> and, volume token removed and lowercased
-    assert!(n.starts_with("amazing and strange"));
+    // Expect: "The" removed, "&" -> "and", "Vol." removed, "(light novel)" kept, lowercased
+    assert_eq!(n, "amazing and strange 2 light novel");
 }
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@mlm_parse/tests/parse_tests.rs` around lines 11 - 16, The test
test_normalize_title_variants uses starts_with which won't detect regressions in
format/volume cleanup; change the assertion to check the full normalized output
from normalize_title(s) using an exact equality (e.g., replace
assert!(n.starts_with("amazing and strange")) with an assert_eq! comparing n to
the expected full normalized string) so the test verifies removal of articles,
"&" -> "and", removal of "Vol. 2" and "(light novel)", and lowercasing.
server/src/linker/folder.rs (5)

34-37: ⚠️ Potential issue | 🟠 Major

Skip non-directory entries before descending into rip contents.

A regular file under rip_dir will cause read_dir(&folder.path()) at line 56 to fail and abort the entire linking run. This issue was flagged in a previous review but remains unaddressed.

Proposed fix
     let mut entries = read_dir(&l.rip_dir).await?;
     while let Some(folder) = entries.next_entry().await? {
+        if !folder.file_type().await?.is_dir() {
+            trace!(
+                "Skipping non-directory rip entry: {}",
+                folder.path().display()
+            );
+            continue;
+        }
         link_folder(&config, library, &db, folder).await?;
     }
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@server/src/linker/folder.rs` around lines 34 - 37, The loop that iterates
entries from read_dir(&l.rip_dir) should skip non-directory entries before
calling link_folder, because calling read_dir(&folder.path()) later fails on
files; update the code around entries.next_entry().await? to check the
DirEntry's type (e.g., call folder.file_type().await? or metadata and test
.is_dir() or use folder.path().is_dir()) and continue for non-directories, then
only call link_folder(&config, library, &db, folder).await? for directory
entries.

415-436: ⚠️ Potential issue | 🟠 Major

Ebook files are never linked into the library path.

The linking loop only processes audio_files. The ebook_files are selected (line 392-396) but never actually linked to the library directory. This was flagged previously but remains unaddressed.

Proposed fix
-        for file in audio_files {
+        for file in audio_files.into_iter().chain(ebook_files.into_iter()) {
             let file_path = PathBuf::from(&file.file_name());
             let library_path = dir.join(&file_path);
             library_files.push(file_path.clone());
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@server/src/linker/folder.rs` around lines 415 - 436, The loop that performs
linking only iterates over audio_files, so ebook_files are never linked; update
folder linking logic to include ebook_files by either iterating audio_files and
ebook_files together or running the same linking loop for ebook_files, ensuring
you push each ebook PathBuf into library_files and apply the same
LibraryLinkMethod handling (calls to hard_link, copy, symlink) using the same
variables/logic (e.g., create file_path from file.file_name(), compute
library_path = dir.join(&file_path), use download_path = file.path(), and branch
on library.options().method) so ebooks are linked into the library path the same
way audio files are.
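The link-method dispatch the prompt describes can be sketched as follows. `LinkMethod` and `link_file` are hypothetical names; the real linker uses async tokio I/O and also supports symlinks, but the branching on method and the shared path computation work the same way for audio and ebook files alike.

```rust
use std::fs;
use std::io;
use std::path::Path;

// Hypothetical stand-in for the library's configured link method.
enum LinkMethod {
    HardLink,
    Copy,
}

// Place one downloaded file into the library directory using the
// configured method, creating parent directories as needed.
fn link_file(method: &LinkMethod, download_path: &Path, library_path: &Path) -> io::Result<()> {
    if let Some(parent) = library_path.parent() {
        fs::create_dir_all(parent)?;
    }
    match method {
        LinkMethod::HardLink => fs::hard_link(download_path, library_path),
        LinkMethod::Copy => fs::copy(download_path, library_path).map(|_| ()),
    }
}

fn main() -> io::Result<()> {
    let dir = std::env::temp_dir().join("linker_sketch");
    fs::create_dir_all(&dir)?;
    let src = dir.join("book.m4b");
    fs::write(&src, b"audio bytes")?;
    let dst = dir.join("library").join("book.m4b");
    let _ = fs::remove_file(&dst); // hard_link fails if the target exists
    link_file(&LinkMethod::HardLink, &src, &dst)?;
    assert_eq!(fs::read(&dst)?, b"audio bytes");
    Ok(())
}
```

Running the same loop over `audio_files.into_iter().chain(ebook_files)` is what makes the ebook fix a one-line change: the per-file logic is method-agnostic.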

222-222: ⚠️ Potential issue | 🟡 Minor

num_files undercounts when ebook files are present.

size and filetypes (computed via folder_file_stats) include ebook files, but num_files only counts audio files. This inconsistency was flagged previously but remains unaddressed.

Proposed fix
-        num_files: audio_files.len() as u64,
+        num_files: (audio_files.len() + ebook_files.len()) as u64,
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@server/src/linker/folder.rs` at line 222, The num_files field is currently
set to audio_files.len() as u64 which undercounts when ebook files exist; change
it to use the total file count returned by folder_file_stats (e.g., the
stats.total_files or stats.num_files value produced by folder_file_stats)
instead of audio_files.len(), converting that total to u64 so num_files
consistently reflects all files counted by folder_file_stats (ensure you
reference the folder_file_stats result variable, e.g., stats.total_files as
u64).

290-290: ⚠️ Potential issue | 🟡 Minor

Same num_files undercount issue in build_nextory_torrent.

Proposed fix
-        num_files: audio_files.len() as u64,
+        num_files: (audio_files.len() + ebook_files.len()) as u64,
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@server/src/linker/folder.rs` at line 290, In build_nextory_torrent the
num_files field is set from audio_files.len() which undercounts because the
torrent includes more files than just audio; change num_files to reflect the
actual files added (e.g., use the collection that holds all torrent entries such
as files.len(), torrent_files.len(), or the variable that accumulates added
files) instead of audio_files.len(), and update any casting to u64 accordingly
so num_files accurately represents the total files included in the torrent.

63-80: ⚠️ Potential issue | 🟡 Minor

Extension comparison is case-sensitive and doesn't handle dotted prefixes.

The comparison at lines 70 and 78 uses raw ext.to_string_lossy() without normalization. If config.audio_types contains "MP3" or ".mp3", it won't match a file with extension "mp3". The related select_format in common.rs handles this, but this code doesn't.

Suggested fix
     while let Some(entry) = entries.next_entry().await? {
-        match entry.path().extension() {
-            Some(ext) if ext == "json" => metadata_files.push(entry),
+        let ext = entry
+            .path()
+            .extension()
+            .map(|e| e.to_string_lossy().to_lowercase());
+        match ext.as_deref() {
+            Some("json") => metadata_files.push(entry),
             Some(ext)
                 if config
                     .audio_types
                     .iter()
-                    .any(|e| e == &ext.to_string_lossy()) =>
+                    .any(|wanted| wanted.trim_start_matches('.').eq_ignore_ascii_case(ext)) =>
             {
                 audio_files.push(entry)
             }
             Some(ext)
                 if config
                     .ebook_types
                     .iter()
-                    .any(|e| e == &ext.to_string_lossy()) =>
+                    .any(|wanted| wanted.trim_start_matches('.').eq_ignore_ascii_case(ext)) =>
             {
                 ebook_files.push(entry)
             }
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@server/src/linker/folder.rs` around lines 63 - 80, The extension comparison
in folder.rs (inside the match on entry.path().extension()) is case-sensitive
and doesn't handle leading dots; normalize extensions before comparing by
converting ext.to_string_lossy() to lower-case and stripping any leading '.'
(e.g., let normalized_ext =
ext.to_string_lossy().trim_start_matches('.').to_lowercase()), then compare
normalized_ext against normalized entries from config.audio_types and
config.ebook_types (either normalize config lists once or normalize each config
entry on the fly) so audio_files and ebook_files matching works for values like
"MP3" or ".mp3" (see select_format in common.rs for normalization approach).
mlm_db/src/v04.rs (1)

294-301: 🧹 Nitpick | 🔵 Trivial

Clarify that this downgrade conversion is unreachable.

Per project learnings, database migrations only progress forward, so this v18 → v04 conversion path is never executed. The unwrap() on line 297 is safe in practice but lacks documentation. Consider adding a comment or using unimplemented!() for consistency with other downgrade conversions (e.g., line 289).

Suggested improvement
+// Note: This conversion is only used for type compatibility; MLM migrations
+// are forward-only, so this path is never executed in practice.
 impl From<v18::ListItemTorrent> for ListItemTorrent {
     fn from(t: v18::ListItemTorrent) -> Self {
         Self {
-            mam_id: t.mam_id.unwrap(),
+            // mam_id is always Some in v18 ListItemTorrent when downgrading
+            mam_id: t.mam_id.expect("downgrade path is never executed"),
             status: t.status,
             at: t.at,
         }
     }
 }

Based on learnings: "In the MLM codebase, migrations only progress forward (upgrades). From implementations that convert from newer versions to older versions may legitimately use unimplemented!() for fields that do not exist in older versions because those paths are never executed."

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@mlm_db/src/v04.rs` around lines 294 - 301, The From<v18::ListItemTorrent> for
ListItemTorrent conversion uses t.mam_id.unwrap() but this downgrade path is
unreachable because migrations only move forward; update the impl (the from
method for v18::ListItemTorrent) to reflect that by replacing the unwrap with an
explicit unimplemented!() (or at minimum add a clear comment above the impl
indicating the downgrade is unreachable) so it matches other downgrade
conversions (see the pattern used around the other downgrade impls) and removes
the implicit unsafe unwrap.
server/src/config.rs (1)

239-241: ⚠️ Potential issue | 🟠 Major

TorrentFilter.edition is always defaulted due to #[serde(skip)].

The #[serde(skip)] attribute means edition never receives deserialized values. However, server/src/web/pages/config.rs accesses self.filter.edition.categories, self.filter.edition.languages, etc., which will always be empty/default. This issue was flagged previously but remains with only a TODO comment.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@server/src/config.rs` around lines 239 - 241, TorrentFilter.edition is being
ignored during deserialization because of #[serde(skip)], causing
self.filter.edition.* to always be default; remove the #[serde(skip)] on the
edition field (and/or replace with an appropriate serde attribute like
#[serde(default)] if you need missing-field defaults) so EditionFilter is
actually deserialized into TorrentFilter, ensure EditionFilter implements
Deserialize and TorrentFilter derives Deserialize, and verify
server/src/web/pages/config.rs usages (self.filter.edition.categories,
languages, etc.) now receive real values.
server/src/linker/mod.rs (1)

1-9: 🧹 Nitpick | 🔵 Trivial

Consider re-exporting from the folder module for consistency.

The folder module declares public items (like link_folders_to_library() and metadata structs) but unlike common, duplicates, and torrent, nothing is re-exported from it. For consistency and discoverability, consider adding re-exports of commonly used public items.

♻️ Suggested addition
 pub use self::common::{copy, file_size, hard_link, library_dir, map_path, select_format, symlink};
 pub use self::duplicates::{find_matches, rank_torrents};
 pub use self::torrent::{find_library, refresh_mam_metadata, refresh_metadata_relink, relink};
+pub use self::folder::link_folders_to_library;
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@server/src/linker/mod.rs` around lines 1 - 9, The folder module exports
public items (e.g., link_folders_to_library, FolderMetadata, or other metadata
structs) but they are not re-exported from the linker crate root; add a pub use
self::folder::{link_folders_to_library, /* add metadata struct names like
FolderMetadata, FolderInfo, etc. */} to the top-level re-exports so consumers
can import these commonly used symbols directly from the linker module (locate
the folder module and the public items declared there and include their exact
identifiers in the pub use list).
server/templates/pages/torrent.html (2)

84-86: ⚠️ Potential issue | 🟡 Minor

Add type attribute to the button element.

Per HTMLHint, <button> elements should have an explicit type attribute. Since this is in a form, use type="submit".

🔧 Proposed fix
       <form method=post>
-        <button name=action value=relink>relink</button>
+        <button type="submit" name=action value=relink>relink</button>
       </form>
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@server/templates/pages/torrent.html` around lines 84 - 86, The button inside
the form that submits the relink action (button with name="action"
value="relink") is missing an explicit type attribute; update that <button>
element to include type="submit" so the form submission behavior is explicit and
satisfies HTMLHint (i.e., locate the form containing the relink button and add
type="submit" to the button element).

281-281: ⚠️ Potential issue | 🟡 Minor

Escape the > character in HTML output.

HTMLHint flags that special characters like > should be escaped. Use -&gt; in place of the raw -> arrow so the rendered text is unchanged.

🔧 Proposed fix
-        <li>{{ field.field }}: {{ field.from }} -> {{ field.to }}</li>
+        <li>{{ field.field }}: {{ field.from }} -&gt; {{ field.to }}</li>
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@server/templates/pages/torrent.html` at line 281, Replace the raw "->" arrow
in the template output with the escaped form "-&gt;" so the HTML doesn't
contain an unescaped ">" character; update the <li> rendering that uses the
template variables field.field, field.from and field.to (the line currently
emitting "{{ field.field }}: {{ field.from }} -> {{ field.to }}") to emit
"{{ field.field }}: {{ field.from }} -&gt; {{ field.to }}".
server/src/web/pages/torrent_edit.rs (1)

44-44: ⚠️ Potential issue | 🟠 Major

Remove the hard MaM dependency from manual edit POST.

context.mam()? can fail even though this path no longer uses MaM data (Line 110 passes None), which can block metadata edits when MaM is unavailable.

🔧 Proposed fix
-    let _mam = context.mam()?;
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@server/src/web/pages/torrent_edit.rs` at line 44, The manual edit POST
currently calls context.mam()? which may return Err and block edits even though
MaM data isn't used (the POST later passes None at the path around the manual
edit handling); remove the hard MaM dependency by eliminating the call to
context.mam()? in the manual edit POST path — either remove the let _mam =
context.mam()? assignment entirely or replace it with a non-fallible lookup
(e.g., context.mam().ok()) and ensure the POST uses None as intended; update the
function handling in torrent_edit.rs (the manual edit POST branch) so it no
longer propagates MaM errors.
server/src/cleaner.rs (1)

47-55: ⚠️ Potential issue | 🔴 Critical

Prevent panic when the batch is empty.

Line 47 only returns for len() == 1, but run_library_cleaner can pass an empty batch. Then Line 51 (batch.remove(0)) will panic. The rank_torrents function returns an empty vector when passed an empty input.

🐛 Proposed fix
-    if batch.len() == 1 {
+    if batch.len() <= 1 {
         return Ok(());
     };
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@server/src/cleaner.rs` around lines 47 - 55, The code can panic when batch is
empty because batch.remove(0) is called unguarded; update run_library_cleaner to
check for an empty batch and return early (e.g., if batch.is_empty() { return
Ok(()); }) before calling rank_torrents or, alternatively, after ranking verify
batch.is_empty() and return; ensure the logic around rank_torrents, the keep
variable (batch.remove(0)), and the subsequent loop over batch are only executed
when batch has at least one element to avoid remove(0) panics.
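A minimal sketch of the guarded shape (hypothetical types; the real batch holds torrents, not integers):

```rust
// Sketch: `<= 1` covers both the empty and single-item batches that the
// cleaner can produce, so `remove(0)` can never panic.
fn pick_duplicates(mut batch: Vec<u32>) -> Vec<u32> {
    if batch.len() <= 1 {
        return Vec::new();
    }
    let _keep = batch.remove(0); // highest-ranked entry is kept
    batch // the remainder are duplicate candidates
}
```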
server/tests/linker_torrent_test.rs (1)

315-322: ⚠️ Potential issue | 🟠 Major

Do not ignore linker errors in this test.

This call discards failures with let _ =, so the test can pass while the linker path is broken.

✅ Proposed fix
-    let _ = link_torrents_to_library(
+    link_torrents_to_library(
         config.clone(),
         db.db.clone(),
         (qbit_config, &mock_qbit),
         &mock_mam,
     )
-    .await;
+    .await?;
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@server/tests/linker_torrent_test.rs` around lines 315 - 322, The test is
hiding failures by assigning the awaited result of link_torrents_to_library to
`_`; instead capture the returned Result from link_torrents_to_library (the call
in the test that currently uses `let _ = link_torrents_to_library(...).await;`)
and either assert it succeeded (e.g., assert!(result.is_ok()) or unwrap/expect
the Result) or propagate the error from the test by returning a Result and using
`?`; this ensures link_torrents_to_library's failures fail the test rather than
being ignored.
mlm_db/src/v10.rs (1)

384-392: 🧹 Nitpick | 🔵 Trivial

Make lossy source downgrade explicit.

The wildcard arm silently maps every non-Mam/Manual v18 source to Manual. While, per project learnings, these downgrade paths are never executed, explicit variant handling would clarify provenance and catch future enum additions at compile time.

♻️ Suggested improvement
 impl From<v18::MetadataSource> for MetadataSource {
     fn from(t: v18::MetadataSource) -> Self {
         match t {
             v18::MetadataSource::Mam => Self::Mam,
             v18::MetadataSource::Manual => Self::Manual,
-            _ => Self::Manual,
+            // Newer variants map to Manual in older schema (downgrade path, never executed)
+            v18::MetadataSource::File => Self::Manual,
+            v18::MetadataSource::Match => Self::Manual,
         }
     }
 }
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@mlm_db/src/v10.rs` around lines 384 - 392, The From<v18::MetadataSource> for
MetadataSource impl currently uses a wildcard arm that silently maps any
non-Mam/Manual v18 variants to Manual; replace the wildcard with explicit match
arms for every v18::MetadataSource variant so the conversion is exhaustive (no
`_` arm)—this forces compile-time updates if v18 adds variants; update the match
in the impl From<v18::MetadataSource> for MetadataSource (function converting
v18::MetadataSource -> MetadataSource) to list each v18::MetadataSource variant
and map them intentionally (or use unreachable!()/panic!() for truly impossible
cases) instead of using `_`.
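A toy illustration of the exhaustive-match pattern (illustrative enums, not the v18 types):

```rust
// Sketch: listing every variant with no `_` arm makes a future enum
// addition a compile error instead of a silent fallthrough to Manual.
#[derive(Debug, PartialEq)]
enum NewSource { Mam, Manual, File, Match }

#[derive(Debug, PartialEq)]
enum OldSource { Mam, Manual }

fn downgrade(s: NewSource) -> OldSource {
    match s {
        NewSource::Mam => OldSource::Mam,
        NewSource::Manual => OldSource::Manual,
        // newer variants have no old-schema equivalent; map them on purpose
        NewSource::File | NewSource::Match => OldSource::Manual,
    }
}
```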
server/tests/linker_folder_test.rs (1)

354-354: ⚠️ Potential issue | 🔴 Critical

Remove the extra &Events::new() argument at line 354.

link_folders_to_library accepts exactly two arguments: config and db. All other calls in this file (lines 19, 65, 88, 113, 138, 189, 241, 286, 378, 418, 498) correctly pass only two arguments. This will fail to compile.

🐛 Proposed fix
-    link_folders_to_library(config.clone(), test_db.db.clone(), &Events::new()).await?;
+    link_folders_to_library(config.clone(), test_db.db.clone()).await?;
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@server/tests/linker_folder_test.rs` at line 354, The call to
link_folders_to_library is passing an extra third argument (&Events::new())
which causes a compile error because link_folders_to_library only accepts two
parameters (config and db); remove the trailing &Events::new() so the invocation
becomes link_folders_to_library(config.clone(), test_db.db.clone()). Ensure
other calls remain unchanged and use the same two-argument form.
mlm_mam/src/search.rs (1)

422-432: ⚠️ Potential issue | 🟡 Minor

Prevent empty ISBN/ASIN values from entering the IDs map.

parse_isbn can return empty strings (e.g., whitespace-only input or ASIN: with no value after), and those get inserted as IDs. Filter out empty post-trim values before returning.

🔧 Proposed fix
 fn parse_isbn(mam_torrent: &MaMTorrent) -> (Option<&str>, Option<&str>) {
-    let isbn_raw: &str = mam_torrent.isbn.as_deref().unwrap_or("");
-    let isbn = if isbn_raw.is_empty() || isbn_raw.starts_with("ASIN:") {
-        None
-    } else {
-        Some(isbn_raw.trim())
-    };
-    let asin = isbn_raw.strip_prefix("ASIN:").map(|s| s.trim());
-
-    (isbn, asin)
+    let isbn_raw = mam_torrent.isbn.as_deref().map(str::trim).unwrap_or("");
+    if let Some(rest) = isbn_raw.strip_prefix("ASIN:") {
+        let asin = rest.trim();
+        return (None, (!asin.is_empty()).then_some(asin));
+    }
+    let isbn = (!isbn_raw.is_empty()).then_some(isbn_raw);
+    (isbn, None)
 }
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@mlm_mam/src/search.rs` around lines 422 - 432, The parse_isbn function can
return Some("") for whitespace-only ISBNs or Some("") for an "ASIN:" prefix with
no value; update parse_isbn (referencing function parse_isbn and field
MaMTorrent.isbn) to trim the extracted isbn_raw, then only return Some(trimmed)
when trimmed is non-empty, and likewise map the ASIN via
strip_prefix(...).map(|s| s.trim()) but convert to None if the trimmed ASIN is
empty; ensure both isbn and asin are None for empty/whitespace-only values so
empty strings are never inserted into the IDs map.
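The fixed logic can be exercised standalone; this sketch takes an `Option<&str>` directly instead of a `&MaMTorrent`:

```rust
// Sketch of the fixed parse: trim first, then only return Some for
// non-empty values, so "" and "ASIN:" never enter the IDs map.
fn parse_isbn(raw: Option<&str>) -> (Option<&str>, Option<&str>) {
    let isbn_raw = raw.map(str::trim).unwrap_or("");
    if let Some(rest) = isbn_raw.strip_prefix("ASIN:") {
        let asin = rest.trim();
        return (None, (!asin.is_empty()).then_some(asin));
    }
    ((!isbn_raw.is_empty()).then_some(isbn_raw), None)
}
```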
server/src/web/pages/torrent.rs (1)

579-584: ⚠️ Potential issue | 🟡 Minor

Use mam_torrent.id directly for the local lookup.

The MaM result already carries the authoritative ID, but line 584 reparses it through meta.mam_id(). If as_meta() ever yields a missing or invalid ids::MAM, this page stops associating a known result with its local Torrent row.

💡 Proposed fix
         .filter(|t| Some(t.id) != meta.mam_id())
         .map(|mam_torrent| {
             let meta = mam_torrent.as_meta()?;
             let torrent = r
                 .get()
-                .secondary::<Torrent>(TorrentKey::mam_id, meta.mam_id())?;
+                .secondary::<Torrent>(TorrentKey::mam_id, mam_torrent.id)?;
             let selected_torrent = r.get().primary(mam_torrent.id)?;
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@server/src/web/pages/torrent.rs` around lines 579 - 584, The code re-derives
the MAM ID via meta.mam_id() when performing the local Torrent lookup, which can
fail if as_meta() yields an invalid/missing ids::MAM; instead use the
authoritative ID already present on mam_torrent (e.g. mam_torrent.id) when
calling r.get().secondary::<Torrent>(TorrentKey::mam_id, ...). Update the
closure using mam_torrent.as_meta() and replace the meta.mam_id() argument
passed to r.get().secondary with the direct mam_torrent.id (keeping the
as_meta() call only where its result is actually needed).
server/src/linker/duplicates.rs (1)

174-203: 🧹 Nitpick | 🔵 Trivial

Consider using tempfile crate for test directory management.

The current approach with temp_dir().join(format!(...)) works but doesn't guarantee cleanup on panic. The tempfile crate provides TempDir that auto-cleans.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@server/src/linker/duplicates.rs` around lines 174 - 203, Replace the manual
temp dir handling in the test_find_matches test with tempfile::TempDir: create a
TempDir at the start, use tempdir.path().join("test.db") for db_path, remove the
explicit fs::remove_dir_all calls and let TempDir auto-cleanup when it drops;
keep the rest of the test (creating the DB, inserts, find_matches assertions,
and drop(db)) the same and reference TempDir in place of tmp_dir.
mlm_db/src/impls/meta.rs (1)

42-47: 🧹 Nitpick | 🔵 Trivial

Consider a more readable diff format for ids.

Using {:?} debug format for the ids map produces output like {"mam": "123", "isbn": "456"} which works but may be less readable in diffs compared to a formatted key-value list.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@mlm_db/src/impls/meta.rs` around lines 42 - 47, The diff currently uses debug
formatting for the ids map (self.ids / other.ids) when building TorrentMetaDiff
with field TorrentMetaField::Ids; replace that with a clearer, deterministic
formatter: convert the map to a sorted list of "key=value" pairs (or a pretty
JSON string) and join them (e.g., comma- or newline-separated) before assigning
to from/to so diffs are human-readable and stable across runs; implement this
formatting as a small helper (e.g., format_ids_map) and call it when
constructing the TorrentMetaDiff for TorrentMetaField::Ids.
server/src/lists/mod.rs (1)

171-177: ⚠️ Potential issue | 🟠 Major

Fix false "same torrent" detection when mam_id is None.

The early return triggers for unrelated torrents when both mam_id values are None, leaving stale torrent_id in place. Since torrent_id is now being set (line 176), use it for identity comparison.

💡 Suggested fix
-        if field.status == TorrentStatus::Existing && field.mam_id == torrent.mam_id {
+        let same_torrent = field
+            .torrent_id
+            .as_ref()
+            .is_some_and(|id| id == &torrent.id)
+            || (field.torrent_id.is_none()
+                && field.mam_id.is_some()
+                && field.mam_id == torrent.mam_id);
+        if field.status == TorrentStatus::Existing && same_torrent {
             return false;
         }
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@server/src/lists/mod.rs` around lines 171 - 177, The early-return currently
treats two None mam_id values as the same torrent; update the identity check in
the block that inspects field.status == TorrentStatus::Existing so it only
returns false when the actual identity matches: if both mam_id are Some and
equal, or if the existing field.torrent_id equals torrent.id (use Option
equality on torrent_id) then return false; otherwise proceed to call
field.replace(ListItemTorrent { torrent_id: Some(torrent.id.clone()), mam_id:
torrent.mam_id, ... }). Ensure you reference field.mam_id, torrent.mam_id,
field.torrent_id and torrent.id when changing the condition.
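A standalone sketch of the proposed identity predicate (simplified types; `&str` stands in for the real ID types):

```rust
// Sketch: two None mam_ids must not count as the same torrent; prefer
// torrent_id for identity, fall back to mam_id only when both are Some.
fn same_torrent(
    field_torrent_id: Option<&str>,
    field_mam_id: Option<u64>,
    torrent_id: &str,
    torrent_mam_id: Option<u64>,
) -> bool {
    field_torrent_id.is_some_and(|id| id == torrent_id)
        || (field_torrent_id.is_none()
            && field_mam_id.is_some()
            && field_mam_id == torrent_mam_id)
}
```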
server/src/autograbber.rs (3)

784-794: ⚠️ Potential issue | 🟡 Minor

Keep mam_id on Updated events when the MaM payload is absent.

Manual/folder updates can still carry a stored MaM ID in meta, but this currently emits None and weakens mam_id-indexed event history.

Proposed fix
-        let mam_id = mam_torrent.map(|m| m.id);
+        let mam_id = mam_torrent.map(|m| m.id).or_else(|| meta.mam_id());
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@server/src/autograbber.rs` around lines 784 - 794, The Updated event
currently sets mam_id from mam_torrent.map(|m| m.id) which becomes None when the
MaM payload is absent, losing the stored MaM ID from meta; change the logic that
computes mam_id (used in the call to write_event and Event::new for
EventType::Updated) to fall back to the stored meta MaM ID when mam_torrent is
None (e.g., use meta.mam_id or similar field), so the Event retains the original
mam_id even for manual/folder updates while keeping existing behavior when
mam_torrent is present.

734-737: ⚠️ Potential issue | 🟡 Minor

Skip blank owner names when backfilling linker.

Persisting Some("") blocks future backfills because later checks only test is_none().

Proposed fix
-    if linker_is_owner && torrent.linker.is_none()
-        && let Some(mam_torrent) = mam_torrent
+    if linker_is_owner
+        && torrent.linker.is_none()
+        && let Some(mam_torrent) = mam_torrent
+        && !mam_torrent.owner_name.trim().is_empty()
     {
         torrent.linker = Some(mam_torrent.owner_name.clone());
     }
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@server/src/autograbber.rs` around lines 734 - 737, The backfill assigns
Some(mam_torrent.owner_name.clone()) even when owner_name is an empty string,
causing future backfills to skip since they only check torrent.linker.is_none();
update the condition in the block that uses linker_is_owner and mam_torrent so
you only set torrent.linker when mam_torrent.owner_name is non-empty (e.g.,
after trimming) — reference the variables/functionality: linker_is_owner,
torrent.linker, and mam_torrent.owner_name to locate and modify the check before
assigning.

685-689: ⚠️ Potential issue | 🟠 Major

Don't overwrite fresh IDs with stored ones.

This merge still lets torrent.meta.ids win on duplicate keys, so corrected IDs from the current source get reverted on every refresh.

Proposed fix
-    meta.ids.extend(torrent.meta.ids.clone());
+    for (key, value) in torrent.meta.ids.clone() {
+        meta.ids.entry(key).or_insert(value);
+    }
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@server/src/autograbber.rs` around lines 685 - 689, The merge currently does
meta.ids.extend(torrent.meta.ids.clone()), which lets stored/old IDs overwrite
corrected/current ones; change the merge to only insert IDs from
torrent.meta.ids when meta.ids does not already contain that key (e.g., iterate
torrent.meta.ids and use meta.ids.entry(key).or_insert(value) or equivalent),
leaving existing meta.ids intact; keep the existing behavior for meta.tags and
meta.description (the description fallback check can remain) but ensure you
update the merge site referencing meta.ids and torrent.meta.ids to prevent
overwriting fresh IDs.
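The merge direction can be sketched with plain maps (a `BTreeMap` stands in for whatever map type `meta.ids` uses):

```rust
use std::collections::BTreeMap;

// Sketch of the fixed merge: fresh IDs win, stored IDs only fill gaps.
// `fresh` plays the role of `meta.ids`, `stored` of `torrent.meta.ids`.
fn merge_ids(
    mut fresh: BTreeMap<String, String>,
    stored: BTreeMap<String, String>,
) -> BTreeMap<String, String> {
    for (key, value) in stored {
        // entry().or_insert() leaves existing (fresh) keys untouched
        fresh.entry(key).or_insert(value);
    }
    fresh
}
```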
server/src/linker/torrent.rs (4)

129-136: ⚠️ Potential issue | 🟠 Major

Apply selected-format filtering case-insensitively.

server/src/linker/common.rs::select_format() lowercases filenames before choosing .m4b/.epub, but this predicate still compares against file.name verbatim. TRACK.M4B can therefore select a format and still produce zero link plans.

Proposed fix
     let mut plans = vec![];
     for file in files {
+        let name_lower = file.name.to_lowercase();
         if !(selected_audio_format
             .as_ref()
-            .is_some_and(|ext| file.name.ends_with(*ext))
+            .is_some_and(|ext| name_lower.ends_with(ext))
             || selected_ebook_format
                 .as_ref()
-                .is_some_and(|ext| file.name.ends_with(*ext)))
+                .is_some_and(|ext| name_lower.ends_with(ext)))
         {
             continue;
         }
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@server/src/linker/torrent.rs` around lines 129 - 136, The filter in the for
loop that checks selected_audio_format and selected_ebook_format is performing
case-sensitive comparisons against file.name, so filenames like "TRACK.M4B" will
be missed; update the predicate used in the loop (the block referencing
selected_audio_format, selected_ebook_format and file.name) to compare against a
lowercased file name (e.g., compute let name_lc = file.name.to_lowercase() once)
or otherwise perform a case-insensitive ends-with check, so it matches the same
lowercase logic used by select_format().

191-194: ⚠️ Potential issue | 🟡 Minor

Preserve mam_id on RemovedFromTracker events.

The torrent row already knows its MaM ID; emitting None leaves mam_id-indexed event history incomplete.

Proposed fix
         events.push(Event::new(
             Some(qbit_torrent.hash.clone()),
-            None,
+            torrent.meta.mam_id(),
             EventType::RemovedFromTracker,
         ));
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@server/src/linker/torrent.rs` around lines 191 - 194, The RemovedFromTracker
event is being emitted with mam_id set to None, losing the torrent's mam_id;
when constructing the event with Event::new for EventType::RemovedFromTracker
replace the None mam_id with the torrent's mam_id (e.g., use qbit_torrent.mam_id
or the appropriate field on the torrent row, cloning or converting as needed) so
the event retains the mam_id for indexable history; adjust any option types if
necessary to match Event::new's signature.

76-77: ⚠️ Potential issue | 🟠 Major

Make disc-folder matching case-insensitive.

Lowercase cd1 / disc 2 folders are still flattened to the basename, which can collapse multi-disc releases into the same target path.

Proposed fix
 pub static DISK_PATTERN: Lazy<Regex> =
-    Lazy::new(|| Regex::new(r"(?:CD|Disc|Disk)\s*(\d+)").unwrap());
+    Lazy::new(|| Regex::new(r"(?i)(?:CD|Disc|Disk)\s*(\d+)").unwrap());
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@server/src/linker/torrent.rs` around lines 76 - 77, The DISK_PATTERN static
is case-sensitive and misses lowercase folder names like "cd1" or "disc 2";
update the Regex used in the Lazy static (DISK_PATTERN) to be case-insensitive
(e.g., add the inline flag (?i) to the pattern or construct it with RegexBuilder
and case_insensitive(true)) so folder names like "cd1", "Disc 2", and "disk 1"
all match and prevent incorrect flattening of multi-disc releases.
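For comparison, the same matching can be sketched without a regex at all (std-only, illustrative helper name; the real code uses `(?i)` on DISK_PATTERN):

```rust
// Sketch: case-insensitive detection of a disc-folder name like "cd1",
// "Disc 2", or "DISK 3". Only checks a leading prefix, unlike the regex,
// which matches anywhere in the name.
fn disc_number(name: &str) -> Option<u32> {
    let lower = name.to_ascii_lowercase();
    for prefix in ["cd", "disc", "disk"] {
        if let Some(rest) = lower.strip_prefix(prefix) {
            let digits = rest.trim_start();
            if !digits.is_empty() && digits.chars().all(|c| c.is_ascii_digit()) {
                return digits.parse().ok();
            }
        }
    }
    None
}
```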

87-90: ⚠️ Potential issue | 🟠 Major

Avoid panicking on empty torrent paths.

This expect() still crashes the linker task on malformed content paths.

Proposed fix
-    let file_name_component = components
-        .pop()
-        .expect("torrent file path should not be empty");
+    let Some(file_name_component) = components.pop() else {
+        return PathBuf::from(torrent_file_path);
+    };
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@server/src/linker/torrent.rs` around lines 87 - 90, The code currently uses
components.pop().expect(...) which panics for empty torrent_path; replace the
expect call with a safe check (e.g., match or pop().ok_or_else) and return or
propagate a controlled error instead of panicking. Locate the code that builds
components from torrent_path.components() and the use of file_name_component
(the expect call) and change it to handle the None case (log and return Err or
skip the malformed entry) so the linker task doesn't crash on malformed content
paths.
server/src/linker/common.rs (1)

58-66: ⚠️ Potential issue | 🟠 Major

Guard dir.file_name() before formatting.

Both branches still call dir.file_name().unwrap(). If sanitization leaves dir without a terminal component, the linker will panic instead of skipping the malformed item cleanly.

Proposed fix
     if let Some((edition, _)) = &meta.edition {
+        let Some(base_name) = dir
+            .file_name()
+            .map(|name| name.to_string_lossy().into_owned())
+        else {
+            return None;
+        };
         dir.set_file_name(
             sanitize_filename::sanitize(format!(
                 "{}, {}",
-                dir.file_name().unwrap().to_string_lossy(),
+                base_name,
                 edition
             ))
             .to_string(),
         );
     }
     if let Some(narrator) = meta.narrators.first()
         && !exclude_narrator_in_library_dir
     {
+        let Some(base_name) = dir
+            .file_name()
+            .map(|name| name.to_string_lossy().into_owned())
+        else {
+            return None;
+        };
         dir.set_file_name(
             sanitize_filename::sanitize(format!(
                 "{} {{{}}}",
-                dir.file_name().unwrap().to_string_lossy(),
+                base_name,
                 narrator
             ))
             .to_string(),
         );
     }

Also applies to: 68-78

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@server/src/linker/common.rs` around lines 58 - 66, The code calls
dir.file_name().unwrap() in both branches when building a sanitized name (see
the dir.set_file_name calls and meta.edition usage); guard against
dir.file_name() being None before formatting and avoid unwraps: check
dir.file_name() once, and if it's None skip the malformed item (return/continue)
or use a safe fallback, then build the sanitized string from the valid OsStr and
call dir.set_file_name; apply the same guarded logic to the other branch that
handles edition formatting so neither branch can panic on a missing terminal
component.
server/src/main.rs (2)

616-618: ⚠️ Potential issue | 🟡 Minor

Fix the folder-linker context label.

This branch still reports link_torrents_to_library, which misclassifies folder-linker failures in logs and stats.

Proposed fix
-                let result = link_folders_to_library(config.clone(), db.clone())
-                    .await
-                    .context("link_torrents_to_library");
+                let result = link_folders_to_library(config.clone(), db.clone())
+                    .await
+                    .context("link_folders_to_library");
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@server/src/main.rs` around lines 616 - 618, The context label for the
folder-linking call is incorrect: when awaiting link_folders_to_library(...) you
still call .context("link_torrents_to_library"), so failures are misclassified;
update the .context call to use the correct label (e.g.,
"link_folders_to_library") so the result from link_folders_to_library is logged
and classified properly instead of as link_torrents_to_library.

175-194: ⚠️ Potential issue | 🟠 Major

Don’t drain edition out of the live config.

These mem::take calls still clear the original *.edition fields after startup. server/src/config_impl.rs reads grab.edition in GoodreadsList::allow_audio / allow_ebook, so those helpers start seeing defaults instead of the configured filters.

Proposed fix
-        autograb.filter.edition = mem::take(&mut autograb.edition);
+        autograb.filter.edition = autograb.edition.clone();
...
-        snatchlist.filter.edition = mem::take(&mut snatchlist.edition);
+        snatchlist.filter.edition = snatchlist.edition.clone();
...
-            grab.filter.edition = mem::take(&mut grab.edition);
+            grab.filter.edition = grab.edition.clone();
...
-            grab.filter.edition = mem::take(&mut grab.edition);
+            grab.filter.edition = grab.edition.clone();
...
-        tag.filter.edition = mem::take(&mut tag.edition);
+        tag.filter.edition = tag.edition.clone();
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@server/src/main.rs` around lines 175 - 194, The loops are draining the live
config by using mem::take (e.g. autograb.filter.edition = mem::take(&mut
autograb.edition)), which clears the original *.edition fields that are later
read by helpers like GoodreadsList::allow_audio / allow_ebook in config_impl.rs;
replace the mem::take calls in all loops over config.autograbs,
config.snatchlist, config.goodreads_lists (grab.filter.edition),
config.notion_lists (grab.filter.edition) and config.tags with non-destructive
copies (e.g. assign filter.edition = autograb.edition.clone() or otherwise
borrow/clone the value) so the original edition fields remain intact.
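The difference the fix relies on can be shown in isolation: `std::mem::take` moves the value out and leaves `Default::default()` behind, while `clone()` leaves the source readable. A minimal std-only sketch (plain `Vec<String>` stand-ins, not the project's config types):

```rust
use std::mem;

fn main() {
    // mem::take drains the source: later readers of `live_field` see an empty Vec,
    // which is exactly how the live config loses its configured editions.
    let mut live_field = vec!["audiobook".to_string()];
    let taken = mem::take(&mut live_field);
    assert_eq!(taken.len(), 1);
    assert!(live_field.is_empty()); // drained

    // clone() keeps the original intact, so helpers reading it later still work.
    let mut live_field2 = vec!["audiobook".to_string()];
    let copied = live_field2.clone();
    assert_eq!(copied.len(), 1);
    assert_eq!(live_field2.len(), 1); // still readable
    println!("ok");
}
```

The cost of the extra clone at startup is negligible next to silently corrupting the live config.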
server/src/config_impl.rs (1)

377-393: ⚠️ Potential issue | 🟠 Major

Respect explicit media_type restrictions in allow_audio / allow_ebook.

These helpers can still return true when a grab explicitly excludes the format via edition.media_type.

Proposed fix
     pub fn allow_audio(&self) -> bool {
         self.grab
             .iter()
-            .any(|g| match g.edition.categories.audio.as_ref() {
-                None => true,
-                Some(c) => !c.is_empty(),
+            .any(|g| {
+                let media_ok = g.edition.media_type.is_empty()
+                    || g.edition.media_type.contains(&MediaType::Audiobook);
+                media_ok && g.edition.categories.audio.as_ref().is_none_or(|c| !c.is_empty())
             })
     }

     pub fn allow_ebook(&self) -> bool {
         self.grab
             .iter()
-            .any(|g| match g.edition.categories.ebook.as_ref() {
-                None => true,
-                Some(c) => !c.is_empty(),
+            .any(|g| {
+                let media_ok = g.edition.media_type.is_empty()
+                    || g.edition.media_type.contains(&MediaType::Ebook);
+                media_ok && g.edition.categories.ebook.as_ref().is_none_or(|c| !c.is_empty())
             })
     }
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@server/src/config_impl.rs` around lines 377 - 393, The allow_audio and
allow_ebook helpers currently ignore explicit edition.media_type restrictions
and thus can return true even when a grab excludes that format; update both
methods (allow_audio and allow_ebook) to first check g.edition.media_type (or
its equivalent) and only consider the edition's categories if media_type is None
or includes the target type ("audio" / "ebook"); if media_type is present and
does not include the target type, treat that grab as not allowing it. Ensure you
reference g.edition.media_type alongside g.edition.categories.audio and
g.edition.categories.ebook in the closure used by self.grab.iter().any(...) so
grabs that explicitly exclude the format are ignored.
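The gating logic in the proposed fix can be sketched standalone (hypothetical `MediaType` and `Grab` stand-ins, not the project's real types):

```rust
// Hypothetical stand-ins for the config types referenced in the finding.
#[derive(PartialEq)]
enum MediaType {
    Audiobook,
    Ebook,
}

struct Grab {
    media_type: Vec<MediaType>,            // empty = no restriction
    audio_categories: Option<Vec<String>>, // None = all categories allowed
}

fn allows_audio(grab: &Grab) -> bool {
    // An empty media_type list means "no restriction"; otherwise the
    // target format must be listed explicitly.
    let media_ok = grab.media_type.is_empty()
        || grab.media_type.contains(&MediaType::Audiobook);
    // None means "all categories"; Some(empty) means "none allowed".
    media_ok && grab.audio_categories.as_ref().map_or(true, |c| !c.is_empty())
}

fn main() {
    // A grab restricted to ebooks must not allow audio,
    // even if audio categories happen to be configured.
    let ebook_only = Grab {
        media_type: vec![MediaType::Ebook],
        audio_categories: Some(vec!["sci-fi".into()]),
    };
    assert!(!allows_audio(&ebook_only));

    // An unrestricted grab allows audio.
    let open = Grab { media_type: vec![], audio_categories: None };
    assert!(allows_audio(&open));
    println!("ok");
}
```

`allow_ebook` is the mirror image with `MediaType::Ebook` and the ebook category list.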

Comment on lines +635 to +636
"action" | "action adventure" => Some(Category::ActionAdventure),
"crime" | "true crime" => Some(Category::Crime),

⚠️ Potential issue | 🟡 Minor

true crime incorrectly maps to Category::Crime instead of Category::TrueCrime.

The from_legacy_label function maps "true crime" to Category::Crime, but TrueCrime is a distinct category (see Line 582 in as_str() and the handling in from_old_category at Line 170/224).

💡 Proposed fix
-            "action" | "action adventure" => Some(Category::ActionAdventure),
-            "crime" | "true crime" => Some(Category::Crime),
+            "action" | "action adventure" => Some(Category::ActionAdventure),
+            "crime" => Some(Category::Crime),
+            "true crime" => Some(Category::TrueCrime),
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@mlm_db/src/impls/categories.rs` around lines 635 - 636, The mapping in
from_legacy_label incorrectly maps the legacy string "true crime" to
Category::Crime; update from_legacy_label to return Some(Category::TrueCrime)
for the "true crime" branch (and ensure other branches like "crime" still map to
Category::Crime). Locate the match arms in from_legacy_label and change the arm
that currently lists "true crime" to produce Category::TrueCrime, keeping
consistency with as_str() and from_old_category handling.

Comment on lines +443 to +470
impl From<v18::TorrentMeta> for TorrentMeta {
fn from(t: v18::TorrentMeta) -> Self {
let mam_id = t.ids.get(ids::MAM).and_then(|id| id.parse().ok()).unwrap();

Self {
mam_id,
vip_status: t.vip_status,
cat: t.cat,
media_type: t.media_type,
main_cat: t.main_cat,
categories: vec![],
language: t.language,
flags: t.flags,
filetypes: t.filetypes,
num_files: 0,
size: t.size,
title: t.title,
edition: t.edition,
authors: t.authors,
narrators: t.narrators,
series: t.series,
source: t.source.into(),
uploaded_at: t
.uploaded_at
.unwrap_or(v03::Timestamp::from(UtcDateTime::UNIX_EPOCH)),
}
}
}

⚠️ Potential issue | 🟠 Major

Potential panic if MAM ID is missing or malformed in v18 metadata.

Line 445 uses .unwrap() on the MAM ID lookup and parse, which will panic if ids::MAM is not present in t.ids or the value cannot be parsed as u64. Consider using .unwrap_or_default() or propagating an error.

Additionally, Line 453 sets categories: vec![], discarding all v18 categories during conversion. If v18 has categories, they are lost.

💡 Proposed fix for mam_id panic
     fn from(t: v18::TorrentMeta) -> Self {
-        let mam_id = t.ids.get(ids::MAM).and_then(|id| id.parse().ok()).unwrap();
+        let mam_id = t.ids.get(ids::MAM).and_then(|id| id.parse().ok()).unwrap_or(0);

         Self {
             mam_id,
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@mlm_db/src/v17.rs` around lines 443 - 470, The From implementation for
v18::TorrentMeta (fn from in impl From<v18::TorrentMeta> for TorrentMeta)
currently unwraps the MAM id and discards v18 categories; change the mam_id
extraction to handle missing/malformed ids safely (e.g., use .and_then(|id|
id.parse().ok()).unwrap_or_default() or map to a sensible fallback or Result if
you prefer propagation) instead of .unwrap(), and preserve v18 categories by
mapping t.categories (or the appropriate field in v18::TorrentMeta) into
TorrentMeta::categories (convert types if necessary) rather than setting
categories: vec![] so no category data is lost.
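The panic-free extraction can be sketched with a plain `HashMap` (a hypothetical stand-in for the `ids` map on `v18::TorrentMeta`; key name assumed for illustration):

```rust
use std::collections::HashMap;

// Missing key and unparsable value both fall back to 0 instead of panicking.
fn extract_mam_id(ids: &HashMap<String, String>) -> u64 {
    ids.get("mam")
        .and_then(|id| id.parse().ok())
        .unwrap_or(0)
}

fn main() {
    let mut ids = HashMap::new();
    ids.insert("mam".to_string(), "12345".to_string());
    assert_eq!(extract_mam_id(&ids), 12345);

    // Malformed value no longer panics.
    ids.insert("mam".to_string(), "not-a-number".to_string());
    assert_eq!(extract_mam_id(&ids), 0);

    // Missing key no longer panics either.
    assert_eq!(extract_mam_id(&HashMap::new()), 0);
    println!("ok");
}
```

Whether 0 is an acceptable sentinel or the conversion should be fallible (`TryFrom`) depends on what downstream code does with a zero `mam_id`.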

Comment on lines +18 to +30
#[test]
fn test_clean_name_initials_and_case() {
let mut name = "JRR TOLKIEN".to_string();
clean_name(&mut name).unwrap();
// JRR should remain as-is (algorithm doesn't split 3-letter initials); TOLKIEN should become Title case
assert!(name.starts_with("JRR"));
assert!(name.contains("Tolkien"));

let mut name2 = "john doe".to_string();
clean_name(&mut name2).unwrap();
// short lowercase words should be capitalized at start
assert!(name2.contains("John"));
}

🧹 Nitpick | 🔵 Trivial

Consider using exact assertions for clean_name tests.

Using starts_with and contains assertions can miss regressions. For test robustness, assert the complete expected output.

Suggested improvement
 #[test]
 fn test_clean_name_initials_and_case() {
     let mut name = "JRR TOLKIEN".to_string();
     clean_name(&mut name).unwrap();
-    // JRR should remain as-is (algorithm doesn't split 3-letter initials); TOLKIEN should become Title case
-    assert!(name.starts_with("JRR"));
-    assert!(name.contains("Tolkien"));
+    assert_eq!(name, "JRR Tolkien");

     let mut name2 = "john doe".to_string();
     clean_name(&mut name2).unwrap();
-    // short lowercase words should be capitalized at start
-    assert!(name2.contains("John"));
+    assert_eq!(name2, "John Doe");
 }
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@mlm_parse/tests/parse_tests.rs` around lines 18 - 30, Update the
test_clean_name_initials_and_case to assert exact outputs instead of using
starts_with/contains: after calling clean_name(&mut name) assert_eq!(name, "JRR
Tolkien") (or the exact expected string produced by clean_name for the first
case), and after clean_name(&mut name2) assert_eq!(name2, "John Doe"); reference
the test function name and the mutable variables name and name2 and keep the
existing unwrap() calls.

Comment on lines +11 to +20
pub fn find_matches(db: &Database<'_>, torrent: &Torrent) -> Result<Vec<Torrent>> {
let r = db.r_transaction()?;
let torrents = r.scan().secondary::<Torrent>(TorrentKey::title_search)?;
let matches = torrents
.all()?
.filter_map(|t| t.ok())
.filter(|t| t.id != torrent.id && t.matches(torrent))
.collect();
Ok(matches)
}

🧹 Nitpick | 🔵 Trivial

Consider more targeted secondary index scan.

find_matches scans the entire title_search secondary index with .all()?.filter_map(...).filter(...). For large databases, this could be inefficient. Consider using .start_with(torrent.title_search.as_str())? to narrow the scan to torrents with matching normalized titles, similar to the pattern used in search_library (see server/src/lists/mod.rs line 126).

♻️ Suggested improvement
 pub fn find_matches(db: &Database<'_>, torrent: &Torrent) -> Result<Vec<Torrent>> {
     let r = db.r_transaction()?;
-    let torrents = r.scan().secondary::<Torrent>(TorrentKey::title_search)?;
-    let matches = torrents
-        .all()?
+    let matches = r
+        .scan()
+        .secondary::<Torrent>(TorrentKey::title_search)?
+        .start_with(&torrent.title_search)?
         .filter_map(|t| t.ok())
         .filter(|t| t.id != torrent.id && t.matches(torrent))
         .collect();
     Ok(matches)
 }
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@server/src/linker/duplicates.rs` around lines 11 - 20, find_matches currently
scans the entire secondary index via torrents.all()? which is inefficient;
instead restrict the scan to records whose normalized title prefix matches the
incoming torrent by using the secondary index start_with API: call the same
secondary::<Torrent>(TorrentKey::title_search)? but replace .all()? with
.start_with(torrent.title_search.as_str())? and then keep the existing
filter_map and the remaining filter (t.id != torrent.id && t.matches(torrent)).
This mirrors the pattern used by search_library and will limit results to
candidates with the same normalized title before applying matches.
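The prefix-narrowing idea behind `start_with` can be illustrated with a plain `BTreeMap` (std-only sketch; `native_db`'s actual scan API differs):

```rust
use std::collections::BTreeMap;

// Return entries whose key starts with `prefix` without walking the whole map:
// seek to the prefix, then stop as soon as keys no longer share it.
fn prefix_scan<'a>(
    index: &'a BTreeMap<String, u32>,
    prefix: &str,
) -> Vec<(&'a String, &'a u32)> {
    index
        .range(prefix.to_string()..)
        .take_while(|(k, _)| k.starts_with(prefix))
        .collect()
}

fn main() {
    let mut index = BTreeMap::new();
    index.insert("dune".to_string(), 1);
    index.insert("dune messiah".to_string(), 2);
    index.insert("hyperion".to_string(), 3);

    // Only titles sharing the normalized prefix are visited.
    let hits = prefix_scan(&index, "dune");
    assert_eq!(hits.len(), 2);
    println!("ok");
}
```

The payoff is the same as in `search_library`: the per-candidate `matches` check then runs over a handful of rows instead of the whole index.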

Comment on lines +439 to +442
let file = File::create(dir.join("metadata.json"))?;
let mut writer = BufWriter::new(file);
serde_json::to_writer(&mut writer, &metadata)?;
writer.flush()?;

🧹 Nitpick | 🔵 Trivial

Synchronous file I/O in async context may block the runtime.

File::create and BufWriter::flush are blocking operations used inside an async function. Consider using tokio::fs::File and tokio::io::AsyncWriteExt for consistency with other async file operations in this function.

Suggested fix
-        let file = File::create(dir.join("metadata.json"))?;
-        let mut writer = BufWriter::new(file);
-        serde_json::to_writer(&mut writer, &metadata)?;
-        writer.flush()?;
+        let metadata_json = serde_json::to_vec_pretty(&metadata)?;
+        tokio::fs::write(dir.join("metadata.json"), metadata_json).await?;
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@server/src/linker/folder.rs` around lines 439 - 442, The code currently uses
blocking File::create, BufWriter::new and serde_json::to_writer plus
writer.flush() inside an async context; replace this with async APIs by
serializing metadata to a byte buffer with serde_json::to_vec(metadata) and then
create the file with tokio::fs::File::create(dir.join("metadata.json")).await,
write the bytes using tokio::io::AsyncWriteExt::write_all and call
tokio::io::AsyncWriteExt::flush (or drop the file) to ensure data is persisted;
update the code references for File::create, BufWriter::new,
serde_json::to_writer and writer.flush() to use tokio::fs::File::create,
serde_json::to_vec, and tokio::io::AsyncWriteExt::write_all/flush so the
operation is non-blocking.

<div></div>
<div>{{ self::time(torrent.created_at) }}</div>
<div><a href="https://www.myanonamouse.net/t/{{ torrent.meta.mam_id }}" target=_blank>MaM</a></div>
<div><a href="https://www.myanonamouse.net/t/{{ torrent.mam_id }}" target=_blank>MaM</a></div>

⚠️ Potential issue | 🟡 Minor

Add rel="noopener noreferrer" to external links opened with target=_blank.

The MaM link opens in a new tab but lacks the rel attribute to prevent reverse tabnabbing.

Suggested fix
-  <div><a href="https://www.myanonamouse.net/t/{{ torrent.mam_id }}" target=_blank>MaM</a></div>
+  <div><a href="https://www.myanonamouse.net/t/{{ torrent.mam_id }}" target=_blank rel="noopener noreferrer">MaM</a></div>
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@server/templates/pages/duplicate.html` at line 46, The external MaM anchor in
duplicate.html (the <a> tag with href "https://www.myanonamouse.net/t/{{
torrent.mam_id }}" and target=_blank) should include rel="noopener noreferrer"
to prevent reverse tabnabbing; update that anchor (and any other anchors using
target=_blank) to add rel="noopener noreferrer" so external tabs cannot gain
access to the opener.

Comment on lines +61 to 64
{% if let Some(mam_id) = duplicate_of.mam_id %}<a href="https://www.myanonamouse.net/t/{{ mam_id }}" target=_blank>MaM</a>{% endif %}
{% if let (Some(abs_url), Some(abs_id)) = (abs_url.as_ref(), duplicate_of.meta.ids.get(ids::ABS)) %}
<a href="{{ abs_url }}/audiobookshelf/item/{{ abs_id }}" target=_blank>ABS</a>
{% endif %}

⚠️ Potential issue | 🟡 Minor

Add rel="noopener noreferrer" to external links.

Both the MaM and ABS anchors should include the rel attribute to mitigate reverse tabnabbing when using target=_blank.

Suggested fix
-    {% if let Some(mam_id) = duplicate_of.mam_id %}<a href="https://www.myanonamouse.net/t/{{ mam_id }}" target=_blank>MaM</a>{% endif %}
+    {% if let Some(mam_id) = duplicate_of.mam_id %}<a href="https://www.myanonamouse.net/t/{{ mam_id }}" target=_blank rel="noopener noreferrer">MaM</a>{% endif %}
     {% if let (Some(abs_url), Some(abs_id)) = (abs_url.as_ref(), duplicate_of.meta.ids.get(ids::ABS)) %}
-      <a href="{{ abs_url }}/audiobookshelf/item/{{ abs_id }}" target=_blank>ABS</a>
+      <a href="{{ abs_url }}/audiobookshelf/item/{{ abs_id }}" target=_blank rel="noopener noreferrer">ABS</a>
     {% endif %}
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@server/templates/pages/duplicate.html` around lines 61 - 64, The external
anchor tags rendering the MaM and ABS links need rel="noopener noreferrer" to
prevent reverse tabnabbing: update the anchor that uses duplicate_of.mam_id (the
MaM link) and the anchor that uses abs_url/abs_id (the ABS link) to add
rel="noopener noreferrer" alongside the existing target=_blank attribute so both
links include target and the rel attribute.

Comment on lines +40 to +41
<a href="/torrents/{{ mam_id }}" target=_blank>open</a>
<a href="https://www.myanonamouse.net/t/{{ mam_id }}" target=_blank>MaM</a>

⚠️ Potential issue | 🟠 Major

🧩 Analysis chain

🏁 Script executed:

fd errors.html

Repository: StirlingMouse/MLM

Length of output: 95


🏁 Script executed:

cat -n server/templates/pages/errors.html | sed -n '35,45p'

Repository: StirlingMouse/MLM

Length of output: 504


🏁 Script executed:

rg 'target=_blank' server/templates/pages/errors.html

Repository: StirlingMouse/MLM

Length of output: 208


🏁 Script executed:

rg 'target=_blank' --type html --type txt -A 1 -B 1

Repository: StirlingMouse/MLM

Length of output: 7372


Add rel attribute to new-tab links to prevent window.opener access.

Lines 40–41 use target=_blank without rel="noopener" (or rel="noopener noreferrer" for external links), allowing the opened page to access window.opener and perform tabnabbing attacks. The first link is internal; the second is external.

Suggested fix
-        <a href="/torrents/{{ mam_id }}" target=_blank>open</a>
-        <a href="https://www.myanonamouse.net/t/{{ mam_id }}" target=_blank>MaM</a>
+        <a href="/torrents/{{ mam_id }}" target=_blank rel="noopener">open</a>
+        <a href="https://www.myanonamouse.net/t/{{ mam_id }}" target=_blank rel="noopener noreferrer">MaM</a>
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@server/templates/pages/errors.html` around lines 40 - 41, The two anchor tags
linking to "/torrents/{{ mam_id }}" and "https://www.myanonamouse.net/t/{{
mam_id }}" use target=_blank without a rel attribute; update the internal link
(href="/torrents/{{ mam_id }}") to include rel="noopener" and update the
external MaM link (href="https://www.myanonamouse.net/t/{{ mam_id }}") to
include rel="noopener noreferrer" so the opened pages cannot access
window.opener and to prevent tabnabbing.

<label>
<span>Language</span>
<select name=language>
<option value="">-- None --</option>

⚠️ Potential issue | 🟡 Minor

Add selected attribute when field is None.

The new "-- None --" options should be explicitly selected when the corresponding field is None. Currently, if a torrent has no language or category set, no option will have the selected attribute, relying on browser default behavior to show the first option.

Proposed fix

For the language select (line 32):

-      <option value="">-- None --</option>
+      <option value="" {% if torrent.meta.language.is_none() %}selected{% endif %}>-- None --</option>

For the category select (line 41):

-      <option value="">-- None --</option>
+      <option value="" {% if torrent.meta.cat.is_none() %}selected{% endif %}>-- None --</option>

Also applies to: 41-41

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@server/templates/pages/torrent_edit.html` at line 32, The empty-option
(<option value="">) entries in torrent_edit.html need to render with the
selected attribute when the torrent's corresponding field is None; update the
language select's empty option and the category select's empty option so they
conditionally include selected (e.g., render selected when torrent.language is
None for the language select and when torrent.category is None for the category
select), ensuring the template uses your project's conditional syntax to output
selected on the <option value=""> elements.
