Conversation
Caution: Review failed — failed to post review comments.

📝 Walkthrough

Adds a full-stack Dioxus web frontend (mlm_web_dioxus) with UI components, pages, server functions, SSE endpoints, styling, Playwright e2e tests, build/workflow updates, HTML sanitization, and integrations with existing backend services (DB, MaM, qBittorrent).
Sequence Diagram(s)

```mermaid
sequenceDiagram
    participant Client as Client (browser)
    participant Web as mlm_web_dioxus (Server/Web)
    participant DB as Database (mlm_db)
    participant MaM as MaM (external)
    participant QBit as qBittorrent (external)
    Client->>Web: HTTP request /api (e.g., get_torrent_detail)
    Web->>DB: read torrent + events
    alt requires MaM data
        Web->>MaM: query MaM APIs
        MaM-->>Web: MaM results
    end
    alt qbit action requested
        Web->>QBit: qBittorrent API call (category/tags/start/stop)
        QBit-->>Web: qBittorrent response
    end
    Web-->>Client: JSON/SSR response
    Note over Web,Client: SSE updates push via /dioxus-* endpoints for stats/events/selected/errors
```
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~60 minutes
@coderabbitai review
✅ Actions performed: Review triggered.
|
4bf7ac2 to
8539922
Compare
2f35a26 to
afa1069
Compare
|
@coderabbitai review |
✅ Actions performedReview triggered.
|
50bbf7c to
c314b40
Compare
afa1069 to
6c4a9d6
Compare
c314b40 to
1b713a5
Compare
6c4a9d6 to
333eade
Compare
|
@coderabbitai review |
✅ Actions performedReview triggered.
|
There was a problem hiding this comment.
Actionable comments posted: 81
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@.cargo/config.toml`:
- Around line 5-11: Add documentation or a conditional fallback for the custom
linker settings so contributors aren't blocked by missing tools: either add a
README note (or a comment in this file) stating that the configuration under the
[target.x86_64-unknown-linux-gnu] and [target.aarch64-unknown-linux-gnu]
sections requires clang and mold to be installed, or change .cargo/config.toml
to make the linker/rustflags conditional (e.g., read an environment variable or
check existence of clang/mold before setting linker = "clang" and rustflags =
["-C", "link-arg=-fuse-ld=mold"]) so builds fall back to the system default
linker when those tools are absent.
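A minimal sketch of the documented variant suggested above — the section names and flags come from the comment; the note text is illustrative:

```toml
# NOTE: these targets require `clang` and `mold` to be installed
# (e.g. `apt install clang mold`). Remove or comment out this block
# to fall back to the system default linker.
[target.x86_64-unknown-linux-gnu]
linker = "clang"
rustflags = ["-C", "link-arg=-fuse-ld=mold"]

[target.aarch64-unknown-linux-gnu]
linker = "clang"
rustflags = ["-C", "link-arg=-fuse-ld=mold"]
```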
In `@Dockerfile`:
- Around line 8-16: Consolidate the multiple consecutive RUN steps that create
crates (the repeated `cargo new --lib app/...` and `cargo new --bin app/...`
plus the `touch /app/mlm_web_dioxus/src/lib.rs`) into a single RUN invocation so
the image produces fewer layers; update the Dockerfile to execute all `cargo
new` commands and the `touch` in one shell block (e.g., with a heredoc or a
single shell command joined by &&) while preserving the same crate names
(`mlm_db`, `mlm_mam`, `mlm_parse`, `mlm_meta`, `mlm_core`, `mlm_web_askama`,
`mlm_web_dioxus`, `server`) and the creation of `src/lib.rs` for
`mlm_web_dioxus`.
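The consolidated layer can be sketched as follows; crate names are taken from the comment, while the exact `--lib`/`--bin` split per crate (apart from `server`) is an assumption:

```dockerfile
# One layer instead of many: create all workspace member stubs in a single RUN.
RUN cargo new --lib app/mlm_db && \
    cargo new --lib app/mlm_mam && \
    cargo new --lib app/mlm_parse && \
    cargo new --lib app/mlm_meta && \
    cargo new --lib app/mlm_core && \
    cargo new --lib app/mlm_web_askama && \
    cargo new --lib app/mlm_web_dioxus && \
    cargo new --bin app/server && \
    touch /app/mlm_web_dioxus/src/lib.rs
```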
In `@justfile`:
- Around line 73-85: The lint recipe currently runs `cargo clippy` which doesn't
fail on warnings; update the `lint` recipe so Clippy treats warnings as errors
by invoking `cargo clippy -- -D warnings` (or equivalently adding the `-D
warnings` flags), and keep `check` depending on `lint` so CI will fail when new
Clippy warnings are introduced; make the change in the justfile's `lint` recipe
and ensure `check` still references the `lint` and `fmt-check` recipes.
- Around line 19-20: Change the dx-dev recipe so the dx serve command binds to
localhost instead of 0.0.0.0: update the addr flag in the dx serve invocation
used in the dx-dev recipe (look for the "dx-dev" recipe and the "dx serve
--fullstack --web --addr 0.0.0.0" invocation) to use 127.0.0.1, and add a new
optional recipe (e.g., dx-dev-lan or dx-dev-remote) that uses --addr 0.0.0.0 for
explicit opt-in LAN access.
- Around line 30-32: The dx-build recipe currently runs an unconditional cargo
clean (cargo clean -p mlm_web_dioxus) which defeats incremental compilation;
remove that clean line from the dx-build target so dx-build only runs: cd
{{dioxus_dir}} && dx build --fullstack, and if desired add a separate opt-in
recipe (e.g., dx-clean) that runs cargo clean -p mlm_web_dioxus so cleaning
remains available but not baked into the normal build path.
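Taken together, the three justfile changes above might look like this — the recipe bodies assume the invocations quoted in the comments, and `--workspace --all-targets` is an added assumption:

```just
# Fail the build on new Clippy warnings.
lint:
    cargo clippy --workspace --all-targets -- -D warnings

# Bind to localhost by default; opt in to LAN exposure explicitly.
dx-dev:
    cd {{dioxus_dir}} && dx serve --fullstack --web --addr 127.0.0.1

dx-dev-lan:
    cd {{dioxus_dir}} && dx serve --fullstack --web --addr 0.0.0.0

# No unconditional `cargo clean`; cleaning is a separate opt-in recipe.
dx-build:
    cd {{dioxus_dir}} && dx build --fullstack

dx-clean:
    cargo clean -p mlm_web_dioxus
```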
In `@mlm_core/src/audiobookshelf.rs`:
- Line 632: The create_metadata function builds metadata using meta.description
but does not sanitize HTML like update_book; update create_metadata to pass
meta.description through clean_html (same sanitizer used in update_book) before
wrapping it in Some(...) so descriptions written to metadata.json are consistent
and sanitized; locate create_metadata and replace description:
Some(&meta.description) with a call that produces the cleaned string via
clean_html(meta.description) (ensuring types/ownership match the surrounding
code).
In `@mlm_core/src/autograbber.rs`:
- Around line 763-776: Logging uses two different arrow glyphs for diffs; make
them consistent. Pick one style (either " -> " or " → ") and update the debug
message that maps diff.iter().map(|field| format!(" {}: {} → {}", field.field,
field.from, field.to)) as well as the similar message in
update_selected_torrent_meta so both use the same arrow glyph; ensure you update
only the string literals in those format! calls to standardize the arrow across
both places.
In `@mlm_mam/src/api.rs`:
- Around line 6-11: The mam_cdn_base_url function is reading the wrong
environment variable (MLM_MAM_BASE_URL) so CDN requests can point to the main
site; change mam_cdn_base_url() to read MLM_MAM_CDN_BASE_URL instead and keep
the default as "https://cdn.myanonamouse.net" so it falls back to the CDN domain
when the CDN-specific env var is not set (update the std::env::var call inside
mam_cdn_base_url to reference "MLM_MAM_CDN_BASE_URL").
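The corrected helper can be sketched with the standard library only (function name, variable name, and default taken from the comment):

```rust
use std::env;

// Read the CDN-specific variable, falling back to the CDN domain
// instead of MLM_MAM_BASE_URL (which points at the main site).
fn mam_cdn_base_url() -> String {
    env::var("MLM_MAM_CDN_BASE_URL")
        .unwrap_or_else(|_| "https://cdn.myanonamouse.net".to_string())
}

fn main() {
    // With the variable unset, the CDN default is returned.
    assert_eq!(mam_cdn_base_url(), "https://cdn.myanonamouse.net");
}
```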
In `@mlm_web_askama/src/pages/config.rs`:
- Around line 1-2: The blanket module-level attribute #![allow(dead_code)] is
too broad; either replace it with targeted allows on the specific items that are
currently unused (e.g., add #[allow(dead_code)] to the specific
functions/structs/enums that need it) or keep the module attribute but add a
clear comment with a TODO explaining this is temporary during the Dioxus port
and must be removed once stabilised; update the attribute placement or comment
near the existing #![allow(dead_code)] to document intent and plan for removal.
In `@mlm_web_dioxus/.manifest.json`:
- Around line 1-64: Remove the generated mlm_web_dioxus/.manifest.json from
version control and prevent it from being re-added by updating .gitignore to
ignore that file (e.g., add a line matching mlm_web_dioxus/.manifest.json or a
pattern like **/.manifest.json), then run git rm --cached
mlm_web_dioxus/.manifest.json and commit the change; ensure no code changes
reference the manifest and confirm the file is removed from the next push.
In `@mlm_web_dioxus/assets/style.css`:
- Around line 33-35: Replace the non-standard :focus-ring pseudo-class with the
standard :focus-visible across the stylesheet; update every occurrence including
the selectors "&:focus-ring" (multiple places), "&&:focus-ring",
".column_selector_trigger:focus-ring", and any other "&:focus-ring" instances so
they read "&:focus-visible", "&&:focus-visible",
".column_selector_trigger:focus-visible", etc., ensuring consistent
keyboard-focus styling everywhere.
- Line 81: The .link CSS rule contains a typo "diplay: inline;" which browsers
ignore; change the property name to "display" in the rule for ".link" so the
intended inline layout is applied (update the CSS selector containing "diplay"
to use "display: inline;").
- Line 396: Replace the deprecated CSS declaration "word-break: break-word" with
"overflow-wrap: break-word" in the stylesheet; search for instances of the rule
(e.g., the selector block containing "word-break: break-word" around where the
property appears) and update them to use overflow-wrap: break-word (apply the
same change to the other occurrence noted in the file).
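Taken together, the three stylesheet fixes look like this; the selectors come from the comments, while the rule bodies and the `.wrapped_text` selector are placeholders:

```css
/* non-standard :focus-ring → standard :focus-visible */
.column_selector_trigger:focus-visible {
  outline: 2px solid currentColor; /* illustrative declaration */
}

/* typo fix: "diplay" → "display" */
.link {
  display: inline;
}

/* deprecated "word-break: break-word" → standard property */
.wrapped_text {
  overflow-wrap: break-word;
}
```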
In `@mlm_web_dioxus/Cargo.toml`:
- Around line 47-49: The git dependencies native_db and qbit in Cargo.toml are
tracking branches which breaks reproducible builds; replace the
branch/default-branch pins with exact commit SHAs using rev = "<sha>" (take the
exact SHAs from the current Cargo.lock) so native_db uses rev instead of branch
= "0.8.x" and qbit uses rev instead of the implicit branch; update Cargo.lock by
running cargo update -p native_db --precise <sha> and cargo update -p qbit
--precise <sha> (or run a full cargo build) to ensure the lockfile matches the
new revs before releasing.
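A sketch of the pinned form — the git URLs and SHAs are deliberately left as placeholders to be taken from the existing Cargo.lock:

```toml
# Pin git dependencies to exact commits for reproducible builds,
# instead of tracking a branch.
native_db = { git = "…", rev = "<sha from Cargo.lock>" }
qbit = { git = "…", rev = "<sha from Cargo.lock>" }
```

After editing, `cargo update -p native_db --precise <sha>` (and likewise for `qbit`) brings the lockfile in line with the new revs.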
In `@mlm_web_dioxus/Dioxus.toml`:
- Around line 1-7: Update the Dioxus.toml to the 0.7 schema by replacing the
legacy keys: change application.public_dir to asset_dir = "../server/assets" (or
another dedicated static-assets subdirectory) to avoid exposing the whole
server, move web.app_title into the nested table as [web.app] title = "MLM", and
move watcher_index_on_404 into [web.watcher] as index_on_404 = true; ensure the
top-level [application] table keeps name = "mlm_web_dioxus" and remove the old
public_dir, app_title, and watcher_index_on_404 keys.
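The migrated file would then read roughly as follows, with every key taken from the comment above:

```toml
[application]
name = "mlm_web_dioxus"
asset_dir = "../server/assets"

[web.app]
title = "MLM"

[web.watcher]
index_on_404 = true
```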
In `@mlm_web_dioxus/src/app.rs`:
- Around line 107-114: The functions EventsWithQuery and TorrentsWithQuery
currently take an intentionally unused parameter named segments which triggers
compiler warnings; rename the parameter to start with an underscore (e.g.,
_segments: Vec<String>) in both function signatures to silence the
unused-parameter warning while preserving the API and behavior in
EventsWithQuery and TorrentsWithQuery.
In `@mlm_web_dioxus/src/components/download_buttons.rs`:
- Around line 46-68: The closure handle_download in DownloadButtons is a
non-cloneable FnMut moved into multiple onclicks; replace it with a cloneable
callback (e.g., use_callback / an Rc-wrapped closure) so each onclick can call a
cloned handler. Specifically, create a handle_download_cb via use_callback (or
wrap in Rc) that captures clones of mam_id, props.on_status, props.on_refresh
and the loading signal, calls select_torrent_action asynchronously and updates
on_status/on_refresh/loading the same way as the original handle_download; then
use handle_download_cb.clone() (or pass the EventHandler) into each onclick
instead of moving the original handle_download. Ensure select_torrent_action,
on_status, on_refresh, mam_id and loading are referenced via cloned handles
inside the callback so the handler is Send + Clone across multiple handlers.
In `@mlm_web_dioxus/src/components/filter_controls.rs`:
- Around line 10-46: The dropdown stays open because there is no outside-click
handler; update ColumnSelector to add NodeRef(s) (e.g., for column_selector_menu
and/or column_selector_trigger) and install a document click listener in a
use_on_mount/use_effect hook that checks if the event.target is outside those
NodeRef nodes and calls is_open.set(false) to close the menu; ensure the
listener is removed on unmount (cleanup) and keep existing toggle behavior
intact—alternatively implement focusout/blur handlers on the menu container to
achieve the same effect.
In `@mlm_web_dioxus/src/components/task_box.rs`:
- Around line 33-35: The "run now" button in the TaskBox component currently
lacks an explicit type which causes it to act as a submit button when TaskBox is
placed inside a form; update the button element inside TaskBox (the element
using onclick: move |_| on_run.call(())) to include type="button" so clicking it
only triggers the on_run callback and does not submit the parent form.
In `@mlm_web_dioxus/src/config.rs`:
- Around line 366-384: The code currently calls update_torrent_meta repeatedly
and each call creates its own read-write transaction via
context.db().rw_async(), causing overhead; instead accumulate the changed items
(pairing torrent/mam_torrent/new_meta) into a Vec, open a single rw_async()
transaction once (await context.db().rw_async()), then iterate the Vec and call
update_torrent_meta passing that single transaction for each item (or implement
a batch helper like update_torrent_meta_batch that accepts the Vec and the
transaction), and finally commit/complete the transaction; ensure the call sites
use the same transaction object and handle/propagate errors as before.
- Around line 293-303: The code currently collects all results into memory via
the chain
context.db().r_transaction().scan().primary::<Torrent>().all().collect::<Result<Vec<_>,
_>>() which loads every Torrent into a Vec; change this to a streaming/batched
approach: iterate the scan result directly (avoid .collect) or use the DB
client's pagination API to fetch in chunks, processing each Torrent as you
receive it (e.g., replace the .all().collect(...) pattern with a streaming
iterator/for_each over the scan results or a loop that calls scan with
limit/offset), handle errors per-item with ServerFnError::new(e.to_string()),
and only buffer a small batch at a time instead of building a full Vec<Torrent>.
In `@mlm_web_dioxus/src/duplicate/components.rs`:
- Around line 147-172: When the query request key changes in the route-change
handling block (the one using parse_query_state, build_query_url,
last_request_key, and duplicate_data.restart()), also clear the selection state
so stale selections don't persist: call the appropriate methods on the selected
container (e.g., selected.clear() or selected.set(Vec::new())) and reset
last_selected_idx (e.g., last_selected_idx.set(None) or
last_selected_idx.set(-1) depending on its type) immediately after updating
sort/asc/filters/from/page_size and before restarting duplicate_data; make the
same change in the other identical block around lines 193-209 so both sync paths
reset selected and last_selected_idx when the route_request_key changes.
- Around line 174-178: The code is mutating state during render by calling
cached.set(Some(data.clone())) inside the render branch for value; move this
side-effect into a use_effect (or equivalent effect hook) so state updates occur
after render. Specifically, remove cached.set(...) from the block that reads
value (the code using the variable named value and the cached variable) and
instead create a use_effect that depends on the resource/readable (the same
value) and performs cached.set(Some(data.clone())) when the resource becomes
Some(Ok(data)); ensure the effect clones data and sets cached there to avoid
mutating state during render and potential rerender loops.
In `@mlm_web_dioxus/src/duplicate/mod.rs`:
- Around line 1-7: The module currently exposes its internal layout via `pub mod
components`, `pub mod server_fns`, and `pub mod types` even though the intended
public API is already re-exported (`DuplicatePage`, `apply_duplicate_action`,
`get_duplicate_data`, and `types::*`); change the declarations to private
modules (remove the `pub` from `mod components`, `mod server_fns`, and `mod
types`) so only the re-exports remain public and the internal module layout is
hidden, keeping the existing `pub use` lines intact.
In `@mlm_web_dioxus/src/duplicate/server_fns.rs`:
- Around line 186-260: The Replace branch (DuplicateBulkAction::Replace)
opens/commits DB transactions per torrent (context.db().r_transaction(),
context.db().rw_async(), rw.commit()) which hurts performance and prevents
atomic all-or-nothing updates; fix by moving the read/write transactions outside
the loop so you open a single r_transaction() and a single rw_async() before
iterating torrent_ids, collect/validate all SelectedTorrent inserts and
duplicate removals (using SelectedTorrent insertion and
rw.remove(duplicate_torrent)), then call rw.commit() once after the loop, and
keep the clean_torrent(...) calls either batched after commit or recorded to run
after successful commit to preserve atomicity.
- Around line 100-112: The DB scan currently silently drops errors via
filter_map(Result::ok) when building duplicates from
r.scan().primary::<DuplicateTorrent>(), so change the pipeline to surface/log
read errors instead of ignoring them: replace filter_map(Result::ok) with a step
that matches each Result and logs Err(e) (e.g. using tracing::error or your
project logger) with context (like "failed to read DuplicateTorrent") while only
passing Ok(t) forward; keep the subsequent filter using matches_filter and
filters unchanged so only successful reads are filtered and collected into
duplicates.
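The replacement for `filter_map(Result::ok)` can be sketched generically, with `eprintln!` standing in for the project's `tracing` logger:

```rust
// Log failed reads with context instead of silently dropping them,
// passing only the Ok values forward.
fn collect_logging_errors<T, E: std::fmt::Display>(items: Vec<Result<T, E>>) -> Vec<T> {
    items
        .into_iter()
        .filter_map(|res| match res {
            Ok(t) => Some(t),
            Err(e) => {
                eprintln!("failed to read DuplicateTorrent: {e}");
                None
            }
        })
        .collect()
}

fn main() {
    let rows: Vec<Result<u32, &str>> = vec![Ok(1), Err("corrupt row"), Ok(3)];
    assert_eq!(collect_logging_errors(rows), vec![1, 3]);
}
```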
In `@mlm_web_dioxus/src/duplicate/types.rs`:
- Around line 3-16: DuplicatePageFilter's serde attribute uses rename_all =
"snake_case" which is inconsistent with DuplicatePageSort and the prevailing
codebase convention; change the serde attribute on the DuplicatePageFilter enum
from #[serde(rename_all = "snake_case")] to #[serde(rename_all = "lowercase")]
so its variants serialize the same way as DuplicatePageSort (e.g., CreatedAt ->
"createdat") and align with other PageSort enums like SelectedPageSort.
In `@mlm_web_dioxus/src/errors/components.rs`:
- Around line 78-81: The use_effect currently reads ERRORS_UPDATE_TRIGGER and
always calls errors_data.restart(), causing unnecessary reloads; update the
effect to read and compare the trigger's current value (from
ERRORS_UPDATE_TRIGGER.read()) against a stored previous value (e.g., in a local
mutable or state) and only call errors_data.restart() when the value
meaningfully changed, or implement a short debounce/timer inside the effect to
coalesce frequent updates before calling errors_data.restart(); locate this
logic in the use_effect that references ERRORS_UPDATE_TRIGGER and
errors_data.restart() and add the change-detection or debounce there.
- Around line 262-276: Extract the dense shift-click selection block into a
reusable helper function (e.g., update_row_selection or handle_shift_selection)
that takes the MouseEvent or a boolean shift flag, the current selected state
(ReadSignal/WriteSignal), last_selected_idx signal, the slice/all_row_ids, the
current row_id and index i, and returns/sets the new selected set and updates
last_selected_idx; then replace the inline closure logic in components.rs (the
move |ev: MouseEvent| { ... } closure) with a call to that helper and reuse the
helper from the identical logic in TorrentsPage and SelectedPage to remove
duplication.
In `@mlm_web_dioxus/src/events/server_fns.rs`:
- Around line 66-82: The DB lookup paths (the assignments to db_torrent and
db_replacement using r.get().primary(...) and
r.get().secondary(...).ok().flatten()) currently swallow errors via
.ok().flatten() and treat failures as "not found"; replace those .ok().flatten()
usages with error handling that maps Err(_) into an early return using
server_err(...) (or propagate the error via server_err) so real
DB/index/deserialization errors bubble up instead of being treated as missing
rows; update both the db_torrent block and the db_replacement extraction (the
replaced_with -> r.get().primary(...) call) to check the Result and call
server_err(...) on Err prior to continuing.
- Around line 27-28: Clamp the client-controlled page_size_val to a safe upper
bound (introduce a MAX_PAGE_SIZE constant, e.g. 1000) instead of using the raw
unwrap_or value, and when computing the end offset use saturating_add (e.g.
from_val.saturating_add(page_size_val)) to prevent overflow/wraparound; update
both the initialization of page_size_val and any code that computes offsets
from_val + page_size_val to use the clamped value and saturating_add so
oversized requests cannot produce huge or wrapped bounds.
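A minimal sketch of the clamped bounds computation — the MAX_PAGE_SIZE value comes from the comment, while the default page size of 100 is an assumption:

```rust
const MAX_PAGE_SIZE: usize = 1000;

// Clamp the client-supplied page size and compute the end offset
// with saturating_add so oversized input cannot wrap around.
fn page_bounds(from: Option<usize>, page_size: Option<usize>) -> (usize, usize) {
    let from_val = from.unwrap_or(0);
    let page_size_val = page_size.unwrap_or(100).min(MAX_PAGE_SIZE);
    (from_val, from_val.saturating_add(page_size_val))
}

fn main() {
    // An oversized request is clamped to MAX_PAGE_SIZE.
    assert_eq!(page_bounds(None, Some(1_000_000)), (0, 1000));
    // A huge `from` saturates instead of overflowing.
    assert_eq!(page_bounds(Some(usize::MAX), Some(10)).1, usize::MAX);
}
```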
In `@mlm_web_dioxus/src/home.rs`:
- Around line 250-287: Avoid cloning the whole vectors by iterating over
references instead of calling data.autograbbers.clone(),
data.snatchlist_grabbers.clone(), and data.lists.clone(); change each loop to
for grab in &data.autograbbers / &data.snatchlist_grabbers / &data.lists and
capture only the needed small pieces into the async closure (e.g., let index =
grab.index.clone() or let index = *grab.index if index is Copy) before creating
EventHandler::new(move |_| { spawn(async move { let _ = run_search(index).await;
}); }) or run_import(index).await; keep existing per-field clones
(last_run.clone(), result.clone(), display_name) but remove the outer Vec clone
to avoid the large allocation.
- Around line 140-196: The four no-op functions (run_torrent_linker,
run_folder_linker, run_downloader, run_abs_matcher) should use the same error
handling as run_search/run_import: after obtaining context via
crate::error::get_context(), check the specific trigger field
(context.triggers.torrent_linker_tx, folder_linker_tx, downloader_tx,
audiobookshelf_tx) and return Err(ServerFnError::new("Missing trigger: <name>"))
(or at minimum log an error) when the Option is None; otherwise send on the tx
and return Ok(()). Update those functions to mirror the conditional+Err pattern
used in run_search/run_import so missing triggers are surfaced consistently.
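The conditional+Err pattern can be sketched dependency-free with a std channel standing in for the trigger channels, and `String` standing in for `ServerFnError`:

```rust
use std::sync::mpsc::{channel, Sender};

// Mirror the run_search/run_import pattern: a missing trigger becomes
// an explicit error instead of a silent no-op.
fn send_trigger(tx: &Option<Sender<()>>, name: &str) -> Result<(), String> {
    match tx {
        Some(tx) => tx.send(()).map_err(|e| e.to_string()),
        None => Err(format!("Missing trigger: {name}")),
    }
}

fn main() {
    assert!(send_trigger(&None, "torrent_linker").is_err());
    let (tx, _rx) = channel();
    assert!(send_trigger(&Some(tx), "downloader").is_ok());
}
```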
In `@mlm_web_dioxus/src/list.rs`:
- Around line 196-206: The code calls get_list_data(list_id, None) inside
use_server_future which disables server-side filtering while ListPageContent
later re-applies a show filter client-side, causing duplicate work or extra
payload; modify the use_server_future invocation to capture and forward the
current show filter (the same value ListPageContent uses) into get_list_data
instead of None, or remove server-side filtering logic from get_list_data if you
intend filtering purely client-side—locate use_server_future in this file and
the get_list_data signature to pass the filter param, and update ListPageContent
usage accordingly so server and client agree on where filtering occurs.
- Around line 376-472: The duplicate match/render logic for audio_torrent and
ebook_torrent should be extracted into a single helper to avoid repetition: add
a function (e.g., render_torrent_status) that accepts
&Option<ListItemTorrentDto>, a media_type label (&str) and the want flag (bool)
and returns an Element/rsx fragment; move the match on torrent.status and the
link/id/at rendering into that function, then replace the inline blocks that
reference audio_torrent/want_audio and ebook_torrent/want_ebook with calls to
render_torrent_status("audiobook", &item.audio_torrent, item.want_audio) and
render_torrent_status("ebook", &item.ebook_torrent, item.want_ebook)
respectively, preserving exact behavior for "Selected", "Wanted", "NotWanted",
"Existing", and the missing case.
- Around line 481-489: The UI doesn't refresh after a successful
mark_list_item_done call; update the success branch inside the spawn task to
trigger a data reload (e.g., call list_data.restart() or emit a shared signal
like refresh_trigger) so the ListPage re-fetches items; locate the async block
around mark_list_item_done(...) and modify the Ok(_) arm to call the refresh
mechanism before clearing marking_done.set(false), and ensure ListPage
subscribes to that same refresh_trigger (alongside STATS_UPDATE_TRIGGER) in its
use_effect so the UI updates immediately on success.
- Around line 86-110: The code is scanning all ListItem records using the
created_at secondary and then filtering by list.id in memory; replace the
DB-level scan with a secondary using the list_id key so the DB filters by list
first. In the chain starting from db.r_transaction()...scan(), call
.secondary::<ListItem>(ListItemKey::list_id, Some(list.id)) instead of
.secondary::<ListItem>(ListItemKey::created_at), remove the in-memory
.filter(|item| item.list_id == list.id), and keep the remaining show-based
filters (item_wants_audio, item_wants_ebook) and collection logic unchanged so
only matching list items are retrieved from the database.
In `@mlm_web_dioxus/src/lists.rs`:
- Line 187: The import use crate::app::Route; is placed at the bottom of
lists.rs; move it into the grouped imports at the top of the file (alongside the
other use ... lines) and remove the trailing import at line 187 so all imports
are consolidated near the top and there are no duplicate or out-of-order use
statements.
- Around line 39-48: The current DB scan pipeline assigned to db_lists uses
filter_map(|t| t.ok()) which silently drops any errors from the scan; update the
pipeline around
db.r_transaction().scan().secondary::<List>(ListKey::title).all() so that
scanning errors are captured and logged instead of discarded—e.g., transform
each item/result returned by .all() into a Result or pair, log any Err (using
your app logger / error reporting) with context like "list scan error", and only
collect the successful List values into db_lists; ensure you preserve the
existing ServerFnError mapping for fatal failures but explicitly handle and log
per-item scan errors rather than using filter_map.
In `@mlm_web_dioxus/src/main.rs`:
- Around line 114-120: The Context initialization wraps config in
Arc<Mutex<Config>> but the review suggests it's read-only; if config will never
be mutated at runtime, change the Context field type from Arc<Mutex<Config>> to
Arc<Config> and construct it with Arc::new(config) (remove the Mutex), then
update all usages that currently lock the mutex to use &*Arc or clone the Arc as
needed; if you do expect runtime updates (hot-reload), keep Arc<Mutex<Config>> —
ensure the chosen variant (Context.config type and how you construct it here)
matches throughout the codebase (functions/methods that access Context, Context
struct definition, and any places calling lock() on Context.config).
In `@mlm_web_dioxus/src/replaced/components.rs`:
- Around line 160-188: When the query key changes you must clear selection state
so stale selections can't reference out-of-range rows: inside the block that
compares last_request_key to route_request_key (the branch that updates
sort/asc/filters/from/page_size/show and restarts replaced_data), also reset the
selection signals/vars by clearing selected and resetting last_selected_idx to a
safe initial value; apply the identical change in the other/alternate sync block
that mirrors this update (the similar block around the other sync path) so both
code paths clear selected and last_selected_idx whenever last_request_key is
updated.
- Around line 141-149: The component is re-triggering use_server_future when
only column visibility toggles because show is included as an input but
get_replaced_data ignores it; remove show from the dependency inputs so toggling
columns won't refetch the dataset: update the use_server_future call that
constructs replaced_data to stop passing *show.read() (and stop wrapping it with
Some/whatever) and instead only pass the actual server-side inputs (sort, asc,
filters, from, page_size), or if server-side needs visibility, make
get_replaced_data actually use the show parameter; adjust the call/signature so
replaced_data, use_server_future, and get_replaced_data are consistent and show
only affects client-side rendering.
- Around line 406-427: The sort arrow direction is inverted because
get_replaced_data applies if asc { ord.reverse() } while the UI (SortHeader,
using the asc prop) expects asc=true to show an "up" (ascending) arrow; fix by
changing the server-side ordering logic in get_replaced_data so it reverses only
when asc is false (e.g., replace the if asc { ord.reverse() } else { ord } with
the inverse check), or alternatively flip the boolean passed into SortHeader/asc
consistently; update get_replaced_data, ReplacedPageSort handling, and any usage
of ord.reverse() to match the UI convention so displayed arrows match actual
ordering.
- Around line 190-194: The render currently calls cached.set(Some(data.clone()))
directly (inside the if let Some(Ok(data)) block using cached and value), which
must be moved into a use_effect hook; replace the direct state mutation in the
render path with a use_effect that depends on the resource signal (value) and,
when value.read() resolves to Some(Ok(data)), calls
cached.set(Some(data.clone())) inside the effect callback so the cache is
updated only after the resource successfully resolves and not during render.
In `@mlm_web_dioxus/src/replaced/server_fns.rs`:
- Around line 104-123: The sort direction is inverted in the replaced.sort_by
closure: it currently calls ord.reverse() when asc is true; change the
conditional so ascending returns ord and descending returns ord.reverse() (i.e.,
if asc { ord } else { ord.reverse() }) inside the closure that handles
ReplacedPageSort variants, updating the comparison logic used by
replaced.sort_by (the closure referencing ReplacedPageSort and the asc
variable); if the current inverted behavior was intentional, instead add a
clarifying comment near the replaced.sort_by block explaining the nonstandard
semantics.
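The intended direction logic (here and in the other sort_by closures flagged below) is small enough to sketch in isolation:

```rust
use std::cmp::Ordering;

// Ascending keeps the natural ordering; descending reverses it,
// matching the UI convention where asc=true shows the "up" arrow.
fn apply_direction(ord: Ordering, asc: bool) -> Ordering {
    if asc { ord } else { ord.reverse() }
}

fn main() {
    let mut v = vec![3, 1, 2];
    v.sort_by(|a, b| apply_direction(a.cmp(b), true));
    assert_eq!(v, vec![1, 2, 3]);
    v.sort_by(|a, b| apply_direction(a.cmp(b), false));
    assert_eq!(v, vec![3, 2, 1]);
}
```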
In `@mlm_web_dioxus/src/search.rs`:
- Around line 211-229: The current torrents.sort_by closure allocates formatted
strings for each comparison; change it to compare series fields directly or
cache a sort key to avoid per-comparison allocations: implement a helper (e.g.,
compare_series(a: &Torrent, b: &Torrent) -> Ordering) that iterates over
a.series and b.series and compares (name, entries) tuples lexicographically
(falling back to media_type), or precompute a Vec of keys (Vec<(Vec<(String,
u32)>, MediaType, usize)>) once (using the existing series and media_type
fields) and sort by those keys before reconstructing the sorted torrents;
reference the existing torrents.sort_by closure, the series field and media_type
comparisons when making the change.
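The allocation-free comparator can be sketched with simplified stand-in types (the real Torrent and MediaType definitions differ):

```rust
use std::cmp::Ordering;

// Simplified stand-ins for the real Torrent/MediaType types.
struct Torrent {
    series: Vec<(String, u32)>, // (name, entry) pairs
    media_type: u8,
}

// Compare the series tuples directly instead of formatting strings
// on every comparison; fall back to media_type on ties.
fn compare_series(a: &Torrent, b: &Torrent) -> Ordering {
    a.series
        .cmp(&b.series)
        .then_with(|| a.media_type.cmp(&b.media_type))
}

fn main() {
    let mut torrents = vec![
        Torrent { series: vec![("B".into(), 1)], media_type: 0 },
        Torrent { series: vec![("A".into(), 2)], media_type: 1 },
    ];
    torrents.sort_by(compare_series);
    assert_eq!(torrents[0].series[0].0, "A");
}
```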
- Around line 152-238: The get_search_data server function currently contains
manual #[cfg(feature = "server")] / #[cfg(not(feature = "server"))] blocks
inside the #[server] macro wrapper; remove those inner cfg branches and leave
only the implementation body (everything inside the #[cfg(feature = "server")]
block) directly under #[server] in the get_search_data function, keeping all
uses of SearchQuery, SearchFields, Tor, context, mam.search, mapping logic
(map_search_torrent), sorting logic for "series", and the final Ok(SearchData {
torrents, total }); delete the unreachable!() non-server branch and the extra
cfg wrappers so the #[server] macro can generate client/server variants
correctly.
In `@mlm_web_dioxus/src/selected/components.rs`:
- Around line 57-62: The current pattern uses use_signal + use_effect + spawn to
call get_selected_user_info, which prevents server-side rendering; replace this
with a server-aware hook (use_server_future or use_resource) to fetch the data
on the server and hydrate on the client: create a server future that calls
get_selected_user_info().await.ok().flatten() and drive the component state from
that future instead of spawning in use_effect, update references to user_info to
read from the chosen hook’s result, and remove the use_effect/spawn block
(ensure the SelectedUserInfo type and any necessary imports are retained).
- Around line 400-425: Extract the duplicated shift-click range selection logic
into a reusable helper or hook (e.g., a function like handle_range_selection)
and replace the inline onclick closure in the torrents grid with a call to that
helper; the helper should accept the mutable selection signal (`selected`), the
last-selected-index signal (`last_selected_idx`), the full id list
(`all_row_ids`), the current index (`i`) and current row id (`row_id`), plus the
click event/modifiers, compute will_select, update the selection set across the
start..=end range when shift is held, and update last_selected_idx—then update
the onclick in selected/components.rs to delegate to this helper instead of
duplicating the logic used in ErrorsPage and TorrentsPage.
In `@mlm_web_dioxus/src/selected/server_fns.rs`:
- Around line 154-160: The subtraction in the closure from mam.user_info() can
produce a negative value before casting to u64 (user_info.uploaded_bytes -
user_info.downloaded_bytes - downloading_size) / config.min_ratio; change the
mapping in the mam.user_info().await.ok().map(...) block to compute the
intermediate as a signed/float value, clamp it to a non-negative value (e.g.,
max(0.0, computed_value) or check if negative and use 0.0) before converting to
u64, and then pass that clamped u64 into
mlm_db::Size::from_bytes(...).to_string() so remaining_buffer never becomes a
wrapped/huge value.
- Around line 99-120: The sort ordering is inverted: in the closure passed to
torrents.sort_by (matching on SelectedPageSort) the boolean asc is applied
incorrectly by calling ord.reverse() when asc is true; change the branch so that
when asc is true you return ord, otherwise return ord.reverse(). Update the same
logic in the other affected functions (the equivalent sort_by closures in
torrents/server_fns.rs and replaced/server_fns.rs) to mirror the correct
behavior used in duplicate/server_fns.rs and errors/server_fns.rs so the UI ↑/↓
indicator matches the actual order.
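For the underflow fix in the first item, a minimal sketch of the clamped buffer math; the function name and parameters are illustrative, not the crate's actual API:

```rust
/// Remaining upload buffer before the ratio floor is hit.
/// All byte counts arrive as u64, so plain subtraction would wrap on
/// underflow; saturating_sub pins the intermediate at zero instead.
fn remaining_buffer_bytes(
    uploaded: u64,
    downloaded: u64,
    downloading: u64,
    min_ratio: f64,
) -> u64 {
    let surplus = uploaded
        .saturating_sub(downloaded)
        .saturating_sub(downloading);
    if min_ratio <= 0.0 {
        return surplus;
    }
    (surplus as f64 / min_ratio) as u64
}
```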
In `@mlm_web_dioxus/src/sse.rs`:
- Around line 30-85: The current setup_sse leaks EventSource and Closure handles
every time it's called (connect_sse and connect_sse_data), causing duplicate
subscriptions on repeated calls; make setup_sse idempotent by tracking whether
connections are already established (e.g. add a static flag/registry such as a
single AtomicBool or a Once/OnceCell or a static HashSet of URLs) and
early-return if already initialized, or store the leaked handles in a static
container so you only create and forget them once; update the functions
connect_sse and connect_sse_data (or the top-level setup_sse) to consult that
static marker/registry before creating new EventSource/Closure for a given URL
to prevent multiple subscriptions.
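One way to make setup idempotent, sketched with a std-only URL registry (`OnceLock` plus a `HashSet`); the real `connect_sse` functions would consult this before constructing an `EventSource`:

```rust
use std::collections::HashSet;
use std::sync::{Mutex, OnceLock};

/// Registry of SSE URLs that already have a live EventSource.
fn connected_urls() -> &'static Mutex<HashSet<String>> {
    static URLS: OnceLock<Mutex<HashSet<String>>> = OnceLock::new();
    URLS.get_or_init(|| Mutex::new(HashSet::new()))
}

/// Returns true only the first time a given URL is registered, so callers
/// can skip creating (and leaking) a second EventSource for it.
fn mark_connected(url: &str) -> bool {
    connected_urls().lock().unwrap().insert(url.to_string())
}
```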
In `@mlm_web_dioxus/src/ssr.rs`:
- Around line 103-112: The router currently routes /dioxus-selected-updates and
/dioxus-errors-updates to dioxus_stats_updates, causing wrong SSE events; add
two dedicated handler functions (e.g., dioxus_selected_updates and
dioxus_errors_updates) that mirror the pattern used in dioxus_stats_updates but
subscribe to the selected-update and errors-update streams (the same streams
triggered by trigger_selected_update and trigger_errors_update in sse.rs), and
update the Router to use get(dioxus_selected_updates) for
"/dioxus-selected-updates" and get(dioxus_errors_updates) for
"/dioxus-errors-updates" so each endpoint emits the correct events.
- Around line 76-98: The qBittorrent polling loop currently swallows errors from
qbit::Api::new_login_username_password and qbit.torrents by using `else {
continue; }`, which loses debugging context; change each `let Ok(...) = ... else
{ continue; }` to explicitly match the Err case and emit a debug/trace log with
the error (e.g., use tracing::debug! or log::debug!) including the qbit_conf.url
and the error, then continue; keep the successful branches as-is so
progress.push((mam_id, ...)) still runs when requests succeed.
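The explicit-match logging pattern from the second item can be sketched like this; `fetch` stands in for the qBittorrent call and `eprintln!` for `tracing::debug!`:

```rust
/// The `let Ok(..) = .. else { continue }` pattern drops the error;
/// matching explicitly keeps the context for debugging.
fn poll_once<T, E: std::fmt::Display>(
    url: &str,
    fetch: impl Fn() -> Result<T, E>,
) -> Option<T> {
    match fetch() {
        Ok(v) => Some(v),
        Err(e) => {
            // In the real loop this would be tracing::debug! before `continue`.
            eprintln!("qbit poll failed for {url}: {e}");
            None
        }
    }
}
```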
In `@mlm_web_dioxus/src/torrent_detail/components.rs`:
- Around line 941-942: The current check computing is_paused from the formatted
string is fragile and incorrect; instead, add an is_paused: bool field to
QbitData (or keep the TorrentState enum variant alongside the display string)
and set it from the original TorrentState value before calling
format_qbit_state(); then replace the string-contains logic that computes
is_paused (the line using qbit.torrent_state.to_lowercase().contains(...)) with
the new boolean field (or use the preserved enum variant) so callers use a typed
boolean rather than string matching; update the QbitData construction site where
format_qbit_state() is called and any consumers of qbit.torrent_state to use the
new is_paused or enum field.
- Around line 367-375: The MaM description is rendered with dangerous_inner_html
without sanitization; update the rendering path that uses mam.description inside
the Details component to sanitize the string first (reuse the existing
clean_html() sanitizer used for torrent.description/server_fns.rs) before
passing it to dangerous_inner_html, e.g., call clean_html(mam.description) (or
equivalent helper) where mam.description is read from the SearchTorrent struct
so the output passed to dangerous_inner_html is safe.
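A sketch of the typed `is_paused` approach from the first item; `TorrentState` and `QbitData` here are simplified stand-ins for the real types:

```rust
/// Keeping the typed state alongside the display string avoids fragile
/// substring checks like `torrent_state.to_lowercase().contains("paused")`.
#[derive(Clone, Copy, PartialEq, Debug)]
enum TorrentState {
    Downloading,
    PausedDl,
    PausedUp,
    Uploading,
}

struct QbitData {
    torrent_state: String, // human-readable, from format_qbit_state()
    is_paused: bool,       // derived from the enum, not the string
}

impl QbitData {
    fn new(state: TorrentState) -> Self {
        let is_paused = matches!(state, TorrentState::PausedDl | TorrentState::PausedUp);
        QbitData {
            torrent_state: format!("{state:?}"),
            is_paused,
        }
    }
}
```

Consumers then branch on `qbit.is_paused` and only use `torrent_state` for display.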
In `@mlm_web_dioxus/src/torrent_detail/server_fns.rs`:
- Around line 250-286: The function get_downloaded_torrent_detail currently
performs a write by clearing torrent.replaced_with when the replacement is
missing (replacement_missing) via db.rw_async(), rw.upsert, and rw.commit;
extract that side-effect into a dedicated cleanup function (e.g.,
clear_stale_replaced_with(db, torrent_id) or Db::cleanup_stale_replacements) and
call it from an explicit maintenance path or make the cleanup conditional via an
explicit parameter (e.g., allow_cleanup: bool) so the read function only reads
by default; update references to replaced_with and the rw_async/upsert/commit
logic to live in the new function and document the behavior where it’s invoked.
- Around line 115-248: The SearchTorrent construction in other_torrents_data
duplicates map_search_torrent logic; replace the manual mapping with a call to
the existing map_search_torrent function from search.rs. Retrieve the same
inputs map_search_torrent needs (call mam_torrent.as_meta() to get meta, compute
can_wedge/media_*/old_category or pass config/meta/mam_torrent/r/optional
torrent/selected as required), import or reference map_search_torrent, and
return its Result-wrapped SearchTorrent for each mam_torrent instead of
reconstructing the struct inline in other_torrents_data.
- Around line 325-335: The code assigns library_files using blocking
std::fs::read_dir inside async code; replace that pipeline with an async read
using tokio::fs::read_dir: if let Some(p) = torrent.library_path.as_ref() { let
mut rd = tokio::fs::read_dir(p).await?; let mut files = Vec::new(); while let
Some(entry) = rd.next_entry().await? { files.push(entry.path()); }

library_files = files; } else { library_files = Vec::new(); } Ensure you handle
Result/Option errors similarly to the original (e.g., convert errors to
.ok()/.unwrap_or_default or propagate with ?), and update the function signature
to be async if necessary; reference the library_files variable and
torrent.library_path to locate the replacement spot.
- Around line 936-997: remove_seeding_files_action is using blocking std::fs
operations inside an async fn; replace those with async Tokio equivalents: use
tokio::fs::read_dir (or tokio::fs::read_dir wrapped in an async collection loop)
to build library_files_set, replace path.exists() with tokio::fs::metadata or
tokio::fs::try_exists, and replace std::fs::remove_file with
tokio::fs::remove_file (or call blocking ops via tokio::task::spawn_blocking if
you must keep synchronous APIs). Update the logic that builds library_files_set,
the existence check before deletion, and the deletion loop (referencing
qbit_torrent.save_path, files, files_to_remove, and the
remove_seeding_files_action function) to await the async fs calls and propagate
errors via ServerFnError as currently done.
In `@mlm_web_dioxus/src/torrent_edit.rs`:
- Around line 221-228: You are loading the Torrent via a read transaction before
acquiring the write lock which risks overwriting a concurrent upsert; move the
fetch into the write transaction instead: call
context.db().rw_async().server_err()? (or the codepath that acquires the write
transaction) and then perform
.get().primary::<mlm_db::Torrent>(form.torrent_id.clone()).server_err()?
.ok_or_server_err("Torrent not found")? inside that same rw transaction so the
rw snapshot includes the latest data; apply the same change to the other
occurrence that currently uses context.db().r_transaction() before rw_async().
- Around line 397-400: The form_state is being overwritten on every keystroke
because the code in the component unconditionally sets form_state to the fetched
data when data_res changes; update the logic in the component that uses
data_res, form_state and torrent_id so that you only initialize form_state once
or when the loaded torrent_id changes (e.g., only call
form_state.set(Some(data.clone())) if form_state.read().is_none() or if the
current loaded id differs from a saved previous_torrent_id state), rather than
on every render comparison of form_state and data.
In `@mlm_web_dioxus/src/torrents/components.rs`:
- Around line 55-95: TorrentsPageColumns' get and set methods duplicate the same
match-arm mapping for TorrentColumn; replace that boilerplate with a small macro
that takes a list of (Variant, field) pairs and generates both the get(&self,
col: TorrentColumn) -> bool and set(&mut self, col: TorrentColumn, enabled:
bool) implementations. Create a macro (e.g., torrents_columns! or
impl_torrents_columns!) that iterates the pairs and emits the match arms for
TorrentColumn::Category => self.category (and assignment for set), then invoke
it for the TorrentsPageColumns impl to replace the explicit match blocks in the
get and set functions.
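A minimal version of the suggested macro; the struct and enum are trimmed to two columns for illustration:

```rust
#[derive(Default)]
struct TorrentsPageColumns {
    category: bool,
    title: bool,
}

enum TorrentColumn {
    Category,
    Title,
}

/// One (Variant, field) list drives both accessors, so adding a column
/// touches a single line instead of two match blocks.
macro_rules! impl_torrents_columns {
    ($(($variant:ident, $field:ident)),+ $(,)?) => {
        impl TorrentsPageColumns {
            fn get(&self, col: TorrentColumn) -> bool {
                match col {
                    $(TorrentColumn::$variant => self.$field,)+
                }
            }
            fn set(&mut self, col: TorrentColumn, enabled: bool) {
                match col {
                    $(TorrentColumn::$variant => self.$field = enabled,)+
                }
            }
        }
    };
}

impl_torrents_columns!((Category, category), (Title, title));
```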
In `@mlm_web_dioxus/src/torrents/mod.rs`:
- Around line 6-8: Replace the glob re-exports in this module (currently `pub
use server_fns::*;` and `pub use types::*;`) with explicit exports to match the
pattern used in `events` and `selected`; for example, export `get_torrents_data`
and `apply_torrents_action` from the `server_fns` module and export the main
types `TorrentsData`, `TorrentsPageFilter`, and `TorrentsPageSort` from the
`types` module (keeping `pub use components::TorrentsPage;` as-is), so only the
intended public API is re-exported instead of everything in those modules.
In `@mlm_web_dioxus/src/torrents/server_fns.rs`:
- Around line 206-212: The sort direction is inverted: in the closure used to
sort filtered_torrents (the match on TorrentsPageSort and subsequent if asc {
ord.reverse() } else { ord }), remove the inversion so that when asc is true the
comparator returns ascending order and when false it returns descending; update
the sorting logic in the sort_by / sort_by_key block that references
TorrentsPageSort and the asc variable (and the
filtered_torrents.sort_by_key(|(_, score)| ...) path if needed) to ensure the
SortHeader's "↑" maps to ascending results and "↓" to descending.
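The intended direction mapping, extracted into a tiny helper so every affected sort closure can share it; the names are illustrative:

```rust
use std::cmp::Ordering;

/// `asc == true` must yield ascending output; applying `.reverse()` on the
/// ascending branch (as the current code does) flips the UI arrows.
fn apply_direction(ord: Ordering, asc: bool) -> Ordering {
    if asc { ord } else { ord.reverse() }
}

fn sort_titles(titles: &mut [&str], asc: bool) {
    titles.sort_by(|a, b| apply_direction(a.cmp(b), asc));
}
```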
In `@mlm_web_dioxus/src/utils.rs`:
- Around line 45-55: The timestamp returned by format_timestamp_db is formatted
from the raw OffsetDateTime without converting to local time, causing
inconsistent timezones versus format_timestamp and format_datetime; update
format_timestamp_db (and its use of DbTimeValue::as_timestamp and
DATETIME_FORMAT) to convert the OffsetDateTime to the local offset before
zeroing nanoseconds and formatting (mirror the local-time conversion used in
format_timestamp/format_datetime), or if UTC-only output was intended, add a
clear comment stating that behavior.
In `@playwright.config.ts`:
- Around line 12-14: The Playwright config only defines the projects array with
a single project using browserName 'chromium', so add additional project objects
for Firefox and WebKit to enable cross-browser tests: update the projects array
(symbol: projects) to include entries with name 'firefox' and 'webkit' and
use.browserName set to 'firefox' and 'webkit' respectively (matching the
existing { name: 'chromium', use: { browserName: 'chromium' } } pattern),
ensuring any shared use settings are reused or copied as needed.
In `@server/assets/style.css`:
- Around line 282-299: The new entrance animation on .column_selector_menu
(animation: column-menu-in 150ms) and the other animations introduced later
(stale-table fade and spinner add) should respect prefers-reduced-motion: add a
media query `@media` (prefers-reduced-motion: reduce) that disables
animations/transitions for these UI pieces by setting animation: none!important
and transition-duration: 0s/0ms (and remove any transform keyframe effects) for
.column_selector_menu, the stale-table selector(s) and the spinner selector(s)
referenced in the later block (lines ~475-519); this ensures users who opt out
of motion get immediate, non-animated state while leaving the original
animations intact for others.
- Around line 709-711: In the .Torrents rule replace the deprecated declaration
"word-break: break-word" with a supported overflow-wrap declaration to preserve
the same wrapping behavior across engines; update the CSS inside the .Torrents
block by removing "word-break: break-word" and adding "overflow-wrap:
break-word;" (or "overflow-wrap: anywhere;" if you want more aggressive
breaking) so stylelint no longer flags the rule and wrapping remains
predictable.
- Around line 788-790: The current nested selector expands to
`.Torrents:not(.nohover, .Torrents) > div:hover`, which can never match; fix it
by removing the redundant `.Torrents` from the :not so the hover can apply —
change the selector to `.Torrents:not(.nohover) > div:hover` (or, if the
intention was to exclude individual rows, use `.Torrents >
div:not(.nohover):hover`) in the rule that currently contains `&:not(.nohover,
.Torrents)>div:hover`.
In `@server/src/bin/create_test_db.rs`:
- Around line 60-140: The timestamps are all set with Timestamp::now() (see
created_at, meta.uploaded_at and replaced_with) causing nearly identical times;
instead capture a base timestamp once (e.g., let base_ts = Timestamp::now())
outside the loop and apply deterministic offsets per iteration using Duration
(or the Timestamp API) to create variance (e.g., base_ts - seconds(i) or base_ts
+ seconds(i)) and use those adjusted values for meta.uploaded_at, created_at and
any replaced_with timestamps so tests can exercise time-based sorting/filtering
reliably.
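A sketch of the base-plus-offset pattern, using `SystemTime` in place of the crate's `Timestamp` type:

```rust
use std::time::{Duration, SystemTime};

/// Capture one base timestamp and derive per-row times from it, instead of
/// calling `now()` in the loop. The offsets are arbitrary but deterministic
/// relative to the base, so time-based sorting/filtering is exercised.
fn staggered_timestamps(base: SystemTime, count: u64) -> Vec<SystemTime> {
    (0..count)
        .map(|i| base - Duration::from_secs(i * 3600)) // each row 1h older
        .collect()
}
```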
In `@server/src/bin/mock_server.rs`:
- Around line 36-102: In qbit_torrents_info remove the duplicate "upspeed" field
from the returned JSON object so the key appears only once; locate the two
entries with "upspeed" (both set to 0) in the JSON literal inside
qbit_torrents_info and delete one of them (keep the appropriate integer-typed
upspeed entry and remove the redundant occurrence) so the JSON has a single
"upspeed" key.
In `@server/tests/linker_folder_test.rs`:
- Line 4: The test file removed the Events import but still calls Events::new()
in multiple places; restore the import or fully qualify the symbol. Re-add the
use statement (e.g., use mlm_core::events::Events;) at the top of
server/tests/linker_folder_test.rs so Events::new() resolves, or alternatively
replace each unqualified Events::new() call with the fully-qualified path
(mlm_core::events::Events::new()) to fix compilation; ensure references in test
helpers and assertions remain consistent with the chosen approach.
In `@tests/e2e/config.spec.ts`:
- Line 3: Replace the hardcoded BASE constant and full-URL uses in this test
with Playwright's configured baseURL: remove or stop using the const BASE =
'http://localhost:3998' and change any page.goto calls that use BASE (e.g.,
page.goto(`${BASE}/config`)) to relative paths like page.goto('/config') so
Playwright's baseURL in playwright.config.ts is used; apply the same change to
all other page.goto(...) usages in this file to centralize URL configuration.
In `@tests/e2e/mock.spec.ts`:
- Around line 3-4: Extract the duplicated BASE constant into a shared constant
(e.g., export const BASE) and update tests to import that constant rather than
redefining it; then update DETAIL_URL to derive from the imported BASE (replace
local BASE with the shared constant) or alternatively set Playwright's baseURL
in playwright.config.ts and reference it from tests, ensuring symbols DETAIL_URL
and BASE are updated to use the centralized value.
In `@tests/e2e/pages.spec.ts`:
- Around line 87-94: Remove the flaky page.waitForTimeout(1000) after
input.press('Enter') and instead wait for a concrete state: call await
page.waitForLoadState('networkidle') or await
expect(page.locator('form')).toBeVisible({ timeout: 5000 }) immediately after
input.press('Enter') so the test waits for the page/network/form readiness
rather than a fixed timeout; update the code around input.fill, input.press, and
the final expect(page.locator('form')).toBeVisible() accordingly and remove the
waitForTimeout call.
- Around line 39-43: The test currently skips interacting with sortBtn when it
doesn't exist, hiding failures; replace the conditional check with an explicit
assertion that the sort button exists (e.g., assert/count expectation on
sortBtn) before calling sortBtn.click(), then keep the subsequent await
page.waitForTimeout(300) and await noError(page) calls—this ensures failures are
surfaced rather than silently passing.
In `@tests/e2e/setup.ts`:
- Line 53: Replace the platform-unsafe use of process.env.HOME in the env object
setup with the cross-platform os.homedir() API: import homedir from 'os' (or use
require('os').homedir) at the top of tests/e2e/setup.ts and update the PATH
expression (currently `PATH:
`${process.env.HOME}/.cargo/bin:${process.env.PATH}``) to use `homedir()`
instead, preserving the existing PATH fallback (process.env.PATH) and any spread
of process.env.
In `@tests/e2e/torrent-detail.spec.ts`:
- Around line 83-89: Remove the fixed sleep (page.waitForTimeout) and instead
wait for the qBittorrent fetch to complete or the loading indicator to be
removed: replace the sleep after page.goto(DETAIL_URL) with either a
page.waitForResponse that matches the qBittorrent request URL/route or use the
existing locator '.loading-indicator' with hasText 'Loading qBittorrent data...'
and call locator.waitFor({ state: "detached" }) or waitFor({ state: "hidden" })
(or assert expect(locator).toHaveCount(0) immediately) so the test proceeds only
when the loading indicator is gone.
In `@tests/e2e/torrents.spec.ts`:
- Around line 77-91: The test 'sorting by title changes data order' uses fixed
page.waitForTimeout(500) after clicking the Title header; replace those sleeps
with condition-based waits: after calling titleSort.click() (the locator defined
as page.locator('.header button.link', { hasText: 'Title' })), wait for an
observable state change such as the loading indicator to appear and then
disappear (page.locator('.loading-indicator') appear/disappear) or wait for the
first row text to change by capturing
page.locator('.torrent-row').first().innerText() before the click and using
expect(...).not.toBe(...) / page.waitForFunction to detect the new value after
the click, then repeat for the second click; finally keep the existing
expect(page.locator('.error')).toHaveCount(0).
- Around line 14-21: The test "shows 35 torrents total" is brittle because it
asserts a hardcoded "35" in the page body; update the test to assert the torrent
count more resiliently by targeting the specific count element (replace the
generic body locator with a dedicated selector such as a data-testid like
"torrents-count" or a CSS selector for the count element) and assert its text
matches a numeric pattern (e.g., regex for one or more digits) after waiting for
the loading indicator ('.loading-indicator') to disappear; alternatively, if the
app doesn’t expose a stable selector yet, add a data-testid="torrents-count" to
the count component in the UI and use that in the test instead of expecting
"35".
ℹ️ Review info
⚙️ Run configuration
Configuration used: Organization UI
Review profile: ASSERTIVE
Plan: Pro
Run ID: e784cd20-d69d-4e96-9a61-57adf063044a
⛔ Files ignored due to path filters (26)
- `Cargo.lock` is excluded by `!**/*.lock`
- `pnpm-lock.yaml` is excluded by `!**/pnpm-lock.yaml`
- `server/assets/icons/mediatypes/Audiobook.png` is excluded by `!**/*.png`
- `server/assets/icons/mediatypes/AudiobookF.png` is excluded by `!**/*.png`
- `server/assets/icons/mediatypes/AudiobookNF.png` is excluded by `!**/*.png`
- `server/assets/icons/mediatypes/Comics.png` is excluded by `!**/*.png`
- `server/assets/icons/mediatypes/ComicsF.png` is excluded by `!**/*.png`
- `server/assets/icons/mediatypes/ComicsNF.png` is excluded by `!**/*.png`
- `server/assets/icons/mediatypes/EBook.png` is excluded by `!**/*.png`
- `server/assets/icons/mediatypes/EBookF.png` is excluded by `!**/*.png`
- `server/assets/icons/mediatypes/EBookNF.png` is excluded by `!**/*.png`
- `server/assets/icons/mediatypes/Manga.png` is excluded by `!**/*.png`
- `server/assets/icons/mediatypes/MangaF.png` is excluded by `!**/*.png`
- `server/assets/icons/mediatypes/MangaNF.png` is excluded by `!**/*.png`
- `server/assets/icons/mediatypes/Music.png` is excluded by `!**/*.png`
- `server/assets/icons/mediatypes/MusicF.png` is excluded by `!**/*.png`
- `server/assets/icons/mediatypes/MusicNF.png` is excluded by `!**/*.png`
- `server/assets/icons/mediatypes/Periodicals.png` is excluded by `!**/*.png`
- `server/assets/icons/mediatypes/PeriodicalsAudio.png` is excluded by `!**/*.png`
- `server/assets/icons/mediatypes/PeriodicalsAudioF.png` is excluded by `!**/*.png`
- `server/assets/icons/mediatypes/PeriodicalsAudioNF.png` is excluded by `!**/*.png`
- `server/assets/icons/mediatypes/PeriodicalsF.png` is excluded by `!**/*.png`
- `server/assets/icons/mediatypes/PeriodicalsNF.png` is excluded by `!**/*.png`
- `server/assets/icons/mediatypes/Radio.png` is excluded by `!**/*.png`
- `server/assets/icons/mediatypes/RadioF.png` is excluded by `!**/*.png`
- `server/assets/icons/mediatypes/RadioNF.png` is excluded by `!**/*.png`
📒 Files selected for processing (95)
`.cargo/config.toml`, `.cargo/sccache-wrapper.sh`, `.gitignore`, `Cargo.toml`, `Dockerfile`, `justfile`, `mlm_core/src/audiobookshelf.rs`, `mlm_core/src/autograbber.rs`, `mlm_db/src/impls/meta.rs`, `mlm_mam/src/api.rs`, `mlm_parse/Cargo.toml`, `mlm_parse/src/html.rs`, `mlm_parse/src/lib.rs`, `mlm_web_askama/src/lib.rs`, `mlm_web_askama/src/pages/config.rs`, `mlm_web_dioxus/.manifest.json`, `mlm_web_dioxus/Cargo.toml`, `mlm_web_dioxus/Dioxus.toml`, `mlm_web_dioxus/assets/style.css`, `mlm_web_dioxus/src/app.rs`, `mlm_web_dioxus/src/components/action_button.rs`, `mlm_web_dioxus/src/components/details.rs`, `mlm_web_dioxus/src/components/download_buttons.rs`, `mlm_web_dioxus/src/components/filter_controls.rs`, `mlm_web_dioxus/src/components/filter_link.rs`, `mlm_web_dioxus/src/components/icons.rs`, `mlm_web_dioxus/src/components/mod.rs`, `mlm_web_dioxus/src/components/pagination.rs`, `mlm_web_dioxus/src/components/query_params.rs`, `mlm_web_dioxus/src/components/search_row.rs`, `mlm_web_dioxus/src/components/sort_header.rs`, `mlm_web_dioxus/src/components/status_message.rs`, `mlm_web_dioxus/src/components/table_view.rs`, `mlm_web_dioxus/src/components/task_box.rs`, `mlm_web_dioxus/src/config.rs`, `mlm_web_dioxus/src/dto.rs`, `mlm_web_dioxus/src/duplicate/components.rs`, `mlm_web_dioxus/src/duplicate/mod.rs`, `mlm_web_dioxus/src/duplicate/server_fns.rs`, `mlm_web_dioxus/src/duplicate/types.rs`, `mlm_web_dioxus/src/error.rs`, `mlm_web_dioxus/src/errors/components.rs`, `mlm_web_dioxus/src/errors/mod.rs`, `mlm_web_dioxus/src/errors/server_fns.rs`, `mlm_web_dioxus/src/errors/types.rs`, `mlm_web_dioxus/src/events/components.rs`, `mlm_web_dioxus/src/events/mod.rs`, `mlm_web_dioxus/src/events/server_fns.rs`, `mlm_web_dioxus/src/events/types.rs`, `mlm_web_dioxus/src/home.rs`, `mlm_web_dioxus/src/lib.rs`, `mlm_web_dioxus/src/list.rs`, `mlm_web_dioxus/src/lists.rs`, `mlm_web_dioxus/src/main.rs`, `mlm_web_dioxus/src/replaced/components.rs`, `mlm_web_dioxus/src/replaced/mod.rs`, `mlm_web_dioxus/src/replaced/server_fns.rs`, `mlm_web_dioxus/src/replaced/types.rs`, `mlm_web_dioxus/src/search.rs`, `mlm_web_dioxus/src/selected/components.rs`, `mlm_web_dioxus/src/selected/mod.rs`, `mlm_web_dioxus/src/selected/query.rs`, `mlm_web_dioxus/src/selected/server_fns.rs`, `mlm_web_dioxus/src/selected/types.rs`, `mlm_web_dioxus/src/sse.rs`, `mlm_web_dioxus/src/ssr.rs`, `mlm_web_dioxus/src/torrent_detail/components.rs`, `mlm_web_dioxus/src/torrent_detail/mod.rs`, `mlm_web_dioxus/src/torrent_detail/server_fns.rs`, `mlm_web_dioxus/src/torrent_detail/types.rs`, `mlm_web_dioxus/src/torrent_edit.rs`, `mlm_web_dioxus/src/torrents/components.rs`, `mlm_web_dioxus/src/torrents/mod.rs`, `mlm_web_dioxus/src/torrents/query.rs`, `mlm_web_dioxus/src/torrents/server_fns.rs`, `mlm_web_dioxus/src/torrents/types.rs`, `mlm_web_dioxus/src/utils.rs`, `package.json`, `playwright.config.ts`, `server/Cargo.toml`, `server/assets/style.css`, `server/src/bin/create_test_db.rs`, `server/src/bin/mock_server.rs`, `server/src/main.rs`, `server/tests/dioxus/mod.rs`, `server/tests/linker_folder_test.rs`, `server/tests/linker_torrent_test.rs`, `server/tests/metadata_integration.rs`, `tests/e2e/config.spec.ts`, `tests/e2e/mock.spec.ts`, `tests/e2e/pages.spec.ts`, `tests/e2e/setup.ts`, `tests/e2e/teardown.ts`, `tests/e2e/torrent-detail.spec.ts`, `tests/e2e/torrents.spec.ts`
```toml
[target.x86_64-unknown-linux-gnu]
linker = "clang"
rustflags = ["-C", "link-arg=-fuse-ld=mold"]

[target.aarch64-unknown-linux-gnu]
linker = "clang"
rustflags = ["-C", "link-arg=-fuse-ld=mold"]
```
🧹 Nitpick | 🔵 Trivial
Document required toolchain dependencies.
This configuration requires clang and mold to be installed on Linux systems. Contributors without these tools will encounter linker errors. Consider adding a note in the README or a comment in this file about the prerequisites, or make these settings conditional (e.g., via environment-based .cargo/config.toml overrides).
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In @.cargo/config.toml around lines 5 - 11, Add documentation or a conditional
fallback for the custom linker settings so contributors aren't blocked by
missing tools: either add a README note (or a comment in this file) stating that
the configuration under the [target.x86_64-unknown-linux-gnu] and
[target.aarch64-unknown-linux-gnu] sections requires clang and mold to be
installed, or change .cargo/config.toml to make the linker/rustflags conditional
(e.g., read an environment variable or check existence of clang/mold before
setting linker = "clang" and rustflags = ["-C", "link-arg=-fuse-ld=mold"]) so
builds fall back to the system default linker when those tools are absent.
tests/e2e/config.spec.ts
Outdated
```diff
@@ -0,0 +1,37 @@
+import { test, expect } from '@playwright/test';
+
+const BASE = 'http://localhost:3998';
```
🧹 Nitpick | 🔵 Trivial
Consider using Playwright's baseURL from config instead of hardcoding.
The baseURL is already configured in playwright.config.ts as http://localhost:3998. Using page.goto('/config') instead of the full URL would leverage that config and make URL changes easier to manage centrally.
♻️ Suggested refactor
-const BASE = 'http://localhost:3998';
-
test.describe('Config page', () => {
test('legacy /config route redirects to Dioxus config page', async ({ page }) => {
- await page.goto(`${BASE}/config`);
+ await page.goto('/config');

Apply similar changes to other `page.goto()` calls in this file.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@tests/e2e/config.spec.ts` at line 3, Replace the hardcoded BASE constant and
full-URL uses in this test with Playwright's configured baseURL: remove or stop
using the const BASE = 'http://localhost:3998' and change any page.goto calls
that use BASE (e.g., page.goto(`${BASE}/config`)) to relative paths like
page.goto('/config') so Playwright's baseURL in playwright.config.ts is used;
apply the same change to all other page.goto(...) usages in this file to
centralize URL configuration.
tests/e2e/mock.spec.ts
Outdated
```ts
const BASE = 'http://localhost:3998';
const DETAIL_URL = `${BASE}/dioxus/torrents/torrent-001`;
```
🧹 Nitpick | 🔵 Trivial
Consider extracting shared constants to avoid duplication.
BASE URL is duplicated across multiple test files (mock.spec.ts, pages.spec.ts). Consider extracting to a shared constants file or leveraging Playwright's baseURL configuration in playwright.config.ts.
♻️ Example shared constants approach
// tests/e2e/constants.ts
export const BASE = 'http://localhost:3998';
export const MOCK_URL = 'http://localhost:3997';

Then import in test files:
import { BASE } from './constants';

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@tests/e2e/mock.spec.ts` around lines 3 - 4, Extract the duplicated BASE
constant into a shared constant (e.g., export const BASE) and update tests to
import that constant rather than redefining it; then update DETAIL_URL to derive
from the imported BASE (replace local BASE with the shared constant) or
alternatively set Playwright's baseURL in playwright.config.ts and reference it
from tests, ensuring symbols DETAIL_URL and BASE are updated to use the
centralized value.
tests/e2e/pages.spec.ts
Outdated
```ts
if (await sortBtn.count() > 0) {
  await sortBtn.click();
  await page.waitForTimeout(300);
  await noError(page);
}
```
🧹 Nitpick | 🔵 Trivial
Avoid conditional test logic that may mask failures.
The if (await sortBtn.count() > 0) pattern silently passes if the sort button doesn't exist, which could hide regressions. Consider asserting the button exists before interacting:
♻️ Proposed fix
test('sort header is interactive', async ({ page }) => {
await page.goto(`${BASE}/dioxus/errors`);
await noLoading(page);
const sortBtn = page.locator('.header button.link').first();
- if (await sortBtn.count() > 0) {
- await sortBtn.click();
- await page.waitForTimeout(300);
- await noError(page);
- }
+ await expect(sortBtn).toBeVisible();
+ await sortBtn.click();
+ // Wait for sort to complete by checking for stable state
+ await noLoading(page);
+ await noError(page);
});📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```ts
await expect(sortBtn).toBeVisible();
await sortBtn.click();
// Wait for sort to complete by checking for stable state
await noLoading(page);
await noError(page);
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@tests/e2e/pages.spec.ts` around lines 39 - 43, The test currently skips
interacting with sortBtn when it doesn't exist, hiding failures; replace the
conditional check with an explicit assertion that the sort button exists (e.g.,
assert/count expectation on sortBtn) before calling sortBtn.click(), then keep
the subsequent await page.waitForTimeout(300) and await noError(page) calls—this
ensures failures are surfaced rather than silently passing.
tests/e2e/pages.spec.ts
Outdated
```ts
const input = page.locator('input[type="text"], input[type="search"]').first();
if (await input.count() > 0) {
  await input.fill('Test Book');
  await input.press('Enter');
  await page.waitForTimeout(1000);
  // Page should still be functional (no JS crash)
  await expect(page.locator('form')).toBeVisible();
}
```
🧹 Nitpick | 🔵 Trivial
Replace waitForTimeout with explicit state assertions.
Using waitForTimeout(1000) creates flaky tests that may fail on slower systems or pass despite bugs on faster systems. Prefer waiting for a specific state change.
♻️ Proposed fix
test('can search and shows results', async ({ page }) => {
await page.goto(`${BASE}/dioxus/search`);
await expect(page.locator('form')).toBeVisible();
const input = page.locator('input[type="text"], input[type="search"]').first();
- if (await input.count() > 0) {
- await input.fill('Test Book');
- await input.press('Enter');
- await page.waitForTimeout(1000);
- // Page should still be functional (no JS crash)
- await expect(page.locator('form')).toBeVisible();
- }
+ await expect(input).toBeVisible();
+ await input.fill('Test Book');
+ await input.press('Enter');
+ // Wait for search to complete
+ await noLoading(page);
+ // Page should still be functional (no JS crash)
+ await expect(page.locator('form')).toBeVisible();
});

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@tests/e2e/pages.spec.ts` around lines 87 - 94, Remove the flaky
page.waitForTimeout(1000) after input.press('Enter') and instead wait for a
concrete state: call await page.waitForLoadState('networkidle') or await
expect(page.locator('form')).toBeVisible({ timeout: 5000 }) immediately after
input.press('Enter') so the test waits for the page/network/form readiness
rather than a fixed timeout; update the code around input.fill, input.press, and
the final expect(page.locator('form')).toBeVisible() accordingly and remove the
waitForTimeout call.
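The same advice recurs across these specs, so it may be worth a shared helper. Below is a minimal sketch of a condition-based wait — note that waitForCondition is an illustrative name, not a Playwright API; inside actual tests, Playwright's auto-retrying expect assertions or waitForResponse are usually the better tool:

```typescript
// Poll a predicate until it returns true or the timeout elapses.
// Unlike a fixed sleep, this returns as soon as the condition holds
// and fails loudly (with an error) when it never does.
async function waitForCondition(
  predicate: () => boolean | Promise<boolean>,
  timeoutMs = 5000,
  intervalMs = 50,
): Promise<void> {
  const deadline = Date.now() + timeoutMs;
  while (Date.now() < deadline) {
    if (await predicate()) return;
    await new Promise((res) => setTimeout(res, intervalMs));
  }
  throw new Error(`condition not met within ${timeoutMs}ms`);
}
```

In a spec this would replace `await page.waitForTimeout(1000)` with something like `await waitForCondition(async () => (await page.locator('.result').count()) > 0)`.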
tests/e2e/setup.ts

execSync('dx build --fullstack --skip-assets', {
  cwd: resolve(ROOT, 'mlm_web_dioxus'),
  stdio: 'inherit',
  env: { ...process.env, PATH: `${process.env.HOME}/.cargo/bin:${process.env.PATH}` },
🧩 Analysis chain
🏁 Script executed:
#!/bin/bash
# Check if there's a globalTeardown that cleans up the spawned processes
rg -l 'E2E_MOCK_PID|E2E_SERVER_PID' tests/

🏁 Script executed:
cat -n tests/e2e/teardown.ts

🏁 Script executed:
cat -n tests/e2e/setup.ts | head -100

🏁 Script executed:
rg 'process\.env\.HOME' -A 2 -B 2
Use homedir() from os module for cross-platform home directory access.
process.env.HOME is undefined on Windows. Use homedir() from the Node.js os module instead:
♻️ Proposed fix
import { execSync, spawn } from 'child_process';
import { existsSync, readdirSync } from 'fs';
+import { homedir } from 'os';
import { resolve } from 'path';
// Build WASM if not present
if (!wasmExists()) {
console.log('[e2e] Building WASM...');
execSync('dx build --fullstack --skip-assets', {
cwd: resolve(ROOT, 'mlm_web_dioxus'),
stdio: 'inherit',
- env: { ...process.env, PATH: `${process.env.HOME}/.cargo/bin:${process.env.PATH}` },
+ env: { ...process.env, PATH: `${homedir()}/.cargo/bin:${process.env.PATH}` },
});
}📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
import { execSync, spawn } from 'child_process';
import { existsSync, readdirSync } from 'fs';
import { homedir } from 'os';
import { resolve } from 'path';

// Build WASM if not present
if (!wasmExists()) {
  console.log('[e2e] Building WASM...');
  execSync('dx build --fullstack --skip-assets', {
    cwd: resolve(ROOT, 'mlm_web_dioxus'),
    stdio: 'inherit',
    env: { ...process.env, PATH: `${homedir()}/.cargo/bin:${process.env.PATH}` },
  });
}
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@tests/e2e/setup.ts` at line 53, Replace the platform-unsafe use of
process.env.HOME in the env object setup with the cross-platform os.homedir()
API: import homedir from 'os' (or use require('os').homedir) at the top of
tests/e2e/setup.ts and update the PATH expression (currently `PATH:
`${process.env.HOME}/.cargo/bin:${process.env.PATH}``) to use `homedir()`
instead, preserving the existing PATH fallback (process.env.PATH) and any spread
of process.env.
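The cross-platform pattern can be factored into small helpers; this is a sketch only (cargoBinPath and withCargoOnPath are illustrative names, not part of the test harness), using path.delimiter so the PATH separator is also correct on Windows:

```typescript
import { homedir } from 'os';
import { join, delimiter } from 'path';

// os.homedir() works on Windows, where process.env.HOME is undefined.
function cargoBinPath(): string {
  return join(homedir(), '.cargo', 'bin');
}

// Return a copy of the environment with the cargo bin directory
// prepended to PATH, using the platform's path-list delimiter.
function withCargoOnPath(
  env: Record<string, string | undefined>,
): Record<string, string | undefined> {
  return { ...env, PATH: `${cargoBinPath()}${delimiter}${env.PATH ?? ''}` };
}
```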
native_db = { git = "https://github.com/StirlingMouse/native_db.git", branch = "0.8.x", optional = true }
sublime_fuzzy = { version = "0.7", optional = true }
qbit = { git = "https://github.com/StirlingMouse/qbittorrent-webui-api.git", optional = true }
🧹 Nitpick | 🔵 Trivial
Pin the git dependencies to exact revisions.
native_db follows a branch tip and qbit follows the repository default branch. That makes fresh lockfile updates pull unreviewed upstream commits and breaks build reproducibility for this crate. Use the currently locked SHAs from Cargo.lock as rev = ... values before release.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@mlm_web_dioxus/Cargo.toml` around lines 47 - 49, The git dependencies
native_db and qbit in Cargo.toml are tracking branches which breaks reproducible
builds; replace the branch/default-branch pins with exact commit SHAs using rev
= "<sha>" (take the exact SHAs from the current Cargo.lock) so native_db uses
rev instead of branch = "0.8.x" and qbit uses rev instead of the implicit
branch; update Cargo.lock by running cargo update -p native_db --precise <sha>
and cargo update -p qbit --precise <sha> (or run a full cargo build) to ensure
the lockfile matches the new revs before releasing.
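A pinned form might look like the following; the rev values here are placeholders and should be replaced with the exact SHAs currently recorded in Cargo.lock:

```toml
native_db = { git = "https://github.com/StirlingMouse/native_db.git", rev = "<sha-from-Cargo.lock>", optional = true }
qbit = { git = "https://github.com/StirlingMouse/qbittorrent-webui-api.git", rev = "<sha-from-Cargo.lock>", optional = true }
```

Cargo treats a rev pin as immutable, so cargo update cannot silently move these dependencies to newer upstream commits.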
mlm_web_dioxus/src/torrents/mod.rs

pub use components::TorrentsPage;
pub use server_fns::*;
pub use types::*;
🧹 Nitpick | 🔵 Trivial
🧩 Analysis chain
🏁 Script executed:
#!/bin/bash
# Find what's publicly exported from server_fns and types to determine explicit re-exports
echo "=== server_fns exports ==="
rg -n "^pub (fn|async fn|struct|enum|type)" mlm_web_dioxus/src/torrents/server_fns.rs | head -20
echo "=== types exports ==="
rg -n "^pub (struct|enum|type)" mlm_web_dioxus/src/torrents/types.rs | head -20

🏁 Script executed:
#!/bin/bash
# Check re-export patterns in events/mod.rs and selected/mod.rs
echo "=== events/mod.rs re-exports ==="
fd -p "events/mod.rs" mlm_web_dioxus/src/torrents/ -x cat {}
echo ""
echo "=== selected/mod.rs re-exports ==="
fd -p "selected/mod.rs" mlm_web_dioxus/src/torrents/ -x cat {}
echo ""
echo "=== Current torrents/mod.rs ==="
cat mlm_web_dioxus/src/torrents/mod.rs

🏁 Script executed:
#!/bin/bash
# Search for events and selected modules in the torrents directory structure
echo "=== Structure of torrents directory ==="
find mlm_web_dioxus/src/torrents -type f -name "*.rs" | sort
echo ""
echo "=== Search for events module ==="
rg "mod events" mlm_web_dioxus/src/torrents/
echo ""
echo "=== Search for selected module ==="
rg "mod selected" mlm_web_dioxus/src/torrents/

🏁 Script executed:
#!/bin/bash
# Search entire codebase for events and selected module declarations
echo "=== Search for 'mod events' across entire src ==="
rg "mod events" mlm_web_dioxus/src/
echo ""
echo "=== Search for 'mod selected' across entire src ==="
rg "mod selected" mlm_web_dioxus/src/
echo ""
echo "=== Search for 'events/mod.rs' references ==="
rg "events/mod" mlm_web_dioxus/src/
echo ""
echo "=== Search for 'selected/mod.rs' references ==="
rg "selected/mod" mlm_web_dioxus/src/

🏁 Script executed:
#!/bin/bash
# Check the re-export patterns in the actual events and selected modules
echo "=== src/events/mod.rs ==="
cat mlm_web_dioxus/src/events/mod.rs
echo ""
echo "=== src/selected/mod.rs ==="
cat mlm_web_dioxus/src/selected/mod.rs

🏁 Script executed:
#!/bin/bash
# Check what's imported from the torrents module across the codebase
echo "=== Imports from torrents module ==="
rg "use.*torrents::" mlm_web_dioxus/src/ -A 2 | head -40
Align re-export pattern with events and selected modules for consistency.
This module uses glob re-exports (pub use server_fns::*; and pub use types::*;) while the top-level events/mod.rs and selected/mod.rs modules use explicit re-exports. For consistency across the codebase, consider replacing the glob re-exports with explicit ones listing only the items intended to be part of the public API (e.g., get_torrents_data, apply_torrents_action, and the primary types like TorrentsData, TorrentsPageFilter, and TorrentsPageSort).
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@mlm_web_dioxus/src/torrents/mod.rs` around lines 6 - 8, Replace the glob
re-exports in this module (currently `pub use server_fns::*;` and `pub use
types::*;`) with explicit exports to match the pattern used in `events` and
`selected`; for example, export `get_torrents_data` and `apply_torrents_action`
from the `server_fns` module and export the main types `TorrentsData`,
`TorrentsPageFilter`, and `TorrentsPageSort` from the `types` module (keeping
`pub use components::TorrentsPage;` as-is), so only the intended public API is
re-exported instead of everything in those modules.
            TorrentsPageSort::UploadedAt => a.meta.uploaded_at.cmp(&b.meta.uploaded_at),
        };
        if asc { ord.reverse() } else { ord }
    });
} else if query.is_some() {
    filtered_torrents.sort_by_key(|(_, score)| -*score);
}
🧩 Analysis chain
🏁 Script executed:
#!/bin/bash
# Search for how `asc` is used/set in the UI to verify expected behavior
rg -n -A3 -B3 '\basc\b' --type rust mlm_web_dioxus/src/torrents/

🏁 Script executed:
#!/bin/bash
# Find SortHeader component definition and its click handler
rg -n "fn SortHeader\|component.*SortHeader" --type rust mlm_web_dioxus/src/torrents/ -A 30

🏁 Script executed:
#!/bin/bash
# Broader search for SortHeader definition
rg -n "SortHeader" --type rust mlm_web_dioxus/src/torrents/components.rs | head -20

🏁 Script executed:
#!/bin/bash
# Check the imports at the top of components.rs
head -20 mlm_web_dioxus/src/torrents/components.rs

🏁 Script executed:
#!/bin/bash
# Find SortHeader definition in components module
find mlm_web_dioxus/src/components -name "*.rs" -exec grep -l "SortHeader" {} \;

🏁 Script executed:
#!/bin/bash
# Read the SortHeader component
cat mlm_web_dioxus/src/components/sort_header.rs
Sort direction logic is inverted—arrow indicators contradict actual sort order.
The code at line 208 reverses the order when asc=true, but the SortHeader component displays "↑" when asc=true (indicating ascending). This creates a mismatch:
- When user sees "↑" (ascending arrow), they get Z-A results
- When user sees "↓" (descending arrow), they get A-Z results
The fix is to remove the inversion:
Fix
- if asc { ord.reverse() } else { ord }
+ if asc { ord } else { ord.reverse() }
+ if asc { ord } else { ord.reverse() }📝 Committable suggestion
            TorrentsPageSort::UploadedAt => a.meta.uploaded_at.cmp(&b.meta.uploaded_at),
        };
        if asc { ord } else { ord.reverse() }
    });
} else if query.is_some() {
    filtered_torrents.sort_by_key(|(_, score)| -*score);
}
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@mlm_web_dioxus/src/torrents/server_fns.rs` around lines 206 - 212, The sort
direction is inverted: in the closure used to sort filtered_torrents (the match
on TorrentsPageSort and subsequent if asc { ord.reverse() } else { ord }),
remove the inversion so that when asc is true the comparator returns ascending
order and when false it returns descending; update the sorting logic in the
sort_by / sort_by_key block that references TorrentsPageSort and the asc
variable (and the filtered_torrents.sort_by_key(|(_, score)| ...) path if
needed) to ensure the SortHeader's "↑" maps to ascending results and "↓" to
descending.
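The intended convention can be stated language-neutrally; here is a minimal TypeScript sketch (directed is a hypothetical helper, not code from this PR) showing the mapping the UI arrow implies:

```typescript
// Direction-aware comparator: asc = true keeps the natural (ascending)
// order, asc = false reverses it, so an "↑" indicator always corresponds
// to ascending output and "↓" to descending.
function directed<T>(cmp: (a: T, b: T) => number, asc: boolean) {
  return (a: T, b: T) => (asc ? cmp(a, b) : -cmp(a, b));
}
```

The Rust fix above follows the same rule: the comparator result is used as-is when ascending and reversed when descending, never the other way around.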
tests/e2e/torrent-detail.spec.ts

await page.goto(DETAIL_URL);
await page.waitForTimeout(5_000); // give client-side fetches time to complete

// qbit loading indicator must be gone (either data or nothing rendered)
await expect(
  page.locator('.loading-indicator', { hasText: 'Loading qBittorrent data...' })
).toHaveCount(0);
🧹 Nitpick | 🔵 Trivial
Drop the fixed sleep in this test.
page.waitForTimeout(5_000) is explicitly discouraged by Playwright because time-based waits are flaky; wait on the qBittorrent response or the final DOM state directly instead. (playwright.dev)
Suggested change
test('qbit section is not stuck loading', async ({ page }) => {
+ const qbitResponse = page.waitForResponse(
+ res =>
+ res.request().method() === 'POST' &&
+ res.url().includes('/api/get_qbit_data') &&
+ res.status() === 200,
+ { timeout: 20_000 }
+ );
await page.goto(DETAIL_URL);
- await page.waitForTimeout(5_000); // give client-side fetches time to complete
+ await qbitResponse;
// qbit loading indicator must be gone (either data or nothing rendered)
await expect(
page.locator('.loading-indicator', { hasText: 'Loading qBittorrent data...' })
- ).toHaveCount(0);
+ ).toHaveCount(0, { timeout: 20_000 });
});📝 Committable suggestion
const qbitResponse = page.waitForResponse(
  res =>
    res.request().method() === 'POST' &&
    res.url().includes('/api/get_qbit_data') &&
    res.status() === 200,
  { timeout: 20_000 }
);
await page.goto(DETAIL_URL);
await qbitResponse;
// qbit loading indicator must be gone (either data or nothing rendered)
await expect(
  page.locator('.loading-indicator', { hasText: 'Loading qBittorrent data...' })
).toHaveCount(0, { timeout: 20_000 });
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@tests/e2e/torrent-detail.spec.ts` around lines 83 - 89, Remove the fixed
sleep (page.waitForTimeout) and instead wait for the qBittorrent fetch to
complete or the loading indicator to be removed: replace the sleep after
page.goto(DETAIL_URL) with either a page.waitForResponse that matches the
qBittorrent request URL/route or use the existing locator '.loading-indicator'
with hasText 'Loading qBittorrent data...' and call locator.waitFor({ state:
"detached" }) or waitFor({ state: "hidden" }) (or assert
expect(locator).toHaveCount(0) immediately) so the test proceeds only when the
loading indicator is gone.
test('shows 35 torrents total', async ({ page }) => {
  await page.goto(`${BASE}/dioxus/torrents`);
  await page.waitForSelector('h1');
  // Wait for data, not loading
  await expect(page.locator('.loading-indicator')).toHaveCount(0);
  // Verify total count text appears somewhere (e.g., "35" in the page)
  await expect(page.locator('body')).toContainText('35');
});
🧹 Nitpick | 🔵 Trivial
Hardcoded test data expectation may cause brittleness.
Line 20 expects exactly "35" torrents in the body text. If test data changes, this test will fail. Consider either:
- Making the assertion more flexible (e.g., checking for any number)
- Documenting the expected test fixture
- Using a data-testid attribute for the count element
📝 More resilient assertion example
- // Verify total count text appears somewhere (e.g., "35" in the page)
- await expect(page.locator('body')).toContainText('35');
+ // Verify total count is displayed (adjust expected value based on test fixtures)
+ const countText = page.locator('[data-testid="total-count"], .total-count');
+ await expect(countText).toBeVisible();🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@tests/e2e/torrents.spec.ts` around lines 14 - 21, The test "shows 35 torrents
total" is brittle because it asserts a hardcoded "35" in the page body; update
the test to assert the torrent count more resiliently by targeting the specific
count element (replace the generic body locator with a dedicated selector such
as a data-testid like "torrents-count" or a CSS selector for the count element)
and assert its text matches a numeric pattern (e.g., regex for one or more
digits) after waiting for the loading indicator ('.loading-indicator') to
disappear; alternatively, if the app doesn’t expose a stable selector yet, add a
data-testid="torrents-count" to the count component in the UI and use that in
the test instead of expecting "35".
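The "any number" option can be made concrete with a tiny parsing helper used in the assertion — extractCount below is illustrative only, not part of the existing suite:

```typescript
// Pull the first run of digits out of an element's text, so the test can
// assert "some count is shown" instead of pinning the fixture value 35.
// Returns null when no number is present (e.g. still loading or empty).
function extractCount(text: string): number | null {
  const m = text.match(/(\d+)/);
  return m ? Number(m[1]) : null;
}
```

In the spec this would look like `expect(extractCount(await countEl.innerText())).not.toBeNull()`, keeping the test valid when fixtures change.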
test('sorting by title changes data order', async ({ page }) => {
  await page.goto(`${BASE}/dioxus/torrents`);
  await expect(page.locator('.loading-indicator')).toHaveCount(0);

  // Click the Title sort header button
  const titleSort = page.locator('.header button.link', { hasText: 'Title' });
  if (await titleSort.count() > 0) {
    await titleSort.click();
    await page.waitForTimeout(500);
    // Click again to reverse sort
    await titleSort.click();
    await page.waitForTimeout(500);
    await expect(page.locator('.error')).toHaveCount(0);
  }
});
🧹 Nitpick | 🔵 Trivial
Avoid fixed waitForTimeout - use condition-based waits instead.
Lines 85 and 88 use page.waitForTimeout(500) which is flaky and slows tests unnecessarily. Prefer waiting for observable state changes.
♻️ Proposed fix to use condition-based waits
if (await titleSort.count() > 0) {
await titleSort.click();
- await page.waitForTimeout(500);
+ // Wait for sort to be applied (URL change or data reload)
+ await expect(page).toHaveURL(/sort=|asc=/);
// Click again to reverse sort
await titleSort.click();
- await page.waitForTimeout(500);
+ await expect(page).toHaveURL(/sort=|asc=/);
await expect(page.locator('.error')).toHaveCount(0);
}Alternatively, wait for the loading indicator to appear and disappear, or for row content to change.
📝 Committable suggestion
test('sorting by title changes data order', async ({ page }) => {
  await page.goto(`${BASE}/dioxus/torrents`);
  await expect(page.locator('.loading-indicator')).toHaveCount(0);
  // Click the Title sort header button
  const titleSort = page.locator('.header button.link', { hasText: 'Title' });
  if (await titleSort.count() > 0) {
    await titleSort.click();
    // Wait for sort to be applied (URL change or data reload)
    await expect(page).toHaveURL(/sort=|asc=/);
    // Click again to reverse sort
    await titleSort.click();
    await expect(page).toHaveURL(/sort=|asc=/);
    await expect(page.locator('.error')).toHaveCount(0);
  }
});
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@tests/e2e/torrents.spec.ts` around lines 77 - 91, The test 'sorting by title
changes data order' uses fixed page.waitForTimeout(500) after clicking the Title
header; replace those sleeps with condition-based waits: after calling
titleSort.click() (the locator defined as page.locator('.header button.link', {
hasText: 'Title' })), wait for an observable state change such as the loading
indicator to appear and then disappear (page.locator('.loading-indicator')
appear/disappear) or wait for the first row text to change by capturing
page.locator('.torrent-row').first().innerText() before the click and using
expect(...).not.toBe(...) / page.waitForFunction to detect the new value after
the click, then repeat for the second click; finally keep the existing
expect(page.locator('.error')).toHaveCount(0).
mlm_web_dioxus/src/list.rs

let items: Vec<ListItem> = db
    .r_transaction()
    .map_err(|e| ServerFnError::new(e.to_string()))?
    .scan()
    .secondary::<ListItem>(ListItemKey::created_at)
    .map_err(|e| ServerFnError::new(e.to_string()))?
    .all()
    .map_err(|e| ServerFnError::new(e.to_string()))?
    .collect::<Result<Vec<_>, _>>()
    .map_err(|e| ServerFnError::new(e.to_string()))?
    .into_iter()
    .filter(|item| item.list_id == list.id)
    .filter(|item| {
        if let Some(ref show) = show {
            match show.as_str() {
                "any" => item_wants_audio(item) || item_wants_ebook(item),
                "audio" => item_wants_audio(item),
                "ebook" => item_wants_ebook(item),
                _ => true,
            }
        } else {
            true
        }
    })
    .collect();
🧩 Analysis chain
🏁 Script executed:
#!/bin/bash
# Check if ListItem has a secondary index on list_id that could be used for filtering
ast-grep --pattern $'impl ListItem {
  $$$
}'
# Also check the ListItemKey definition
rg -nA10 'ListItemKey' --type rust

🏁 Script executed:
# Find the v18 module definition
fd 'v18' --type f --extension rs | head -20

🏁 Script executed:
# Search for ListItemKey definition and enum variants
rg -n 'enum ListItemKey|struct ListItemKey' --type rust -A 20

🏁 Script executed:
# Look for secondary index definitions or patterns in the codebase
rg -n 'secondary.*ListItem' --type rust -B 5 -A 5

🏁 Script executed:
# Read the v18.rs file to find ListItemKey definition
cat -n mlm_db/src/v18.rs | head -100

🏁 Script executed:
# Search for ListItemKey in v18.rs
rg -n 'ListItemKey' mlm_db/src/v18.rs -A 10

🏁 Script executed:
# Also check if there's a list_id index available
rg -n 'list_id' mlm_db/src/v18.rs | head -20

🏁 Script executed:
# Find ListItem struct definition with all its fields and keys
rg -n 'struct ListItem' mlm_db/src/v18.rs -A 50 | head -80

🏁 Script executed:
# Look for the ListItemKey enum definition
rg -n 'enum ListItemKey' mlm_db/src/v18.rs -A 20

🏁 Script executed:
# Search for ListItemKey export - check what keys are available
rg -n 'pub enum ListItemKey' mlm_db/src/v18.rs -A 10

🏁 Script executed:
# Check if there are any uses of ListItemKey with list_id in the codebase
rg -n 'ListItemKey::list_id' --type rust

🏁 Script executed:
# Check the native_db crate to understand how secondary keys work
rg -n 'native_db' mlm_db/src/lib.rs | head -10

🏁 Script executed:
# Look for the actual usage pattern with .secondary in the codebase
rg -n '\.secondary:<' --type rust -B 2 -A 2 | head -40

🏁 Script executed:
# Check if ListItem has the native_db macro with export_keys
rg -n 'pub struct ListItem' mlm_db/src/v18.rs -B 5 -A 1

🏁 Script executed:
# Check how native_db secondary keys are actually queried in the codebase
rg -n '\.secondary:' --type rust -A 5 | head -60

🏁 Script executed:
# Look for examples of filtering by secondary_key value
rg -n 'list_id' mlm_web_dioxus/src/list.rs -B 5 -A 5

🏁 Script executed:
# Check native_db documentation or usage patterns
rg -n 'get\(\)' mlm_db/src --type rust -A 3 | head -40

🏁 Script executed:
# Check how secondary keys with values are queried in native_db patterns
rg -n 'ListItemKey::' --type rust

🏁 Script executed:
# Look for the full implementation context in list.rs to see the entire function
sed -n '64,120p' mlm_web_dioxus/src/list.rs

🏁 Script executed:
# Check the native_db macros and if secondary keys generate enum variants automatically
# Look at how TorrentKey is structured
rg -n 'TorrentKey' mlm_db/src/v18.rs | head -20

🏁 Script executed:
# Check if there's a pattern of using secondary keys with values in the codebase
rg -n 'secondary:.*Some\(' --type rust -B 2 -A 2 | head -30

🏁 Script executed:
# Check if ListItemKey supports list_id variant like TorrentKey supports mam_id
rg -n 'ListItemKey' mlm_db/src --type rust -A 3 | head -40

🏁 Script executed:
# Check how native_db generates key enums from #[secondary_key] attributes
# Look for documentation or patterns in the v18.rs file
sed -n '379,402p' mlm_db/src/v18.rs
Use list_id secondary key to filter at the database level instead of scanning all items.
The current implementation scans all ListItem records using the created_at secondary index and then filters by list_id in memory. Since ListItem has a list_id secondary key available (defined in the struct), use it directly in the query:
.secondary::<ListItem>(ListItemKey::list_id, Some(list.id))This avoids the full table scan and applies the filter at the database level, improving performance as the number of list items grows.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@mlm_web_dioxus/src/list.rs` around lines 86 - 110, The code is scanning all
ListItem records using the created_at secondary and then filtering by list.id in
memory; replace the DB-level scan with a secondary using the list_id key so the
DB filters by list first. In the chain starting from
db.r_transaction()...scan(), call .secondary::<ListItem>(ListItemKey::list_id,
Some(list.id)) instead of .secondary::<ListItem>(ListItemKey::created_at),
remove the in-memory .filter(|item| item.list_id == list.id), and keep the
remaining show-based filters (item_wants_audio, item_wants_ebook) and collection
logic unchanged so only matching list items are retrieved from the database.
Torrent detail tests
|
@coderabbitai review |
✅ Actions performedReview triggered.
|
Move sanitized HTML into server-side DTO mapping so the web build no longer needs mlm_parse in client code. Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>
Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>
|
@coderabbitai review |
✅ Actions performedReview triggered.
|
Summary by CodeRabbit
New Features
Bug Fixes
Tests
Chores
Stack