refactor: cleanup DashSpvClient to have single run() entry point #457

xdustinface merged 1 commit into v0.42-dev

Conversation
📝 Walkthrough

This PR consolidates the client initialization API by removing the standalone `start()` entry point in favor of a single `run()` method.
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~45 minutes
🚥 Pre-merge checks: ✅ 3 passed
ba63b6a → 1d325f3 (force-pushed)

This PR has merge conflicts with the base branch. Please rebase or merge the base branch into your branch to resolve them.

1d325f3 → de756b8 (force-pushed)

de756b8 → 4f782c0 (force-pushed)
Actionable comments posted: 2
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (7)
dash-spv-ffi/scripts/generate_ffi_docs.py (1)
145-146: ⚠️ Potential issue | 🟡 Minor

`dash_spv_ffi_client_run` will be miscategorized as "Utility Functions" instead of "Client Management".

The condition checks for `client_start` but not `client_run`. Since this PR removed `client_start` and replaced it with `client_run`, the new function won't match this filter.

Proposed fix

```diff
- if 'client_new' in name or 'client_start' in name or 'client_stop' in name or 'client_destroy' in name:
+ if 'client_new' in name or 'client_run' in name or 'client_stop' in name or 'client_destroy' in name:
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@dash-spv-ffi/scripts/generate_ffi_docs.py` around lines 145-146: the categorization check currently looks for 'client_new', 'client_start', 'client_stop', and 'client_destroy' and will miss the renamed function dash_spv_ffi_client_run; update the condition that appends to categories['Client Management'] to include 'client_run' (or better, match on the 'client_' prefix with the set of management verbs) so that functions like dash_spv_ffi_client_run are classified under "Client Management".

dash-spv-ffi/tests/unit/test_client_lifecycle.rs (1)
67-67: ⚠️ Potential issue | 🟡 Minor

Stale comment still references `client_start`.

The `#[ignore]` comments on lines 67, 111, and 185 still say "client_start hangs without peers" but the function is now `dash_spv_ffi_client_run`. Update for consistency.

Suggested fix

```diff
- #[ignore] // Requires network - client_start hangs without peers
+ #[ignore] // Requires network - client_run hangs without peers
```

Apply similarly on lines 111 and 185.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@dash-spv-ffi/tests/unit/test_client_lifecycle.rs` at line 67: update the stale ignore comment text so it references the current function name: replace "client_start hangs without peers" with "dash_spv_ffi_client_run hangs without peers" in the three #[ignore] attributes in this test file, and apply the same replacement for the other two occurrences (the comments near the other #[ignore] attributes).

dash-spv/tests/wallet_integration_test.rs (1)
22-22: ⚠️ Potential issue | 🟠 Major

Fix TempDir lifetime issue that may cause flaky tests.

`TempDir::new().unwrap().path()` creates a temporary that is dropped at the end of the statement, deleting the directory before `DiskStorageManager::new()` tries to create directories and open storage files. This can cause test failures depending on timing.

Hold the `TempDir` in a variable to keep it alive:

Suggested fix

```diff
+ let temp_dir = TempDir::new().unwrap();
  let config = ClientConfig::testnet()
      .without_filters()
-     .with_storage_path(TempDir::new().unwrap().path())
+     .with_storage_path(temp_dir.path())
      .without_masternodes();
```

Then return or hold `temp_dir` alongside the client to keep it alive for the entire test duration.
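The lifetime hazard flagged above can be reproduced without `tempfile` using a minimal stand-in type. The `TempDirGuard` type below is hypothetical, written only to illustrate temporary-drop timing; it is not part of the crate under review:

```rust
use std::env;
use std::fs;
use std::path::PathBuf;

/// Hypothetical stand-in for tempfile::TempDir: owns a directory
/// and deletes it when dropped.
struct TempDirGuard(PathBuf);

impl TempDirGuard {
    fn new(name: &str) -> Self {
        let p = env::temp_dir().join(name);
        fs::create_dir_all(&p).unwrap();
        TempDirGuard(p)
    }
    fn path(&self) -> PathBuf {
        self.0.clone()
    }
}

impl Drop for TempDirGuard {
    fn drop(&mut self) {
        let _ = fs::remove_dir_all(&self.0);
    }
}

/// Returns (exists_after_inline_use, exists_after_bound_use).
fn demo() -> (bool, bool) {
    // Bug pattern: the guard is a temporary, dropped at the end of this
    // statement, so the directory is already gone afterwards.
    let inline = TempDirGuard::new("spv_demo_inline").path();
    let inline_exists = inline.exists();

    // Fix pattern: bind the guard so it lives for the whole scope.
    let dir = TempDirGuard::new("spv_demo_bound");
    let bound = dir.path();
    let bound_exists = bound.exists();

    (inline_exists, bound_exists)
}

fn main() {
    assert_eq!(demo(), (false, true));
    println!("inline temporary is dropped immediately; bound guard lives on");
}
```

The same drop-at-end-of-statement rule is what deletes the real test's storage directory before `DiskStorageManager::new()` runs.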
🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In `@dash-spv/tests/wallet_integration_test.rs` at line 22: the TempDir created inline is dropped immediately causing its directory to be removed before DiskStorageManager::new() uses it; fix by creating and binding the TempDir to a variable (e.g., let temp_dir = TempDir::new().unwrap();) and pass temp_dir.path() into with_storage_path, then keep temp_dir alive for the test duration (either return it from the helper or store it alongside the client variable) so DiskStorageManager::new()/with_storage_path sees a valid directory throughout the test.

dash-spv-ffi/src/client.rs (4)
415-436: ⚠️ Potential issue | 🟡 Minor

`dash_spv_ffi_client_clear_storage` bypasses task cancellation

The function calls `client.inner.stop()` directly without cancelling the shutdown token or aborting monitoring tasks. Active monitoring tasks will only exit when their channels close naturally (which happens as a side effect of `inner.stop()`). More critically, once the `is_running` guard proposed above is added, this function must also reset that flag; otherwise a subsequent `run()` call after `clear_storage` will be rejected.

Consider routing through `stop_client_internal` (or an equivalent helper) to ensure consistent teardown:

🛡️ Proposed fix

```diff
 pub unsafe extern "C" fn dash_spv_ffi_client_clear_storage(client: *mut FFIDashSpvClient) -> i32 {
     null_check!(client);
     let client = &mut (*client);
+
+    // Ensure all tasks are stopped before wiping storage
+    if let Err(e) = stop_client_internal(client) {
+        tracing::warn!("Failed to stop client before clearing storage: {e}");
+    }
     let result = client.runtime.block_on(async {
-        if let Err(e) = client.inner.stop().await {
-            tracing::warn!("Failed to stop client before clearing storage: {}", e);
-        }
         client.inner.clear_storage().await
     });
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@dash-spv-ffi/src/client.rs` around lines 415 - 436, dash_spv_ffi_client_clear_storage currently calls client.inner.stop() directly which bypasses proper cancellation and does not reset the is_running guard; route teardown through the existing stop_client_internal (or create an equivalent helper) so it cancels the shutdown token, aborts/joins monitoring tasks, performs inner.stop(), and resets the is_running flag so a subsequent run() is allowed; update dash_spv_ffi_client_clear_storage to invoke that helper (and propagate errors via set_last_error/FFIErrorCode as before) instead of calling client.inner.stop() directly.
309-343: ⚠️ Potential issue | 🟠 Major

`.unwrap()` on mutex locks inside `unsafe extern "C"` is undefined behaviour on panic

A poisoned mutex (caused by a prior panic while the lock was held) will cause `unwrap()` to panic. Panicking across an FFI boundary is UB in Rust. The same applies to `cancel_active_tasks()` (line 196) and `stop_client_internal()` (line 214).

Replace each `.lock().unwrap()` in FFI function bodies with an explicit error return:

🛡️ Proposed fix (example for `dash_spv_ffi_client_run`)

```diff
- let mut tasks = client.active_tasks.lock().unwrap();
+ let mut tasks = match client.active_tasks.lock() {
+     Ok(guard) => guard,
+     Err(e) => {
+         set_last_error(&format!("Internal mutex error: {e}"));
+         return FFIErrorCode::InternalError as i32;
+     }
+ };

- if client.sync_event_callbacks.lock().unwrap().is_some() {
+ if matches!(client.sync_event_callbacks.lock(), Ok(g) if g.is_some()) {
```

Apply the same pattern to all `lock().unwrap()` calls in `extern "C"` function bodies and in `cancel_active_tasks`/`stop_client_internal`.

As per coding guidelines: "Avoid `unwrap()` and `expect()` in library code" and "Exercise careful memory safety handling at FFI boundaries."

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@dash-spv-ffi/src/client.rs` around lines 309 - 343, The FFI functions (e.g., dash_spv_ffi_client_run and the helper functions cancel_active_tasks and stop_client_internal) currently call .lock().unwrap() on Mutexes (client.active_tasks, client.sync_event_callbacks, client.network_event_callbacks, client.progress_callback, client.wallet_event_callbacks) which can panic if the mutex is poisoned and thus cause undefined behavior across the FFI boundary; replace each .lock().unwrap() with explicit locking that handles PoisonError (e.g., match client.X.lock() { Ok(guard) => ..., Err(e) => return an appropriate error code/result to the caller } ), returning a failure code or error value from the extern "C" function instead of panicking, and apply the same pattern for cancel_active_tasks() and stop_client_internal() so no unwrap() can panic in any FFI-exposed path.
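The poisoning scenario is easy to reproduce with plain `std::sync::Mutex`. In the sketch below the helper name `locked_len` and the error code are made up for illustration; it shows an error-code return at the exact point where `.lock().unwrap()` would panic:

```rust
use std::sync::{Arc, Mutex};
use std::thread;

const FFI_INTERNAL_ERROR: i32 = -1;

/// FFI-safe lock access: map a poisoned mutex to an error code
/// instead of panicking across the C boundary.
fn locked_len(m: &Mutex<Vec<u32>>) -> Result<usize, i32> {
    match m.lock() {
        Ok(guard) => Ok(guard.len()),
        Err(_) => Err(FFI_INTERNAL_ERROR),
    }
}

fn poison_then_read() -> Result<usize, i32> {
    let m = Arc::new(Mutex::new(vec![1, 2, 3]));
    let m2 = Arc::clone(&m);
    // Poison the mutex: a thread panics while holding the guard.
    let _ = thread::spawn(move || {
        let _guard = m2.lock().unwrap();
        panic!("simulated panic while holding the lock");
    })
    .join();
    // `.lock().unwrap()` here would now panic; we return an error instead.
    locked_len(&m)
}

fn main() {
    assert_eq!(poison_then_read(), Err(FFI_INTERNAL_ERROR));
    println!("poisoned lock handled without panicking");
}
```

In the real FFI functions the `Err` arm would call `set_last_error` and return the error code, as the proposed diff above does.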
287-371: ⚠️ Potential issue | 🔴 Critical

`dash_spv_ffi_client_run` has no guard against double invocation — duplicate tasks will be spawned

If called while already running, the function will:

- Subscribe to all event channels again (creating new receivers)
- Append another set of monitoring tasks to `active_tasks`
- Spawn a second `spv_client.run(shutdown_token)` concurrent run loop

This results in duplicate event callbacks being fired and two concurrent sync loops operating on the same inner client, which can cause data races and undefined behaviour.

Add an `is_running` atomic flag to prevent re-entry:

🔒 Proposed fix

Add to `FFIDashSpvClient`:

```diff
 pub struct FFIDashSpvClient {
     pub(crate) inner: InnerClient,
     pub(crate) runtime: Arc<Runtime>,
     active_tasks: Mutex<Vec<JoinHandle<()>>>,
     shutdown_token: CancellationToken,
+    is_running: std::sync::atomic::AtomicBool,
     sync_event_callbacks: Arc<Mutex<Option<FFISyncEventCallbacks>>>,
     ...
 }
```

Initialise in `dash_spv_ffi_client_new`:

```diff
 let ffi_client = FFIDashSpvClient {
     inner: client,
     runtime,
     active_tasks: Mutex::new(Vec::new()),
     shutdown_token: CancellationToken::new(),
+    is_running: std::sync::atomic::AtomicBool::new(false),
     ...
 };
```

Guard at the start of `dash_spv_ffi_client_run`:

```diff
+ use std::sync::atomic::Ordering;
+ if client.is_running
+     .compare_exchange(false, true, Ordering::AcqRel, Ordering::Acquire)
+     .is_err()
+ {
+     set_last_error("Client is already running");
+     return FFIErrorCode::AlreadyRunning as i32; // or suitable variant
+ }
```

Clear in `stop_client_internal`:

```diff
+ use std::sync::atomic::Ordering;
+ client.is_running.store(false, Ordering::Release);
  client.shutdown_token = CancellationToken::new();
  result
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@dash-spv-ffi/src/client.rs` around lines 287 - 371, dash_spv_ffi_client_run allows re-entry and can spawn duplicate tasks; add an AtomicBool flag is_running on FFIDashSpvClient, initialize it false in dash_spv_ffi_client_new, then at the top of dash_spv_ffi_client_run perform an atomic compare-and-swap (or swap with Ordering::SeqCst) to set is_running->true and return an error code if it was already true, proceed only if you successfully set it; ensure stop_client_internal (and any early-error paths from run) atomically clear is_running->false so subsequent runs are allowed. Use the FFIDashSpvClient type, dash_spv_ffi_client_run, dash_spv_ffi_client_new and stop_client_internal identifiers when applying the changes.
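The core of the proposed guard can be exercised in isolation. The struct and error codes below are illustrative stand-ins, not the actual FFI types; the point is the compare-exchange re-entry check:

```rust
use std::sync::atomic::{AtomicBool, Ordering};

const OK: i32 = 0;
const ALREADY_RUNNING: i32 = -2;

/// Illustrative stand-in for the proposed is_running flag on FFIDashSpvClient.
struct RunGuard {
    is_running: AtomicBool,
}

impl RunGuard {
    fn new() -> Self {
        RunGuard { is_running: AtomicBool::new(false) }
    }

    /// What run() would do first: only the caller that flips
    /// false -> true is allowed to proceed.
    fn try_start(&self) -> i32 {
        match self
            .is_running
            .compare_exchange(false, true, Ordering::AcqRel, Ordering::Acquire)
        {
            Ok(_) => OK,
            Err(_) => ALREADY_RUNNING,
        }
    }

    /// What stop_client_internal() would do: permit a later run().
    fn stop(&self) {
        self.is_running.store(false, Ordering::Release);
    }
}

fn main() {
    let guard = RunGuard::new();
    assert_eq!(guard.try_start(), OK); // first run succeeds
    assert_eq!(guard.try_start(), ALREADY_RUNNING); // re-entry rejected
    guard.stop();
    assert_eq!(guard.try_start(), OK); // can run again after stop
}
```

Using `compare_exchange` rather than a separate load-then-store makes the check atomic, so two concurrent `run()` calls cannot both pass the guard.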
484-501: ⚠️ Potential issue | 🟠 Major

Align `dash_spv_ffi_client_destroy` shutdown order with `stop_client_internal` for consistency and efficiency.

- `stop_client_internal` (line 213): cancel_token → abort_tasks → inner.stop()
- `dash_spv_ffi_client_destroy` (line 484): cancel_token → inner.stop() → abort_tasks

The background `run()` task (spawned at client.rs:356) listens to token cancellation and calls `self.stop()` when exiting (sync_coordinator.rs:59). In `destroy`, this causes `inner.stop()` to be called twice: once by the exiting background task and once by the destroy function. While `stop()` is idempotent (checks running flag before proceeding), the double-call is inefficient. Reorder `destroy` to abort tasks first, preventing the background task from calling `stop()`, consistent with `stop_client_internal`.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@dash-spv-ffi/src/client.rs` around lines 484 - 501, dash_spv_ffi_client_destroy currently cancels the shutdown token, then calls client.inner.stop() and only afterward aborts active tasks, causing the background run() task to race and call stop() again; reorder the shutdown to match stop_client_internal by: call client.shutdown_token.cancel(), then immediately call client.cancel_active_tasks() to abort/wait active background tasks, and only after tasks are aborted call client.runtime.block_on(async { let _ = client.inner.stop().await; }); ensure you reference the same symbols (dash_spv_ffi_client_destroy, client.shutdown_token.cancel(), client.cancel_active_tasks(), client.inner.stop()) when making the change.
🧹 Nitpick comments (2)
dash-spv-ffi/FFI_API.md (1)
125-125: `dash_spv_ffi_client_run` is categorized under "Utility Functions" instead of "Client Management".

Since `run` is the primary entry point for starting and syncing the client, it logically belongs alongside `new`, `stop`, and `destroy` in the Client Management section. This would make the Client Management count 4 (matching the pre-refactor count). Since this doc is auto-generated, the fix would be in the generation script (dash-spv-ffi/scripts/generate_ffi_docs.py).

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@dash-spv-ffi/FFI_API.md` at line 125: the table entry for `dash_spv_ffi_client_run` is currently placed under "Utility Functions" but should be in the "Client Management" section; update the documentation generation logic in generate_ffi_docs.py to classify `dash_spv_ffi_client_run` alongside `dash_spv_ffi_client_new`, `dash_spv_ffi_client_stop`, and `dash_spv_ffi_client_destroy` so the Client Management group contains four entries, and regenerate the markdown.

dash-spv-ffi/src/client.rs (1)
311-352: Unnecessary `resubscribe()` — original receivers can be moved directly

`sync_event_rx`, `network_event_rx`, and `wallet_event_rx` are each used exactly once (only inside the conditional block) and are dropped at the end of the function. Calling `.resubscribe()` creates a new subscription starting at the current instant while the original receiver remains alive until the function returns, introducing a tiny window where events could be buffered in the original and silently dropped. Moving the original receiver into the spawn closure is both simpler and semantically correct.

♻️ Proposed refactor (sync event example)

```diff
- if client.sync_event_callbacks.lock().unwrap().is_some() {
+ if matches!(client.sync_event_callbacks.lock(), Ok(g) if g.is_some()) {
      tasks.push(spawn_broadcast_monitor(
          "Sync event",
-         sync_event_rx.resubscribe(),
+         sync_event_rx,
          ...
      ));
  }
```

Apply the same pattern to `network_event_rx` and `wallet_event_rx`.

Similarly, the `let mut receiver = receiver;` rebinding on line 32 of `spawn_broadcast_monitor` is unnecessary — the parameter itself can be declared `mut`:

```diff
 fn spawn_broadcast_monitor<E, C, F>(
     name: &'static str,
-    receiver: broadcast::Receiver<E>,
+    mut receiver: broadcast::Receiver<E>,
     ...
 ) -> JoinHandle<()> {
-    let mut receiver = receiver;
     rt.spawn(async move {
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@dash-spv-ffi/src/client.rs` around lines 311 - 352, Replace the unnecessary .resubscribe() calls by moving the original receivers into the spawned monitors: call spawn_broadcast_monitor with sync_event_rx, network_event_rx, and wallet_event_rx (not resubscribe()), and keep spawn_progress_monitor using progress_rx.clone() as before; this prevents the original receiver from lingering and losing events. Also remove the redundant rebinding in spawn_broadcast_monitor by making the receiver parameter mut (modify the spawn_broadcast_monitor signature to accept a mut receiver) and delete the `let mut receiver = receiver;` line so the function uses the parameter directly.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@dash-spv/README.md`:
- Around line 61-64: The README example uses DashSpvClient::new(config).await?
but the actual constructor requires additional arguments; update the example to
call DashSpvClient::new(config, network_manager, storage_manager, wallet).await?
(or, if you prefer the simplified snippet, add a short comment above the example
saying the real signature is DashSpvClient::new(config, network_manager,
storage_manager, wallet) and the example omits dependency setup for brevity) to
match the real DashSpvClient::new signature and avoid confusion.
In `@dash-spv/tests/peer_test.rs`:
- Around line 63-64: The teardown currently calls cancel.cancel();
handle.await.unwrap().unwrap(); which lets any SpvError returned by run() turn
cancellation races into test failures; update each occurrence (lines with
handle.await.unwrap().unwrap()) to await the task, unwrap the JoinHandle result,
then match the inner Result from run() and treat shutdown/cancellation-related
SpvError variants (e.g., SpvError::Shutdown or the network-disconnect variant
your code uses) as non-fatal (ignore/ok) while still propagating other errors as
panics. Locate the uses by the variables cancel and handle and the run()
function/SpvError type and replace the double unwrap with an explicit match that
only fails the test on true unexpected errors.
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
dash-spv-ffi/FFI_API.md (1)
960-967: ⚠️ Potential issue | 🟡 Minor

Stale callback example references non-existent `dash_spv_ffi_client_set_event_callbacks`.

The Event Callbacks example on line 967 calls `dash_spv_ffi_client_set_event_callbacks` with an `FFIEventCallbacks` struct, but the current API uses separate setters (`set_sync_event_callbacks`, `set_network_event_callbacks`, `set_wallet_event_callbacks`) with their respective types. This predates the PR but is user-facing documentation that could confuse consumers.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@dash-spv-ffi/FFI_API.md` around lines 960 - 967, The example uses a stale helper dash_spv_ffi_client_set_event_callbacks and FFIEventCallbacks; update the example to call the current per-domain setters instead: replace the single FFIEventCallbacks usage with the appropriate structs (e.g. FFIClientSyncEventCallbacks, FFIClientNetworkEventCallbacks, FFIClientWalletEventCallbacks) and call the corresponding APIs set_sync_event_callbacks, set_network_event_callbacks, and set_wallet_event_callbacks on the client, wiring the on_block/on_transaction handlers into the correct struct fields and passing the client and struct to each setter.
🧹 Nitpick comments (5)
dash-spv-ffi/src/client.rs (2)
354-364: Background `run()` errors are only logged, not surfaced to the caller.

Errors from `spv_client.run(shutdown_token)` on line 359 are logged via `tracing::error!` but never propagated to the FFI caller. Since `dash_spv_ffi_client_run` returns immediately (fire-and-forget), this is the expected design — but callers have no way to learn about fatal runtime errors asynchronously other than through event callbacks. Worth a brief doc note if not already covered.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@dash-spv-ffi/src/client.rs` around lines 354 - 364, The spawned sync task currently logs errors from spv_client.run(shutdown_token) but never surfaces them to the FFI caller; update the task started in client.runtime.spawn to propagate fatal run() errors via the existing FFI event/error path (e.g., send an Error/RuntimeFailure event through the client's event sender or a shared error channel) and ensure dash_spv_ffi_client_run's docs note that async runtime errors are reported via that event callback; locate the task by the closure that calls spv_client.run, use the client's event sender (or add one) to send a serialized error (including context) instead of only tracing::error!, and update any docs/comments for dash_spv_ffi_client_run to explain the async error reporting mechanism.
287-371: No guard against calling `run()` more than once without `stop()` in between.

If a caller invokes `dash_spv_ffi_client_run` twice, a second set of monitoring tasks and a second `spv_client.run(shutdown_token)` are spawned concurrently with the first, leading to duplicate event dispatching and unpredictable behavior. Consider adding a simple "running" check (e.g., an `AtomicBool`) and returning an error code if already running.

♻️ Sketch of a guard

Add an `is_running: AtomicBool` field to `FFIDashSpvClient`, then at the top of `dash_spv_ffi_client_run`:

```diff
+ use std::sync::atomic::{AtomicBool, Ordering};
+ // In FFIDashSpvClient struct:
+ // is_running: AtomicBool,
+ let client = &(*client);
+
+ if client.is_running.swap(true, Ordering::SeqCst) {
+     set_last_error("Client is already running");
+     return FFIErrorCode::InvalidState as i32;
+ }
```

Reset it in `stop_client_internal`.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@dash-spv-ffi/src/client.rs` around lines 287-371: add a running guard to prevent double runs: add an is_running: AtomicBool field to FFIDashSpvClient, then at the top of dash_spv_ffi_client_run check and set it atomically (e.g., compare_exchange(false, true)) and return a suitable FFI error code if it was already true; ensure you clear/reset is_running to false in stop_client_internal (and on any early failure paths) so the client can be started again. Use a consistent memory ordering (e.g., Ordering::SeqCst) and reference FFIDashSpvClient, dash_spv_ffi_client_run, stop_client_internal, and is_running in your changes.

dash-spv/src/client/sync_coordinator.rs (1)
25-26: Note: The previous learning about internal Ctrl-C handling is now outdated.

Based on learnings, "The DashSpvClient::run method in dash-spv internally handles Ctrl-C shutdown by spawning a task that listens for tokio::signal::ctrl_c()". This PR intentionally moves Ctrl-C handling out of the client (as seen in dash-spv/src/main.rs lines 327-336 where it's now the caller's responsibility). The learning should be updated to reflect that callers must cancel the token themselves (e.g., on Ctrl-C).
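The caller-owned shutdown contract can be sketched with std threads standing in for tokio tasks and an `AtomicBool` standing in for `CancellationToken` (all names below are illustrative, not the crate's API): `run()` only observes the token, and the caller cancels it from its own signal handler.

```rust
use std::sync::atomic::{AtomicBool, Ordering};
use std::sync::Arc;
use std::thread;
use std::time::Duration;

/// The client loop: observes the token but never installs its own
/// signal handler; it exits only when the caller cancels.
fn run(token: &AtomicBool) -> u32 {
    let mut iterations = 0;
    while !token.load(Ordering::Acquire) {
        iterations += 1;
        thread::sleep(Duration::from_millis(1));
    }
    iterations
}

fn caller_drives_shutdown() -> u32 {
    let token = Arc::new(AtomicBool::new(false));
    let t = Arc::clone(&token);
    // The caller's "Ctrl-C handler": cancels the token after a delay,
    // the way main.rs now cancels on receiving the signal.
    let signaller = thread::spawn(move || {
        thread::sleep(Duration::from_millis(20));
        t.store(true, Ordering::Release);
    });
    let iterations = run(&token); // returns only after cancellation
    signaller.join().unwrap();
    iterations
}

fn main() {
    assert!(caller_drives_shutdown() > 0);
    println!("run() exited via caller-side cancellation");
}
```

This mirrors the post-refactor contract: the client never decides when to shut down, only how to wind down once told.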
🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In `@dash-spv/src/client/sync_coordinator.rs` around lines 25-26: update the documentation/comments for DashSpvClient::run (the async run(&self, token: CancellationToken) method in sync_coordinator.rs) to remove any mention of internal Ctrl-C handling and explicitly state that run does not capture or handle Ctrl-C; callers are responsible for cancelling the provided CancellationToken (for example by listening for tokio::signal::ctrl_c() in main). Also verify any README or inline docs referencing this behavior are changed to reflect that shutdown must be initiated by the caller.

dash-spv/src/main.rs (1)
330-331: Consider logging the Ctrl-C registration error instead of silently discarding it.

If `tokio::signal::ctrl_c()` fails (e.g., signal registration unavailable), `.ok()` silently drops the error. The task then calls `ctrl_c_token.cancel()` after the select, which is fine, but the actual OS signal will never be received. A brief warning log would aid diagnostics.

🔧 Proposed improvement

```diff
- result.ok();
- tracing::debug!("Shutdown signal received");
+ if let Err(e) = result {
+     tracing::warn!("Failed to listen for Ctrl-C signal: {}", e);
+ } else {
+     tracing::debug!("Shutdown signal received");
+ }
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@dash-spv/src/main.rs` around lines 330-331: replace the silent discard of the ctrl-c registration error: where the select arm handles result = tokio::signal::ctrl_c() and currently calls result.ok(), change it to explicitly handle the Err case by logging a warning including the error details before proceeding to call ctrl_c_token.cancel(); keep the existing flow for the Ok case but ensure any Err is surfaced in the logs so failures to register the OS signal are visible.

dash-spv/tests/wallet_integration_test.rs (1)
65-65: Use `.expect()` for both unwraps to get actionable failure messages.

🔧 Proposed improvement

```diff
- handle.await.unwrap().unwrap();
+ handle.await.expect("run task panicked").expect("client.run returned an error");
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@dash-spv/tests/wallet_integration_test.rs` at line 65, Replace the double unwrap on the task handle to provide actionable failure messages: instead of calling handle.await.unwrap().unwrap(), call handle.await.expect("task join panicked in test").expect("wallet operation returned error") so the outer .await join error and the inner result both yield clear messages; locate the expression using handle.await.unwrap().unwrap() in the test and swap each unwrap for an expect with a short, specific message.
does not capture or handle Ctrl-C; callers are responsible for cancelling the
provided CancellationToken (for example by listening for tokio::signal::ctrl_c()
in main). Also verify any README or inline docs referencing this behavior are
changed to reflect that shutdown must be initiated by the caller.
In `@dash-spv/src/main.rs`:
- Around line 330-331: Replace the silent discard of the ctrl-c registration
error: where the select arm handles result = tokio::signal::ctrl_c() and
currently calls result.ok(), change it to explicitly handle the Err case by
logging a warning including the error details (e.g., via the project logger)
before proceeding to call ctrl_c_token.cancel(); keep the existing flow for the
Ok case but ensure any Err is surfaced in the logs so failures to register the
OS signal are visible.
In `@dash-spv/tests/wallet_integration_test.rs`:
- Line 65: Replace the double unwrap on the task handle to provide actionable
failure messages: instead of calling handle.await.unwrap().unwrap(), call
handle.await.expect("task join panicked in test").expect("wallet operation
returned error") so the outer .await join error and the inner result both yield
clear messages; locate the expression using handle.await.unwrap().unwrap() in
the test and swap each unwrap for an expect with a short, specific message.
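The error-propagation fix suggested above for `dash-spv-ffi/src/client.rs` amounts to sending fatal `run()` errors through a channel the caller can drain, instead of only tracing them inside the spawned task. A minimal std-only sketch, using a hypothetical `run_client()` that always fails so the error path is exercised:

```rust
use std::sync::mpsc;
use std::thread;

// Hypothetical stand-in for the SPV client's run(): fails immediately
// so the error-forwarding path is visible.
fn run_client() -> Result<(), String> {
    Err("handshake failed".to_string())
}

// Forward a fatal run() error through a channel that the caller
// (e.g. an FFI error-event callback) can drain, rather than only logging.
fn spawn_run(err_tx: mpsc::Sender<String>) -> thread::JoinHandle<()> {
    thread::spawn(move || {
        if let Err(e) = run_client() {
            // Ignore a disconnected receiver; there is nothing else to do.
            let _ = err_tx.send(format!("client run failed: {}", e));
        }
    })
}
```

In the real FFI layer the receiving side would be the existing event callback mechanism; the sketch only shows the shape of the handoff.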
4f782c0 to dfb0355 Compare

This PR has merge conflicts with the base branch. Please rebase or merge the base branch into your branch to resolve them.
dfb0355 to b7eda1c Compare
Combine `start()` + `monitor_network()` + `stop()` into a single `run(token)` entry point. Callers no longer need to call `start()` separately since `run()` handles the full lifecycle and returns after the token is cancelled.

- Make `start()` an internal function
- Remove `monitor_network()`; its loop is now inside `run()`
- Remove `dash_spv_ffi_client_start` so that `dash_spv_ffi_client_run` is the only FFI entry point
- Move ctrl-c handling out of the client
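The shutdown contract this commit establishes (`run()` only observes the token; the caller cancels it, e.g. from a Ctrl-C handler in `main`) can be sketched with std primitives standing in for tokio's `CancellationToken`:

```rust
use std::sync::atomic::{AtomicBool, Ordering};
use std::sync::Arc;
use std::thread;
use std::time::Duration;

// Std-only analogue of run(token): the worker never installs its own
// signal handler, it only polls the token the caller handed it.
fn run(token: Arc<AtomicBool>) -> u32 {
    let mut iterations = 0;
    while !token.load(Ordering::SeqCst) {
        iterations += 1;
        thread::sleep(Duration::from_millis(1));
    }
    iterations // returns once the caller cancels
}

// Caller side: own the token, spawn the worker, cancel, then join.
fn run_until_cancelled() -> u32 {
    let token = Arc::new(AtomicBool::new(false));
    let worker = {
        let token = Arc::clone(&token);
        thread::spawn(move || run(token))
    };
    thread::sleep(Duration::from_millis(10));
    token.store(true, Ordering::SeqCst); // caller-side "token.cancel()"
    worker.join().expect("worker panicked")
}
```

The division of responsibility mirrors the PR: the client is a pure consumer of the cancellation signal, so embedders (CLI, FFI, tests) each decide what triggers shutdown.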
b7eda1c to 64e90d1 Compare
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (2)
dash-spv-ffi/scripts/generate_ffi_docs.py (1)
145-146: ⚠️ Potential issue | 🟡 Minor

`dash_spv_ffi_client_run` won't be categorized under "Client Management". The categorization on line 145 checks for `'client_start'`, but the function was renamed to `dash_spv_ffi_client_run`. This means `dash_spv_ffi_client_run` will fall through to the "Utility Functions" catch-all instead of appearing under "Client Management" in the generated docs.

Proposed fix

```diff
- if 'client_new' in name or 'client_start' in name or 'client_stop' in name or 'client_destroy' in name:
+ if 'client_new' in name or 'client_run' in name or 'client_stop' in name or 'client_destroy' in name:
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@dash-spv-ffi/scripts/generate_ffi_docs.py` around lines 145 - 146, The current categorization checks for 'client_start' but the function was renamed to dash_spv_ffi_client_run, so add that name to the Client Management check: update the conditional that appends to categories['Client Management'] (which currently checks for 'client_new', 'client_start', 'client_stop', 'client_destroy') to also match 'client_run' or 'dash_spv_ffi_client_run' so dash_spv_ffi_client_run is included under Client Management.dash-spv-ffi/src/client.rs (1)
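The docs-script fix keys each category on substrings of the exported symbol name. Restated in Rust for illustration (the real script is Python; the function name here is made up):

```rust
// Map an FFI symbol to its documentation category by substring match,
// including the renamed lifecycle entry point client_run.
fn category(name: &str) -> &'static str {
    let management = ["client_new", "client_run", "client_stop", "client_destroy"];
    if management.iter().any(|needle| name.contains(needle)) {
        "Client Management"
    } else {
        "Utility Functions"
    }
}
```

Because the match is by substring, renaming an exported function silently changes its bucket, which is exactly how `dash_spv_ffi_client_run` slipped into the catch-all.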
356-370: ⚠️ Potential issue | 🟠 Major

`dash_spv_ffi_client_run` can report success even when `run()` fails. At line 370 this function always returns success, while errors from `spv_client.run(...)` at lines 359-360 are only logged in a spawned task. This also allows repeated `run()` calls to silently create duplicate monitor/run tasks. Add a preflight running-state guard and a startup handshake/error channel so the FFI return code reflects startup failure states.
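The preflight running-state guard can be a single `AtomicBool` toggled with `compare_exchange`, so only one `run()` call wins the race. A sketch with illustrative names (the real FFI struct and error codes differ):

```rust
use std::sync::atomic::{AtomicBool, Ordering};

// Illustrative stand-in for FFIDashSpvClient with the suggested flag.
struct FfiClient {
    is_running: AtomicBool,
}

impl FfiClient {
    fn new() -> Self {
        FfiClient { is_running: AtomicBool::new(false) }
    }

    /// Atomically claim the running state; returns false if a run()
    /// is already in flight, so the caller can return an FFI error code.
    fn try_begin_run(&self) -> bool {
        self.is_running
            .compare_exchange(false, true, Ordering::SeqCst, Ordering::SeqCst)
            .is_ok()
    }

    /// Clear the flag on stop (and on early failure paths) so the
    /// client can be started again.
    fn end_run(&self) {
        self.is_running.store(false, Ordering::SeqCst);
    }
}
```

`compare_exchange` makes claim-and-check one atomic step, which a separate `load` followed by `store` would not: two concurrent `run()` calls could both observe `false` and both proceed.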
♻️ Duplicate comments (1)
dash-spv/README.md (1)
61-64: LGTM on the `run()` update. The documentation correctly reflects the new single-entry-point `run(shutdown_token)` API. The constructor signature mismatch with the real API was already flagged in a prior review.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@dash-spv/README.md` around lines 61 - 64, The README example uses DashSpvClient::new(config).await? which doesn't match the real constructor; update the snippet to call the actual constructor signature used in the codebase and await/unwrap as needed (e.g., replace DashSpvClient::new with the real factory/async initializer name and adjust arguments), keep the run(shutdown_token) usage and CancellationToken::new() as shown so the example matches the actual API (refer to the DashSpvClient::new / DashSpvClient::<actual_constructor> and run methods to mirror the exact async/argument pattern).
🧹 Nitpick comments (3)
dash-spv-ffi/tests/unit/test_client_lifecycle.rs (2)
111-112: Stale `client_start` reference in ignore comment. Line 111's `#[ignore]` comment still says "client_start hangs without peers" and should reference `client_run`.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@dash-spv-ffi/tests/unit/test_client_lifecycle.rs` around lines 111 - 112, Update the stale ignore comment on the test function test_client_with_no_peers: change the message "client_start hangs without peers" to reference the correct symbol "client_run" (e.g., "Requires network - client_run hangs without peers") so the attribute #[ignore] accurately documents why the test is skipped.
67-68: Stale references to `client_start` in test name and ignore comment. The `#[ignore]` comment on line 67 still says "client_start hangs" but the code now calls `dash_spv_ffi_client_run`. Similarly, the test name `test_client_start_stop_restart` could be updated to reflect the new API. Consider updating for consistency:

Suggested updates

```diff
- #[ignore] // Requires network - client_start hangs without peers
- fn test_client_start_stop_restart() {
+ #[ignore] // Requires network - client_run hangs without peers
+ fn test_client_run_stop_restart() {
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@dash-spv-ffi/tests/unit/test_client_lifecycle.rs` around lines 67 - 68, Update the stale test metadata and name to reflect the new API: change the #[ignore] message to mention "dash_spv_ffi_client_run hangs without peers" (or similar) instead of "client_start hangs", and rename the test function from test_client_start_stop_restart to something like test_client_run_stop_restart (or test_dash_spv_ffi_client_run_lifecycle) so the name matches the call to dash_spv_ffi_client_run; update any internal comments that refer to client_start to refer to dash_spv_ffi_client_run as well.dash-spv-ffi/src/client.rs (1)
273-275: Prefer "task" instead of "thread" in the `run` docs. The implementation uses `tokio::spawn` (runtime tasks), not dedicated OS thread management; this wording can mislead callback/thread-affinity expectations.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@dash-spv-ffi/src/client.rs` around lines 273 - 275, The doc comment for the function that "spawns a background thread that calls `run()`" is misleading because the implementation uses tokio tasks, not OS threads; update the wording to say "task" (e.g., "spawns a background task that calls `run()`") and clarify that `run()` is executed via `tokio::spawn` (or runtime task) and not a dedicated OS thread; adjust any other occurrences of "thread"/"threads" in the same comment to "task"/"tasks" and keep the reference to `run()` unchanged so readers can locate the implementation.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Outside diff comments:
In `@dash-spv-ffi/scripts/generate_ffi_docs.py`:
- Around line 145-146: The current categorization checks for 'client_start' but
the function was renamed to dash_spv_ffi_client_run, so add that name to the
Client Management check: update the conditional that appends to
categories['Client Management'] (which currently checks for 'client_new',
'client_start', 'client_stop', 'client_destroy') to also match 'client_run' or
'dash_spv_ffi_client_run' so dash_spv_ffi_client_run is included under Client
Management.
---
Duplicate comments:
In `@dash-spv/README.md`:
- Around line 61-64: The README example uses DashSpvClient::new(config).await?
which doesn't match the real constructor; update the snippet to call the actual
constructor signature used in the codebase and await/unwrap as needed (e.g.,
replace DashSpvClient::new with the real factory/async initializer name and
adjust arguments), keep the run(shutdown_token) usage and
CancellationToken::new() as shown so the example matches the actual API (refer
to the DashSpvClient::new / DashSpvClient::<actual_constructor> and run methods
to mirror the exact async/argument pattern).
---
Nitpick comments:
In `@dash-spv-ffi/src/client.rs`:
- Around line 273-275: The doc comment for the function that "spawns a
background thread that calls `run()`" is misleading because the implementation
uses tokio tasks, not OS threads; update the wording to say "task" (e.g.,
"spawns a background task that calls `run()`") and clarify that `run()` is
executed via `tokio::spawn` (or runtime task) and not a dedicated OS thread;
adjust any other occurrences of "thread"/"threads" in the same comment to
"task"/"tasks" and keep the reference to `run()` unchanged so readers can locate
the implementation.
In `@dash-spv-ffi/tests/unit/test_client_lifecycle.rs`:
- Around line 111-112: Update the stale ignore comment on the test function
test_client_with_no_peers: change the message "client_start hangs without peers"
to reference the correct symbol "client_run" (e.g., "Requires network -
client_run hangs without peers") so the attribute #[ignore] accurately documents
why the test is skipped.
- Around line 67-68: Update the stale test metadata and name to reflect the new
API: change the #[ignore] message to mention "dash_spv_ffi_client_run hangs
without peers" (or similar) instead of "client_start hangs", and rename the test
function from test_client_start_stop_restart to something like
test_client_run_stop_restart (or test_dash_spv_ffi_client_run_lifecycle) so the
name matches the call to dash_spv_ffi_client_run; update any internal comments
that refer to client_start to refer to dash_spv_ffi_client_run as well.
ℹ️ Review info
Configuration used: defaults
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (19)
- dash-spv-ffi/FFI_API.md
- dash-spv-ffi/README.md
- dash-spv-ffi/include/dash_spv_ffi.h
- dash-spv-ffi/scripts/generate_ffi_docs.py
- dash-spv-ffi/src/client.rs
- dash-spv-ffi/tests/c_tests/test_basic.c
- dash-spv-ffi/tests/test_client.rs
- dash-spv-ffi/tests/unit/test_async_operations.rs
- dash-spv-ffi/tests/unit/test_client_lifecycle.rs
- dash-spv/README.md
- dash-spv/examples/filter_sync.rs
- dash-spv/examples/simple_sync.rs
- dash-spv/examples/spv_with_wallet.rs
- dash-spv/src/client/lifecycle.rs
- dash-spv/src/client/sync_coordinator.rs
- dash-spv/src/lib.rs
- dash-spv/src/main.rs
- dash-spv/tests/peer_test.rs
- dash-spv/tests/wallet_integration_test.rs
💤 Files with no reviewable changes (3)
- dash-spv/examples/spv_with_wallet.rs
- dash-spv/examples/filter_sync.rs
- dash-spv/examples/simple_sync.rs
🚧 Files skipped from review as they are similar to previous changes (5)
- dash-spv/tests/peer_test.rs
- dash-spv-ffi/tests/unit/test_async_operations.rs
- dash-spv/src/lib.rs
- dash-spv-ffi/tests/test_client.rs
- dash-spv/src/client/sync_coordinator.rs
Combine `start()` + `monitor_network()` + `stop()` into a single `run(token)` entry point. Callers no longer need to call `start()` separately since `run()` handles the full lifecycle and returns after the token is cancelled.

- Make `start()` an internal function
- Remove `monitor_network()`; its loop is now inside `run()`
- Remove `dash_spv_ffi_client_start` so that `dash_spv_ffi_client_run` is the only FFI entry point

Based on:

- `DashSpvClient` cloneable #453

Summary by CodeRabbit
Release Notes
Breaking Changes
- Removed the `dash_spv_ffi_client_start()` function. Use `dash_spv_ffi_client_run()` as the single entry point for client initialization and synchronization.
- `run()` handles starting, background syncing, and monitoring internally and returns immediately after spawning tasks.

API Improvements

- Callbacks can be set before `run()` to receive sync, network, progress, and wallet notifications.