
feat: add signet-rpc-storage crate with ETH RPC endpoints#75

Open
prestwich wants to merge 28 commits into main from feat/rpc-storage-scaffolding

Conversation


@prestwich prestwich commented Feb 11, 2026

Summary

  • Adds the signet-rpc-storage crate providing Ethereum JSON-RPC endpoints backed by signet-storage (hot + cold), independent of reth's FullNodeComponents
  • StorageRpcCtx wraps UnifiedStorage, BlockTags, system constants, optional TxCache, and RPC gas cap
  • BlockTags provides atomic latest/safe/finalized block tracking with sync/async resolution
  • Implements 24 supported ETH namespace endpoints:
    • Simple queries: blockNumber, chainId
    • Block queries: getBlockByHash/Number, getBlockTransactionCount*, getBlockReceipts, getBlockHeader*
    • Transaction queries: getTransactionByHash, getRawTransactionByHash, *ByBlockAndIndex, getTransactionReceipt
    • Account state (hot storage): getBalance, getStorageAt, getTransactionCount, getCode
    • EVM execution: call, estimateGas via signet-evm/trevm
    • Transaction submission: sendRawTransaction via TxCache
    • Logs: getLogs with bloom filter matching
  • 30 unsupported methods return explicit "not supported" errors
  • Uses signet-storage 0.3.0 from crates.io
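The atomic tag tracking described above can be sketched in miniature — this is an illustrative stand-in, not the crate's actual BlockTags API, though the `tags.latest() == 101` doctest quoted later in this PR suggests a similar shape:

```rust
use std::sync::atomic::{AtomicU64, Ordering};

/// Minimal illustration of lock-free block-tag tracking.
/// Field and method names are illustrative only.
#[derive(Debug, Default)]
pub struct TagTracker {
    latest: AtomicU64,
    safe: AtomicU64,
    finalized: AtomicU64,
}

impl TagTracker {
    /// Read the latest block number without taking a lock.
    pub fn latest(&self) -> u64 {
        self.latest.load(Ordering::Acquire)
    }

    /// Publish a new latest block number.
    pub fn set_latest(&self, n: u64) {
        self.latest.store(n, Ordering::Release);
    }

    /// Read the safe block number.
    pub fn safe(&self) -> u64 {
        self.safe.load(Ordering::Acquire)
    }

    /// Read the finalized block number.
    pub fn finalized(&self) -> u64 {
        self.finalized.load(Ordering::Acquire)
    }
}
```

Because each tag is an independent atomic, readers never block writers; the trade-off (a reader can observe tags from two different updates) comes up again later in this review.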

Test plan

  • cargo clippy -p signet-rpc-storage --all-features --all-targets — clean
  • cargo clippy -p signet-rpc-storage --no-default-features --all-targets — clean
  • cargo +nightly fmt — clean
  • cargo t -p signet-rpc-storage — 33 tests pass
  • cargo doc -p signet-rpc-storage — docs build successfully

🤖 Generated with Claude Code

@prestwich prestwich requested a review from a team as a code owner February 11, 2026 21:36
@prestwich prestwich changed the title from "feat: add signet-rpc-storage crate scaffolding" to "feat: add signet-rpc-storage crate with ETH RPC endpoints" Feb 12, 2026
@@ -0,0 +1,578 @@
//! Integration tests for the `signet-rpc-storage` ETH RPC endpoints.
Member Author

[Claude Code]

Test coverage gap: eth_call and eth_estimateGas are the only two supported endpoints without integration tests. They require full EVM state setup (deployed contract bytecode, proper storage) to exercise meaningfully. Worth adding as a follow-up.

The current tests cover all 22 other supported endpoints plus error paths.

Member Author

[Claude Code]

Review Summary

What's here

  • 24 supported ETH JSON-RPC endpoints, plus 23 explicitly unsupported methods
  • Clean crate structure: ctx, resolve, eth/{endpoints, helpers, error, mod}
  • 23 integration tests covering all supported endpoints except eth_call and eth_estimateGas
  • No TODOs, no todo!(), no unimplemented!(), no naked unwrap() in production code
  • Clippy clean on both --all-features and --no-default-features

Issues flagged (inline comments)

| Severity | Location | Issue |
| --- | --- | --- |
| High | endpoints.rs:647-651 | sendRawTransaction fire-and-forget — forwarding errors silently discarded |
| Medium | endpoints.rs:720-722 | getLogs range validation — reversed ranges (from > to) silently accepted |
| Low | helpers.rs:274-276 | unreachable!() in build_receipt_envelope — future alloy TxType variants will panic |
| Info | tests/eth_rpc.rs | eth_call and eth_estimateGas not yet tested (EVM state setup needed) |

Codebase health

  • Zero TODO/FIXME comments
  • Zero panic!/unwrap in non-test code
  • Consistent error handling pattern (.map_err(|e| e.to_string()))
  • Good use of tokio::try_join! for concurrent cold storage reads
  • Tracing spans on EVM execution paths (eth_call, eth_estimateGas)

prestwich and others added 5 commits February 12, 2026 07:51
Add the foundational scaffolding for the signet-rpc-storage crate, which
provides an Ethereum JSON-RPC server backed by signet-storage's unified
storage backend, independent of reth's FullNodeComponents.

This includes:
- Workspace dependency additions (signet-storage, signet-cold, signet-hot,
  signet-storage-types)
- StorageRpcCtx context struct with Arc<Inner> pattern
- BlockTags atomic block tag tracker for Latest/Safe/Finalized
- Block ID and block tag resolution utilities
- Stub eth module (endpoints to be added in follow-up)

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Implement all ETH namespace JSON-RPC endpoints backed by cold/hot storage
instead of reth. Converts eth.rs placeholder into eth/ directory module
with error types, helpers, and 24 supported endpoint handlers:

- Simple queries: blockNumber, chainId
- Block queries: getBlockByHash/Number, getBlockReceipts, headers
- Transaction queries: getTransactionByHash, getTransactionReceipt, raw txs
- Account state (hot storage): getBalance, getStorageAt, getCode, getTransactionCount
- EVM execution: call, estimateGas (via signet-evm/trevm)
- Transaction submission: sendRawTransaction (via TxCache)
- Logs: getLogs with bloom filter matching

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Add 23 integration tests covering all endpoint categories: simple
queries, block/transaction lookups, account state, logs, and error
cases. Tests exercise the router through the axum service layer using
tower's oneshot().

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
- send_raw_transaction: log warning on forwarding failure instead of
  silently discarding the error
- get_logs: reject reversed block ranges (from > to) with an explicit
  error instead of silently returning empty results
- build_receipt_envelope: remove catch-all arm so new TxType variants
  from alloy produce a compile error instead of a runtime panic

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
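The build_receipt_envelope change relies on Rust's exhaustiveness checking: with no catch-all arm, an upstream enum gaining a variant becomes a compile error here instead of a runtime panic. A generic sketch with a local stand-in enum (DemoTxType is hypothetical; alloy's real TxType has more variants):

```rust
/// Stand-in for a transaction-type enum such as alloy's `TxType`.
enum DemoTxType {
    Legacy,
    Eip1559,
}

/// Exhaustive match with no `_ => unreachable!()` arm: if a new
/// `DemoTxType` variant is added, this function fails to compile,
/// forcing the author to handle it rather than panicking at runtime.
fn envelope_tag(kind: &DemoTxType) -> u8 {
    match kind {
        DemoTxType::Legacy => 0,
        DemoTxType::Eip1559 => 2,
    }
}
```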
Regular txs execute before system txs, not the other way around.

Drive-by from #74 (comment)

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
@prestwich prestwich force-pushed the feat/rpc-storage-scaffolding branch from 4341320 to 601c156 on February 12, 2026 at 12:51
prestwich and others added 3 commits February 12, 2026 08:07
- Extract `resolve_evm_block` method on `StorageRpcCtx` to deduplicate
  the block resolution + header fetch + revm db creation shared by
  `call()` and `estimate_gas()`. Resolves headers directly (by hash or
  by tag→number) to avoid redundant cold storage lookups.
- Replace glob import `use endpoints::*` with explicit imports.
- Remove unused `revm_state()` method from `StorageRpcCtx`.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
…ounds

- Move `resolve_block_id` and `resolve_block_number_or_tag` from free
  functions in resolve.rs to `resolve_block_id` and `resolve_block_tag`
  methods on `StorageRpcCtx`. This eliminates repeated `ctx.tags()` and
  `ctx.cold()` threading at every call site.
- `resolve_block_tag` returns `u64` directly (infallible) instead of
  `Result`, simplifying callers like `get_logs`.
- Remove `H::RoTx: Send + Sync + 'static` bounds from all endpoint
  functions, router, and ctx methods — the trait already provides these.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Replace the bare `rpc_gas_cap` constructor parameter with a
`StorageRpcConfig` struct that bundles all RPC configuration.
This moves `max_blocks_per_filter` from a hard-coded constant to
a configurable value, adds `max_logs_per_response` enforcement
in `eth_getLogs`, and pre-creates a tracing semaphore for future
debug endpoint concurrency limiting.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
let mut all_logs = Vec::new();

for (chunk_start, chunk_end) in
BlockRangeInclusiveIter::new(from..=to, MAX_HEADERS_RANGE)
Contributor

I'm guessing the intent here is to return batches of 1000, but the way BlockRangeInclusiveIter is implemented, we'd need to pass MAX_HEADERS_RANGE - 1 as the step value.

I think it might be worth changing the BlockRangeInclusiveIter to treat the step arg the same way as the std lib - i.e. to make it mean "batch size" and not increment it by 1 internally. That would mean we'd also need to update BlockRangeInclusiveIter::next() to have let end = (start + self.step - 1).min(self.end);
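The suggested std-style semantics could look roughly like this — a hypothetical reimplementation for illustration, not the actual reth/signet code — where `step` means batch size and `next()` clamps each chunk with `(start + step - 1).min(end)`:

```rust
/// Inclusive block-range iterator yielding `(start, end)` chunks.
/// `step` is the batch size (std-style), so `step = 1000` yields
/// chunks covering exactly 1000 blocks each.
struct BlockRangeInclusiveIter {
    next: u64,
    end: u64,
    step: u64,
    done: bool,
}

impl BlockRangeInclusiveIter {
    fn new(range: std::ops::RangeInclusive<u64>, step: u64) -> Self {
        Self {
            next: *range.start(),
            end: *range.end(),
            step,
            // An empty range or zero step yields nothing.
            done: range.is_empty() || step == 0,
        }
    }
}

impl Iterator for BlockRangeInclusiveIter {
    type Item = (u64, u64);

    fn next(&mut self) -> Option<(u64, u64)> {
        if self.done {
            return None;
        }
        let start = self.next;
        // The chunk covers `step` blocks inclusive of `start`.
        let end = start.saturating_add(self.step - 1).min(self.end);
        if end == self.end {
            self.done = true;
        } else {
            self.next = end + 1;
        }
        Some((start, end))
    }
}
```

With this shape, `new(0..=2499, 1000)` yields `(0, 999)`, `(1000, 1999)`, `(2000, 2499)` — batches of the stated size, with no off-by-one at the tail.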

Member Author

this is basically ported from reth, so i'll take a deeper look here

// ---------------------------------------------------------------------------

pub(crate) async fn not_supported() -> ResponsePayload<(), ()> {
ResponsePayload::internal_error_message(Cow::Borrowed("Method not supported."))
Contributor

This should probably be method_not_found() rather than an internal error, so callers don't retry.

prestwich and others added 8 commits February 12, 2026 09:59
Add gas oracle, filters, subscriptions, debug tracing, and signet
namespaces to signet-rpc-storage. Port 15 endpoints from the old
reth-backed signet-rpc crate to the storage-backed architecture.

New modules:
- gas_oracle: cold-storage gas price oracle (suggest_tip_cap)
- interest/: filter manager, subscription manager, block notifications
- debug/: traceBlockByNumber, traceBlockByHash, traceTransaction
- signet/: sendOrder, callBundle

Wired eth endpoints: gasPrice, maxPriorityFeePerGas, feeHistory,
newFilter, newBlockFilter, uninstallFilter, getFilterChanges,
getFilterLogs, subscribe, unsubscribe.

Integration tests cover gas/fee queries, filter lifecycle, and
debug tracing with noop tracer.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
- Use `SealedHeader` with `.hash()` / `.into_inner()` instead of
  `header.hash_slow()`
- Use `RecoveredTx` (pre-recovered sender) instead of manual
  `recover_sender` calls
- Use `ColdReceipt` with per-tx `gas_used` instead of computing
  deltas from cumulative gas
- Delegate `get_logs` to cold storage instead of manual bloom
  filtering and block iteration
- Remove `BlockRangeInclusiveIter`, `collect_matching_logs`,
  `build_receipt_from_parts`, and `recover_sender` helpers
- Simplify `build_rpc_transaction` and `build_receipt` to be
  infallible

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Addresses PR review feedback:
- resolve_block_id now uses hot HotDbRead::get_header_number instead of
  cold storage, making it synchronous and avoiding async overhead
- Add resolve_header for direct header fetches from hot storage,
  eliminating the double header lookup in header_by
- Change not_supported() to return method_not_found() (JSON-RPC -32601)
  instead of internal_error (-32603)
- Update ResolveError to use Storage/Db variants instead of Cold
- Update tests to write headers to both hot and cold storage

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Swap return types to use RpcBlock/RpcReceipt/RpcTransaction/RpcHeader
type aliases, rename tx_by_block_and_index to match rpc naming, fix
not_supported error message, and split call into run_call + call.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Align rpc-storage behavioral semantics with the rpc crate:
- subscribe: require filter for Logs, reject filter for NewHeads via TryFrom impl
- send_raw_transaction: return Result from spawned task instead of swallowing errors
- uninstall_filter/unsubscribe: add HandlerCtx and wrap in spawn_blocking

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
…to 0.5

resolve_evm_block previously mapped Pending to Latest without modifying
the header, causing eth_estimateGas (which defaults to Pending) and
eth_call with explicit Pending to see wrong block.number, timestamp,
and base_fee. Now synthesizes a next-block header matching signet-rpc's
block_cfg() behavior.

Also refactors callBundle to use resolve_evm_block instead of duplicating
the pending header logic inline, and passes max_logs to cold.get_logs()
for early termination (signet-cold 0.5 API).

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
}

/// Get the finalized block number.
pub fn finalized(&self) -> u64 {
Member Author

is there a way to set all of these atomically at once
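One possible answer, sketched under stated assumptions: keep all tags in a single snapshot behind a lock (or an atomically swapped pointer), so a multi-tag update is observed all-or-nothing. Names here are invented for illustration; the crate later gained a `BlockTags::update_all` whose actual implementation may differ:

```rust
use std::sync::RwLock;

/// A single copyable snapshot of all three tags.
#[derive(Debug, Clone, Copy, PartialEq, Eq)]
struct TagSnapshot {
    latest: u64,
    safe: u64,
    finalized: u64,
}

/// Tags stored as one snapshot behind a lock: readers see either the
/// old set or the new set, never a mix of the two.
struct AtomicTags {
    snap: RwLock<TagSnapshot>,
}

impl AtomicTags {
    fn new() -> Self {
        Self { snap: RwLock::new(TagSnapshot { latest: 0, safe: 0, finalized: 0 }) }
    }

    /// Replace all three tags in one critical section.
    fn update_all(&self, latest: u64, safe: u64, finalized: u64) {
        *self.snap.write().unwrap() = TagSnapshot { latest, safe, finalized };
    }

    /// Copy out a consistent view of all three tags.
    fn read(&self) -> TagSnapshot {
        *self.snap.read().unwrap()
    }
}
```

The trade-off versus three independent atomics is a brief reader/writer lock on every access; an arc-swap-style pointer swap would avoid even that, at the cost of an allocation per update.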

@prestwich prestwich left a comment

are there ways that we can DRY endpoint logic or definitions?

/// assert_eq!(tags.latest(), 101);
/// ```
#[derive(Debug, Clone)]
pub struct BlockTags {
Member Author

can we add syncing status information to this struct

#![cfg_attr(docsrs, feature(doc_cfg, doc_auto_cfg))]

mod config;
pub use config::StorageRpcConfig;
Member Author

add newlines after each pub use

H: signet_hot::HotKv + Send + Sync + 'static,
<H::RoTx as signet_hot::model::HotKvRead>::Error: trevm::revm::database::DBErrorMarker,
{
ajj::Router::new().merge(eth::eth()).merge(debug::debug()).merge(signet::signet())
Member Author

these are not appropriately namespaced. we need to nest them with a prefix, as done in signet-rpc

}

/// Instantiate the `signet` API router.
pub fn signet<H>() -> ajj::Router<StorageRpcCtx<H>>
Member Author

these functions do not need to exist. we only need the main fully built router at crate root

@@ -0,0 +1,85 @@
//! Configuration for the storage-backed RPC server.
Member Author

make a config/ module folder containing the contents of ctx.rs gas_oracle.rs config.rs and resolve.rs

H: HotKv + Send + Sync + 'static,
<H::RoTx as HotKvRead>::Error: DBErrorMarker,
{
let timeout = bundle.bundle.timeout.unwrap_or(1000);
Member Author

the 1000 should be a config parameter

};

let task = |hctx: HandlerCtx| async move {
hctx.spawn(async move { tx_cache.forward_order(order).await.map_err(|e| e.to_string()) });
Member Author

is this error swallowed?

/// The caller constructs and sends these via a
/// [`tokio::sync::broadcast::Sender`].
#[derive(Debug, Clone)]
pub struct NewBlockNotification {
Member Author

how and where are resources managed for this? are these ever cloned?

@@ -0,0 +1,22 @@
//! Filter and subscription management for block/log notifications.
Member Author

make comprehensive documentation for this module

Member Author

include details about resource usage expectations, etc

thiserror.workspace = true
serde.workspace = true
dashmap = "6.1.0"
reth-rpc-eth-api.workspace = true
Member Author

where is this used? how can we remove it

{
let opts = response_tri!(opts.ok_or(DebugError::InvalidTracerConfig));

let _permit = ctx.acquire_tracing_permit().await;
Member Author

where is this permit dropped?

let t = trevm.fill_tx(tx);
let frame;
(frame, trevm) = response_tri!(crate::debug::tracer::trace(t, &opts, tx_info));
frames.push(TraceResult::Success { result: frame, tx_hash: Some(*tx.tx_hash()) });
Member Author

we are collecting these frames only to immediately serialize them. is there a way we could feed these to the serializer via an iterator instead?

Member Author

the same question applies to other instances where we return vecs

#[derive(Debug, Clone, thiserror::Error)]
pub enum DebugError {
/// Cold storage error.
#[error("cold storage: {0}")]
Member Author

can we avoid string-typing these?

Member Author

we should also avoid leaking the full error into the API return. ensure that there is a log of the full error, and the API returns a simple description

@prestwich prestwich left a comment

add many more comments throughout the endpoint code, explaining what is happening and why

prestwich and others added 8 commits February 14, 2026 13:03
- Restructure into config/ module folder (ctx, resolve, gas_oracle, rpc_config)
- Remove standalone pub fn wrappers, use .nest() for namespace prefixing
- Remove reth-rpc-eth-api dependency, use alloy Network trait type aliases
- Change not_supported to method_not_found, add concrete SignetError variants
- Sanitize DebugError Display output, add tracing::warn at error call sites
- Add fire-and-forget error logging for send_raw_transaction and send_order
- Add default_bundle_timeout_ms config, SyncStatus struct, BlockTags::update_all
- Add comprehensive interest/ module docs, endpoint doc comments, permit docs
- Extract hot_reader_at_block helper for account state endpoints
- Add FUTURE-EVALUATION.md documenting Vec collection constraint

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Bump ajj from 0.3.4 to 0.5.0 and adapt all call sites to the new API:

- ResponsePayload::Success/Failure replaced with ResponsePayload(Ok/Err)
- Subscription task rewritten to use ctx.notify() instead of raw channel
  access via the removed notifications() method

Leverage relaxed RpcSend bounds for lazy serialization:

- LazyReceipts: serializes receipts inline from raw ColdReceipt +
  RecoveredTx data without intermediate Vec<RpcReceipt>
- In-housed BlockTransactions and RpcBlock: serialize full transactions
  or hashes lazily from Vec<RecoveredTx>, hardcode empty uncles

Refactor build_receipt and build_rpc_transaction to take references,
cloning only where needed (logs Vec, RecoveredTx for into_inner).

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Apply the same ajj 0.5.0 migration to the canonical rpc crate:

- ResponsePayload::Success/Failure replaced with ResponsePayload(Ok/Err)
- Subscription task rewritten to use ctx.notify() instead of the removed
  notifications() channel accessor

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
…ruct

Add SubscriptionNotification and SubscriptionParams structs with derived
Serialize impls, replacing hand-crafted json! macro usage in both rpc
and rpc-storage subscription tasks.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
…rChanges

Replace `cold.get_logs()` with `cold.stream_logs()` in both endpoints to
gain deadline enforcement (configurable, default 10s), dedicated concurrency
control (separate semaphore, max 8 concurrent streams), and reorg detection
via anchor hash. Bump signet-storage crates to 0.6.0.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
/// tips, and returns the value at `gas_oracle_percentile`.
///
/// Returns `U256::ZERO` if no transactions are found in the range.
pub(crate) async fn suggest_tip_cap(
Member Author

where is this used? these reads may be expensive. do we cache these?

DebugError::BlockNotFound(id)
}));

let (header, txs) = response_tri!(
Member Author

could we use the resolve context method here?


// State BEFORE this block.
let db =
response_tri!(ctx.revm_state_at_height(block_num.saturating_sub(1)).map_err(|e| {
Member Author

could this use the resolve method as well?

// Not Supported
// ---------------------------------------------------------------------------

pub(crate) async fn not_supported() -> ResponsePayload<(), ()> {
Member Author

we can remove this, as this is default behavior when a method is unregistered.

// ---
// Unsupported methods
// ---
.route("protocolVersion", not_supported)
Member Author

remove all non-supported. leave a comment with a list of non-supported methods. move supported methods up


## Supported Methods

- Block queries: `eth_blockNumber`, `eth_getBlockByHash`, `eth_getBlockByNumber`, etc.
Member Author

update these. instead list unsupported methods and why

let guard = span.enter();
// This future checks if the notification buffer is non-empty and
// waits for the sender to have some capacity before sending.
let permit_fut = async {
@prestwich prestwich Feb 15, 2026

we're going to add an api to ajj to get permits. this code should be restored to prior state then modified to use SubscriptionNotification

/// JSON-RPC subscription notification envelope.
#[derive(serde::Serialize)]
struct SubscriptionNotification<'a> {
jsonrpc: &'static str,
Member Author

remove this. it's always known, and never needs to be a struct prop.

// Drain one buffered item per iteration, checking for
// cancellation between each send.
if let Some(item) = notif_buffer.pop_front() {
let notification = SubscriptionNotification {
Member Author

we're going to re-add permitting to ajj. this code should be revised to the previous permit-acquisition behavior, modified to use SubscriptionNotification

prestwich and others added 2 commits February 15, 2026 12:00
Move response/serialization types (EmptyArray, BlockTransactions,
RpcBlock, LazyReceipts) and type aliases from eth/endpoints.rs into
eth/types.rs, and parameter types (TraceBlockParams,
TraceTransactionParams) from debug/endpoints.rs into debug/types.rs.
Consolidate duplicate RpcTransaction/RpcReceipt aliases from
eth/helpers.rs into eth/types.rs.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Replace the one-notification-per-iteration drain pattern with
permit_many batching for better throughput and fairness. Bumps ajj
to 0.6.2 for permit_many/notification_capacity support.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
/// The resolved block header.
pub header: alloy::consensus::Header,
/// The revm database at the resolved height.
pub db: trevm::revm::database::State<Db>,
@prestwich prestwich Feb 15, 2026

why state? do we need that wrapper for anything?

…tracking

Move NodeStatus from signet-node-types to signet-node, move
ServeConfig/RpcServerGuard from signet-rpc to signet-node, and remove
the journal hash tracking system (JournalHashes table, journal module,
GENESIS_JOURNAL_HASH, and all related parameters/methods).

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
@prestwich prestwich left a comment

[Claude Code]

PR Review: signet-rpc-storage crate

The crate is well-organized and test coverage is solid. However, there are two likely bugs (leaked thread in SubCleanerTask, unsubscribe is a no-op), a conformance gap (eth_syncing), and a design issue with spawn_blocking being used for async work. See inline comments for details.

Design-level observations (not tied to a specific line):

StorageRpcConfig should have a builder — Per project conventions: "A struct is considered complex enough to warrant a builder if it has more than 4 fields." StorageRpcConfig has 11 fields.

Duplicated Either enum — filters.rs and subs.rs both define an Either enum with nearly identical semantics but different variants. Should be unified or renamed (FilterItem / SubscriptionItem).

Duplicated buffer/output types — FilterOutput and SubscriptionBuffer are structurally identical VecDeque-backed enums with Log/Block variants, len(), is_empty(), extend(), pop_front(). This could be a single generic type.

Error handling via String is inconsistent — Most endpoints return Result<T, String>, losing structured error info, while run_call/estimate_gas/create_access_list return ResponsePayload<T, CallErrorData>, and debug endpoints return ResponsePayload<T, DebugError>. The crate defines three good error types (EthError, DebugError, SignetError) but most endpoints discard them with .map_err(|e| e.to_string()).
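The buffer duplication flagged above could collapse into one generic type, roughly like this hypothetical sketch (names invented; the item type parameter would be instantiated with the log and block types):

```rust
use std::collections::VecDeque;

/// Hypothetical unification of the two VecDeque-backed buffer types:
/// generic over the item, so filter output and subscription buffering
/// share one implementation instead of two structurally identical ones.
#[derive(Debug, Default)]
struct NotificationBuffer<T> {
    items: VecDeque<T>,
}

impl<T> NotificationBuffer<T> {
    fn len(&self) -> usize {
        self.items.len()
    }

    fn is_empty(&self) -> bool {
        self.items.is_empty()
    }

    fn extend(&mut self, iter: impl IntoIterator<Item = T>) {
        self.items.extend(iter);
    }

    fn pop_front(&mut self) -> Option<T> {
        self.items.pop_front()
    }
}
```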

}
}
});
}
Member Author

[Claude Code]

Bug: SubCleanerTask::spawn never terminates when SubscriptionManager is dropped.

Compare with the correct pattern in FilterCleanTask::spawn (filters.rs:288-299), which checks Weak::upgrade() and breaks when it returns None. This version skips the retain but loops forever, leaking an OS thread per SubscriptionManager drop.

fn spawn(self) {
    std::thread::spawn(move || {
        loop {
            std::thread::sleep(self.interval);
            if let Some(inner) = self.inner.upgrade() {
                inner.tasks.retain(|_, task| !task.is_cancelled());
            }
            // missing: else { break; }
        }
    });
}
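The corrected loop would mirror FilterCleanTask: break when the `Weak` no longer upgrades. A self-contained std-only sketch (a `Mutex<Vec<u64>>` and `retain` stand in for the DashMap of cancellation tokens; names are illustrative):

```rust
use std::sync::{Arc, Mutex, Weak};
use std::thread;
use std::time::Duration;

/// Sketch of the corrected cleaner loop. `Weak::upgrade` returning
/// `None` means every strong `Arc` is gone, so the thread breaks
/// instead of looping (and leaking an OS thread) forever.
fn spawn_cleaner(inner: Weak<Mutex<Vec<u64>>>, interval: Duration) -> thread::JoinHandle<()> {
    thread::spawn(move || loop {
        thread::sleep(interval);
        match inner.upgrade() {
            Some(inner) => {
                // Stand-in for `tasks.retain(|_, t| !t.is_cancelled())`:
                // drop "cancelled" entries (here, the zeros).
                inner.lock().unwrap().retain(|&id| id != 0);
            }
            // Owner dropped: terminate instead of leaking the thread.
            None => break,
        }
    })
}
```

Dropping the owning `Arc` now lets the spawned thread exit on its next tick, so `JoinHandle::join` returns instead of hanging.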

debug!(%id, "registered new subscription");

Some(id)
}
Member Author

[Claude Code]

Bug: subscribe() never inserts the CancellationToken into self.tasks.

The token is created and cloned into the SubscriptionTask, but never stored in the DashMap. This means unsubscribe() will always return false (the remove finds nothing), and the SubCleanerTask has nothing to clean. The subscription can never be cancelled by the server.

// Should be:
self.tasks.insert(id, token.clone());
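A stripped-down illustration of why the insert matters — a plain `Mutex<HashMap>` and a `bool` stand in for the DashMap and the CancellationToken, and names are invented for the sketch:

```rust
use std::collections::HashMap;
use std::sync::Mutex;

/// Minimal subscription registry. Without the `insert` in `subscribe`,
/// `unsubscribe` finds nothing to remove and always reports `false`,
/// which is exactly the bug described above.
#[derive(Default)]
struct SubRegistry {
    tasks: Mutex<HashMap<u64, bool>>,
}

impl SubRegistry {
    fn subscribe(&self, id: u64) {
        // The fix: actually record the cancellation handle under the id.
        self.tasks.lock().unwrap().insert(id, true);
    }

    /// Returns whether a live subscription was actually cancelled.
    fn unsubscribe(&self, id: u64) -> bool {
        self.tasks.lock().unwrap().remove(&id).is_some()
    }
}
```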

// Unsupported methods
// ---
.route("protocolVersion", not_supported)
.route("syncing", not_supported)
Member Author

[Claude Code]

eth_syncing should return false (when synced) or a sync-status object (when syncing), not method_not_found. You already have BlockTags::sync_status() and SyncStatus wired up for exactly this. Any client that calls eth_syncing — virtually all of them do — gets a confusing error instead of the expected false.

Ok(tip + U256::from(base_fee))
};

await_handler!(@option hctx.spawn_blocking(task))
Member Author

[Claude Code]

gas_price, max_priority_fee_per_gas, and fee_history all use hctx.spawn_blocking for tasks that contain .await calls (cold storage reads). spawn_blocking is for CPU-bound work — using it for IO-bound async work means the task runs on a blocking thread pool that doesn't poll futures efficiently. These should use hctx.spawn or hctx.spawn_with_ctx.

if cumulative_gas >= threshold {
break;
}
tx_idx += 1;
Member Author

[Claude Code]

cumulative_gas and tx_idx are never reset between percentile iterations. This is correct for sorted-ascending traversal, but once tx_idx increments past tx_gas_and_tip.len(), the while loop exits immediately for all subsequent percentiles, and tx_gas_and_tip.get(tx_idx) returns None, which unwrap_or_default() turns into zero. Higher percentiles silently return 0 for blocks with few transactions.
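A hedged sketch of one bounds-safe version of this walk (simplified; the real oracle logic and types differ): the index is clamped at the last transaction, so high percentiles on small blocks return the maximum tip instead of silently defaulting to zero:

```rust
/// Walk `(gas_used, tip)` pairs sorted ascending by tip, returning the
/// tip at each requested gas percentile. State carries across
/// percentiles (valid for an ascending walk), but the index is never
/// allowed to run past the final transaction.
fn reward_percentiles(txs: &[(u64, u64)], percentiles: &[f64]) -> Vec<u64> {
    if txs.is_empty() {
        return vec![0; percentiles.len()];
    }
    let total_gas: u64 = txs.iter().map(|(gas, _)| gas).sum();
    let mut cumulative_gas = 0u64;
    let mut idx = 0usize;
    let mut out = Vec::with_capacity(percentiles.len());
    for &p in percentiles {
        let threshold = (total_gas as f64 * p / 100.0) as u64;
        // Advance only while there is a next transaction to move to:
        // `idx` stops at the last entry instead of walking off the end.
        while cumulative_gas + txs[idx].0 < threshold && idx + 1 < txs.len() {
            cumulative_gas += txs[idx].0;
            idx += 1;
        }
        out.push(txs[idx].1);
    }
    out
}
```

With a single-transaction block, every percentile now resolves to that transaction's tip rather than zero.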

/// Either type for filter outputs.
#[derive(Debug, Clone, PartialEq, Eq, serde::Serialize)]
#[serde(untagged)]
#[allow(dead_code)]
Member Author

[Claude Code]

Nit: #[allow(dead_code)] on Either, FilterOutput impls, and is_filter method. If they're not used, remove them. If they're part of the intended API, use them. allow(dead_code) on internal types is a code smell suggesting premature API surface.

)
.map_err(|e| e.to_string())?;

let tx = tx.ok_or("receipt found but transaction missing")?.into_inner();
Member Author

[Claude Code]

Nit: .ok_or("receipt found but transaction missing") uses a string literal error. This should be a proper error variant on EthError.
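A hypothetical structured variant, as a std-only sketch — the crate's actual EthError uses thiserror and different names, so this only illustrates the shape of the fix:

```rust
use std::fmt;

/// Structured replacement for the `&str` literal error, so callers can
/// match on the failure instead of parsing a message. Variant and type
/// names are illustrative.
#[derive(Debug, Clone, PartialEq, Eq)]
enum EthLookupError {
    /// A receipt exists for the hash but the transaction body is missing.
    ReceiptWithoutTransaction,
}

impl fmt::Display for EthLookupError {
    fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
        match self {
            Self::ReceiptWithoutTransaction => {
                write!(f, "receipt found but transaction missing")
            }
        }
    }
}

impl std::error::Error for EthLookupError {}
```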

let base_fee = header.base_fee_per_gas.unwrap_or_default();
let block_num = start + offset as u64;

let txs = cold.get_transactions_in_block(block_num).await?;
Member Author

[Claude Code]

Nit: This iterates headers sequentially, making one get_transactions_in_block call per block. For the default 20-block lookback, that's 20 sequential cold-storage reads. These could be parallelized with futures::join_all or similar.

alloy.workspace = true
ajj.workspace = true
tokio.workspace = true
tokio-stream = "0.1"
Member Author

[Claude Code]

Nit: tokio-stream and tokio-util specify exact versions ("0.1", "0.7") rather than using workspace dependencies. Inconsistent with the rest of the manifest.

);
}

let base_fee_per_blob_gas = vec![0; base_fee_per_gas.len()];
Member Author

[Claude Code]

Nit: base_fee_per_blob_gas and blob_gas_used_ratio are allocated as vec![0; ...] even though Signet has no blob transactions. Could be Vec::new() or empty if FeeHistory permits.
