feat: add signet-rpc-storage crate with ETH RPC endpoints#75
Conversation
@@ -0,0 +1,578 @@
//! Integration tests for the `signet-rpc-storage` ETH RPC endpoints.
[Claude Code]
Test coverage gap: eth_call and eth_estimateGas are the only two supported endpoints without integration tests. They require full EVM state setup (deployed contract bytecode, proper storage) to exercise meaningfully. Worth adding as a follow-up.
The current tests cover all 22 other supported endpoints plus error paths.
[Claude Code] Review Summary

What's here
Issues flagged (inline comments)
Codebase health
Add the foundational scaffolding for the signet-rpc-storage crate, which provides an Ethereum JSON-RPC server backed by signet-storage's unified storage backend, independent of reth's FullNodeComponents. This includes:

- Workspace dependency additions (signet-storage, signet-cold, signet-hot, signet-storage-types)
- StorageRpcCtx context struct with Arc<Inner> pattern
- BlockTags atomic block tag tracker for Latest/Safe/Finalized
- Block ID and block tag resolution utilities
- Stub eth module (endpoints to be added in follow-up)

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Implement all ETH namespace JSON-RPC endpoints backed by cold/hot storage instead of reth. Converts the eth.rs placeholder into an eth/ directory module with error types, helpers, and 24 supported endpoint handlers:

- Simple queries: blockNumber, chainId
- Block queries: getBlockByHash/Number, getBlockReceipts, headers
- Transaction queries: getTransactionByHash, getTransactionReceipt, raw txs
- Account state (hot storage): getBalance, getStorageAt, getCode, getTransactionCount
- EVM execution: call, estimateGas (via signet-evm/trevm)
- Transaction submission: sendRawTransaction (via TxCache)
- Logs: getLogs with bloom filter matching

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Add 23 integration tests covering all endpoint categories: simple queries, block/transaction lookups, account state, logs, and error cases. Tests exercise the router through the axum service layer using tower's oneshot(). Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
- send_raw_transaction: log a warning on forwarding failure instead of silently discarding the error
- get_logs: reject reversed block ranges (from > to) with an explicit error instead of silently returning empty results
- build_receipt_envelope: remove the catch-all arm so new TxType variants from alloy produce a compile error instead of a runtime panic

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Regular txs execute before system txs, not the other way around. Drive-by from #74 (comment) Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
- Extract `resolve_evm_block` method on `StorageRpcCtx` to deduplicate the block resolution + header fetch + revm db creation shared by `call()` and `estimate_gas()`. Resolves headers directly (by hash or by tag→number) to avoid redundant cold storage lookups.
- Replace glob import `use endpoints::*` with explicit imports.
- Remove unused `revm_state()` method from `StorageRpcCtx`.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
…ounds

- Move `resolve_block_id` and `resolve_block_number_or_tag` from free functions in resolve.rs to `resolve_block_id` and `resolve_block_tag` methods on `StorageRpcCtx`. This eliminates repeated `ctx.tags()` and `ctx.cold()` threading at every call site.
- `resolve_block_tag` returns `u64` directly (infallible) instead of `Result`, simplifying callers like `get_logs`.
- Remove `H::RoTx: Send + Sync + 'static` bounds from all endpoint functions, router, and ctx methods — the trait already provides these.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Replace the bare `rpc_gas_cap` constructor parameter with a `StorageRpcConfig` struct that bundles all RPC configuration. This moves `max_blocks_per_filter` from a hard-coded constant to a configurable value, adds `max_logs_per_response` enforcement in `eth_getLogs`, and pre-creates a tracing semaphore for future debug endpoint concurrency limiting. Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
let mut all_logs = Vec::new();

for (chunk_start, chunk_end) in
    BlockRangeInclusiveIter::new(from..=to, MAX_HEADERS_RANGE)
I'm guessing the intent here is to return batches of 1000, but the way BlockRangeInclusiveIter is implemented, we'd need to pass MAX_HEADERS_RANGE - 1 as the step value.
I think it might be worth changing the BlockRangeInclusiveIter to treat the step arg the same way as the std lib - i.e. to make it mean "batch size" and not increment it by 1 internally. That would mean we'd also need to update BlockRangeInclusiveIter::next() to have let end = (start + self.step - 1).min(self.end);
this is basically ported from reth, so i'll take a deeper look here
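A minimal sketch of the std-style semantics the first comment suggests, where `step` means "batch size" and the iterator uses the proposed `(start + self.step - 1).min(self.end)` end calculation. The struct layout and `new` signature are assumptions for illustration, not the crate's actual implementation:

```rust
/// Hypothetical `BlockRangeInclusiveIter` with std-style step semantics:
/// `step` is the batch size, so `new(0..=2499, 1000)` yields batches of
/// exactly 1000 blocks, with the last batch clamped to the range end.
pub struct BlockRangeInclusiveIter {
    start: u64,
    end: u64,
    step: u64,
}

impl BlockRangeInclusiveIter {
    pub fn new(range: std::ops::RangeInclusive<u64>, step: u64) -> Self {
        Self { start: *range.start(), end: *range.end(), step }
    }
}

impl Iterator for BlockRangeInclusiveIter {
    type Item = (u64, u64);

    fn next(&mut self) -> Option<Self::Item> {
        if self.start > self.end || self.step == 0 {
            return None;
        }
        // Each batch covers `step` blocks, clamped to the range end
        // (overflow at u64::MAX is ignored for brevity in this sketch).
        let end = (self.start + self.step - 1).min(self.end);
        let batch = (self.start, end);
        self.start = end + 1;
        Some(batch)
    }
}
```

With these semantics the caller can pass `MAX_HEADERS_RANGE` directly instead of `MAX_HEADERS_RANGE - 1`.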
// ---------------------------------------------------------------------------

pub(crate) async fn not_supported() -> ResponsePayload<(), ()> {
    ResponsePayload::internal_error_message(Cow::Borrowed("Method not supported."))
This should probably be method_not_found() rather than an internal error, so callers don't retry.
Add gas oracle, filters, subscriptions, debug tracing, and signet namespaces to signet-rpc-storage. Port 15 endpoints from the old reth-backed signet-rpc crate to the storage-backed architecture.

New modules:
- gas_oracle: cold-storage gas price oracle (suggest_tip_cap)
- interest/: filter manager, subscription manager, block notifications
- debug/: traceBlockByNumber, traceBlockByHash, traceTransaction
- signet/: sendOrder, callBundle

Wired eth endpoints: gasPrice, maxPriorityFeePerGas, feeHistory, newFilter, newBlockFilter, uninstallFilter, getFilterChanges, getFilterLogs, subscribe, unsubscribe.

Integration tests cover gas/fee queries, filter lifecycle, and debug tracing with noop tracer.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
- Use `SealedHeader` with `.hash()` / `.into_inner()` instead of `header.hash_slow()`
- Use `RecoveredTx` (pre-recovered sender) instead of manual `recover_sender` calls
- Use `ColdReceipt` with per-tx `gas_used` instead of computing deltas from cumulative gas
- Delegate `get_logs` to cold storage instead of manual bloom filtering and block iteration
- Remove `BlockRangeInclusiveIter`, `collect_matching_logs`, `build_receipt_from_parts`, and `recover_sender` helpers
- Simplify `build_rpc_transaction` and `build_receipt` to be infallible

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Addresses PR review feedback:

- resolve_block_id now uses hot HotDbRead::get_header_number instead of cold storage, making it synchronous and avoiding async overhead
- Add resolve_header for direct header fetches from hot storage, eliminating the double header lookup in header_by
- Change not_supported() to return method_not_found() (JSON-RPC -32601) instead of internal_error (-32603)
- Update ResolveError to use Storage/Db variants instead of Cold
- Update tests to write headers to both hot and cold storage

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Swap return types to use RpcBlock/RpcReceipt/RpcTransaction/RpcHeader type aliases, rename tx_by_block_and_index to match rpc naming, fix not_supported error message, and split call into run_call + call. Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Align rpc-storage behavioral semantics with the rpc crate:

- subscribe: require filter for Logs, reject filter for NewHeads via TryFrom impl
- send_raw_transaction: return Result from spawned task instead of swallowing errors
- uninstall_filter/unsubscribe: add HandlerCtx and wrap in spawn_blocking

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
…to 0.5

resolve_evm_block previously mapped Pending to Latest without modifying the header, causing eth_estimateGas (which defaults to Pending) and eth_call with explicit Pending to see the wrong block.number, timestamp, and base_fee. Now synthesizes a next-block header matching signet-rpc's block_cfg() behavior.

Also refactors callBundle to use resolve_evm_block instead of duplicating the pending header logic inline, and passes max_logs to cold.get_logs() for early termination (signet-cold 0.5 API).

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
}

/// Get the finalized block number.
pub fn finalized(&self) -> u64 {
is there a way to set all of these atomically at once
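One way to answer this (a hedged sketch, not the crate's actual implementation — the real `BlockTags` uses atomics, and the follow-up commit adds a `BlockTags::update_all`): keep all three tags behind a single lock so one write replaces them together, rather than three separate atomics that a reader could observe mid-update. Names here are illustrative:

```rust
use std::sync::{Arc, RwLock};

/// All three tags live in one Copy struct...
#[derive(Debug, Clone, Copy, Default)]
struct Tags {
    latest: u64,
    safe: u64,
    finalized: u64,
}

/// ...so a single write-lock acquisition updates them atomically.
#[derive(Debug, Clone, Default)]
struct BlockTags {
    inner: Arc<RwLock<Tags>>,
}

impl BlockTags {
    /// Replace latest/safe/finalized in one atomic step.
    fn update_all(&self, latest: u64, safe: u64, finalized: u64) {
        *self.inner.write().unwrap() = Tags { latest, safe, finalized };
    }

    /// A consistent snapshot of all three tags.
    fn snapshot(&self) -> Tags {
        *self.inner.read().unwrap()
    }
}
```

The trade-off versus three `AtomicU64`s is a lock on the hot read path; an `arc-swap`-style pointer swap would give the same consistency without blocking readers.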
prestwich left a comment
are there ways that we can DRY endpoint logic or definitions?
/// assert_eq!(tags.latest(), 101);
/// ```
#[derive(Debug, Clone)]
pub struct BlockTags {
can we add syncing status information to this struct
crates/rpc-storage/src/lib.rs
#![cfg_attr(docsrs, feature(doc_cfg, doc_auto_cfg))]

mod config;
pub use config::StorageRpcConfig;
add newlines after each pub use
crates/rpc-storage/src/lib.rs
H: signet_hot::HotKv + Send + Sync + 'static,
<H::RoTx as signet_hot::model::HotKvRead>::Error: trevm::revm::database::DBErrorMarker,
{
    ajj::Router::new().merge(eth::eth()).merge(debug::debug()).merge(signet::signet())
these are not appropriately namespaced. we need to nest them with a prefix, as done in signet-rpc
crates/rpc-storage/src/lib.rs
}

/// Instantiate the `signet` API router.
pub fn signet<H>() -> ajj::Router<StorageRpcCtx<H>>
these functions do not need to exist. we only need the main fully built router at crate root
@@ -0,0 +1,85 @@
//! Configuration for the storage-backed RPC server.
make a config/ module folder containing the contents of ctx.rs, gas_oracle.rs, config.rs, and resolve.rs
H: HotKv + Send + Sync + 'static,
<H::RoTx as HotKvRead>::Error: DBErrorMarker,
{
    let timeout = bundle.bundle.timeout.unwrap_or(1000);
the 1000 should be a config parameter
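A sketch of the requested fix, mirroring the `default_bundle_timeout_ms` field the later restructuring commit adds (field and function names here are illustrative assumptions about that config):

```rust
/// Illustrative slice of `StorageRpcConfig` holding the bundle timeout
/// default instead of a magic number at the call site.
#[derive(Debug, Clone)]
struct StorageRpcConfig {
    /// Timeout applied to bundle simulation when the caller omits one.
    default_bundle_timeout_ms: u64,
}

impl Default for StorageRpcConfig {
    fn default() -> Self {
        // Preserves the previous hard-coded behavior as the default.
        Self { default_bundle_timeout_ms: 1000 }
    }
}

/// Caller-provided timeout wins; otherwise fall back to the config default.
fn resolve_timeout(cfg: &StorageRpcConfig, requested: Option<u64>) -> u64 {
    requested.unwrap_or(cfg.default_bundle_timeout_ms)
}
```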
};

let task = |hctx: HandlerCtx| async move {
    hctx.spawn(async move { tx_cache.forward_order(order).await.map_err(|e| e.to_string()) });
is this error swallowed?
/// The caller constructs and sends these via a
/// [`tokio::sync::broadcast::Sender`].
#[derive(Debug, Clone)]
pub struct NewBlockNotification {
how and where are resources managed for this? are these ever cloned?
@@ -0,0 +1,22 @@
//! Filter and subscription management for block/log notifications.
make comprehensive documentation for this module
include details about resource usage expectations, etc
crates/rpc-storage/Cargo.toml
thiserror.workspace = true
serde.workspace = true
dashmap = "6.1.0"
reth-rpc-eth-api.workspace = true
where is this used? how can we remove it
{
    let opts = response_tri!(opts.ok_or(DebugError::InvalidTracerConfig));

    let _permit = ctx.acquire_tracing_permit().await;
where is this permit dropped?
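The usual answer is RAII: the permit is a guard that releases its slot in `Drop`, so binding it to `_permit` holds it until the end of the handler's scope. A minimal self-contained sketch of that pattern (a toy counter-based permit, standing in for something like `tokio::sync::Semaphore`):

```rust
use std::sync::atomic::{AtomicUsize, Ordering};
use std::sync::Arc;

/// Toy permit pool: `acquire` hands out a guard, `Drop` gives the slot back.
struct Permits(Arc<AtomicUsize>);
struct Permit(Arc<AtomicUsize>);

impl Permits {
    fn new(n: usize) -> Self {
        Permits(Arc::new(AtomicUsize::new(n)))
    }

    /// Simplified: assumes a slot is free; a real semaphore would wait.
    fn acquire(&self) -> Permit {
        self.0.fetch_sub(1, Ordering::SeqCst);
        Permit(self.0.clone())
    }

    fn available(&self) -> usize {
        self.0.load(Ordering::SeqCst)
    }
}

impl Drop for Permit {
    fn drop(&mut self) {
        // Releasing happens here — i.e. when `_permit` goes out of scope
        // at the end of the tracing handler.
        self.0.fetch_add(1, Ordering::SeqCst);
    }
}
```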
let t = trevm.fill_tx(tx);
let frame;
(frame, trevm) = response_tri!(crate::debug::tracer::trace(t, &opts, tx_info));
frames.push(TraceResult::Success { result: frame, tx_hash: Some(*tx.tx_hash()) });
we are collecting these frames only to immediately serialize them. is there a way we could feed these to the serializer via an iterator instead?
the same question applies to other instances where we return vecs
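A sketch of the streaming shape this asks for (and the direction the later `LazyReceipts` commit takes): drive an iterator of frames directly into the output writer, so peak memory is one frame rather than a `Vec` of all of them. `trace_frame` and the `u64` inputs are placeholders for the real per-transaction tracer and tx data:

```rust
use std::io::Write;

/// Stand-in for the real per-transaction tracer.
fn trace_frame(tx: u64) -> u64 {
    tx * 2
}

/// Stream each frame into `out` as it is produced, instead of collecting
/// a Vec of frames and serializing it afterwards.
fn write_frames<W: Write>(out: &mut W, txs: impl Iterator<Item = u64>) -> std::io::Result<()> {
    out.write_all(b"[")?;
    for (i, tx) in txs.enumerate() {
        if i > 0 {
            out.write_all(b",")?;
        }
        // Each frame is serialized immediately and then dropped.
        write!(out, "{}", trace_frame(tx))?;
    }
    out.write_all(b"]")
}
```

With serde the same idea is expressed as a wrapper type whose `Serialize` impl walks the raw inputs via `serialize_seq`, which is what lets the handler return lazily-serialized values.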
#[derive(Debug, Clone, thiserror::Error)]
pub enum DebugError {
    /// Cold storage error.
    #[error("cold storage: {0}")]
can we avoid string-typing these?
we should also avoid leaking the full error into the API return. ensure that there is a log of the full error, and the API returns a simple description
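A minimal sketch of the "log the full error, return a simple description" split (the restructuring commit later sanitizes `DebugError`'s Display in this spirit; the variant and message here are illustrative):

```rust
use std::fmt;

#[derive(Debug)]
enum DebugError {
    /// Full backend detail, deliberately kept out of `Display`.
    ColdStorage(String),
}

impl fmt::Display for DebugError {
    fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
        match self {
            // Clients only ever see this generic description.
            DebugError::ColdStorage(_) => write!(f, "cold storage unavailable"),
        }
    }
}

/// Log the detail server-side, return the sanitized message to the caller.
fn handle(err: DebugError) -> String {
    if let DebugError::ColdStorage(detail) = &err {
        // Stand-in for `tracing::warn!` at the error call site.
        eprintln!("cold storage error: {detail}");
    }
    err.to_string()
}
```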
prestwich left a comment
add many more comments throughout the endpoint code, explaining what is happening and why
- Restructure into config/ module folder (ctx, resolve, gas_oracle, rpc_config)
- Remove standalone pub fn wrappers, use .nest() for namespace prefixing
- Remove reth-rpc-eth-api dependency, use alloy Network trait type aliases
- Change not_supported to method_not_found, add concrete SignetError variants
- Sanitize DebugError Display output, add tracing::warn at error call sites
- Add fire-and-forget error logging for send_raw_transaction and send_order
- Add default_bundle_timeout_ms config, SyncStatus struct, BlockTags::update_all
- Add comprehensive interest/ module docs, endpoint doc comments, permit docs
- Extract hot_reader_at_block helper for account state endpoints
- Add FUTURE-EVALUATION.md documenting Vec collection constraint

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Bump ajj from 0.3.4 to 0.5.0 and adapt all call sites to the new API:

- ResponsePayload::Success/Failure replaced with ResponsePayload(Ok/Err)
- Subscription task rewritten to use ctx.notify() instead of raw channel access via the removed notifications() method

Leverage relaxed RpcSend bounds for lazy serialization:

- LazyReceipts: serializes receipts inline from raw ColdReceipt + RecoveredTx data without intermediate Vec<RpcReceipt>
- In-housed BlockTransactions and RpcBlock: serialize full transactions or hashes lazily from Vec<RecoveredTx>, hardcode empty uncles

Refactor build_receipt and build_rpc_transaction to take references, cloning only where needed (logs Vec, RecoveredTx for into_inner).

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Apply the same ajj 0.5.0 migration to the canonical rpc crate:

- ResponsePayload::Success/Failure replaced with ResponsePayload(Ok/Err)
- Subscription task rewritten to use ctx.notify() instead of the removed notifications() channel accessor

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
…ruct

Add SubscriptionNotification and SubscriptionParams structs with derived Serialize impls, replacing hand-crafted json! macro usage in both rpc and rpc-storage subscription tasks.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
…rChanges

Replace `cold.get_logs()` with `cold.stream_logs()` in both endpoints to gain deadline enforcement (configurable, default 10s), dedicated concurrency control (separate semaphore, max 8 concurrent streams), and reorg detection via anchor hash.

Bump signet-storage crates to 0.6.0.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
/// tips, and returns the value at `gas_oracle_percentile`.
///
/// Returns `U256::ZERO` if no transactions are found in the range.
pub(crate) async fn suggest_tip_cap(
where is this used? these reads may be expensive. do we cache these?
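One common answer to the caching question (a hedged sketch only — the crate may or may not adopt this): memoize the oracle result per head block, so repeated `eth_gasPrice` calls at the same height reuse one expensive cold-storage scan. All names here are hypothetical:

```rust
use std::sync::Mutex;

/// Caches the last computed tip cap, keyed by the head block it was
/// computed at. A new head invalidates the entry.
#[derive(Default)]
struct TipCapCache {
    last: Mutex<Option<(u64, u128)>>, // (head block, cached tip cap)
}

impl TipCapCache {
    fn get_or_compute(&self, head: u64, compute: impl FnOnce() -> u128) -> u128 {
        let mut last = self.last.lock().unwrap();
        if let Some((cached_head, tip)) = *last {
            if cached_head == head {
                // Same head: skip the expensive cold-storage read entirely.
                return tip;
            }
        }
        let tip = compute();
        *last = Some((head, tip));
        tip
    }
}
```

This is the same shape reth's gas oracle uses (cache keyed on the latest block hash).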
    DebugError::BlockNotFound(id)
}));

let (header, txs) = response_tri!(
could we use the resolve context method here?
// State BEFORE this block.
let db =
    response_tri!(ctx.revm_state_at_height(block_num.saturating_sub(1)).map_err(|e| {
could this use the resolve method as well?
// Not Supported
// ---------------------------------------------------------------------------

pub(crate) async fn not_supported() -> ResponsePayload<(), ()> {
we can remove this, as this is default behavior when a method is unregistered.
// ---
// Unsupported methods
// ---
.route("protocolVersion", not_supported)
remove all non-supported. leave a comment with a list of non-supported methods. move supported methods up
## Supported Methods

- Block queries: `eth_blockNumber`, `eth_getBlockByHash`, `eth_getBlockByNumber`, etc.
update these. instead list unsupported methods and why
let guard = span.enter();
// This future checks if the notification buffer is non-empty and
// waits for the sender to have some capacity before sending.
let permit_fut = async {
we're going to add an api to ajj to get permits. this code should be restored to prior state then modified to use SubscriptionNotification
/// JSON-RPC subscription notification envelope.
#[derive(serde::Serialize)]
struct SubscriptionNotification<'a> {
    jsonrpc: &'static str,
remove this. it's always known, and never needs to be a struct prop.
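A sketch of the fix: drop the constant from the struct and emit `"jsonrpc":"2.0"` at serialization time. This toy version hand-writes the JSON to stay self-contained; with serde the same effect comes from a `serialize_with` function or a manual `Serialize` impl. Field names and the `&str` params stand-in are illustrative:

```rust
/// Envelope without the protocol constant as a field.
struct SubscriptionNotification<'a> {
    method: &'static str,
    params: &'a str, // stand-in for the real serialized params payload
}

impl SubscriptionNotification<'_> {
    fn to_json(&self) -> String {
        // The version is a protocol constant, so it is written inline here
        // rather than carried as a struct property.
        format!(
            r#"{{"jsonrpc":"2.0","method":"{}","params":{}}}"#,
            self.method, self.params
        )
    }
}
```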
// Drain one buffered item per iteration, checking for
// cancellation between each send.
if let Some(item) = notif_buffer.pop_front() {
    let notification = SubscriptionNotification {
we're going to re-add permitting to ajj. this code should be revised to the previous permit-acquisition behavior, modified to use SubscriptionNotification
Move response/serialization types (EmptyArray, BlockTransactions, RpcBlock, LazyReceipts) and type aliases from eth/endpoints.rs into eth/types.rs, and parameter types (TraceBlockParams, TraceTransactionParams) from debug/endpoints.rs into debug/types.rs. Consolidate duplicate RpcTransaction/RpcReceipt aliases from eth/helpers.rs into eth/types.rs. Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Replace the one-notification-per-iteration drain pattern with permit_many batching for better throughput and fairness. Bumps ajj to 0.6.2 for permit_many/notification_capacity support. Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
/// The resolved block header.
pub header: alloy::consensus::Header,
/// The revm database at the resolved height.
pub db: trevm::revm::database::State<Db>,
why state? do we need that wrapper for anything?
…tracking

Move NodeStatus from signet-node-types to signet-node, move ServeConfig/RpcServerGuard from signet-rpc to signet-node, and remove the journal hash tracking system (JournalHashes table, journal module, GENESIS_JOURNAL_HASH, and all related parameters/methods).

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
prestwich left a comment
[Claude Code]
PR Review: signet-rpc-storage crate
The crate is well-organized and test coverage is solid. However, there are two likely bugs (leaked thread in SubCleanerTask, unsubscribe is a no-op), a conformance gap (eth_syncing), and a design issue with spawn_blocking being used for async work. See inline comments for details.
Design-level observations (not tied to a specific line):
StorageRpcConfig should have a builder — Per project conventions: "A struct is considered complex enough to warrant a builder if it has more than 4 fields." StorageRpcConfig has 11 fields.
Duplicated Either enum — filters.rs and subs.rs both define an Either enum with nearly identical semantics but different variants. Should be unified or renamed (FilterItem / SubscriptionItem).
Duplicated buffer/output types — FilterOutput and SubscriptionBuffer are structurally identical VecDeque-backed enums with Log/Block variants, len(), is_empty(), extend(), pop_front(). This could be a single generic type.
Error handling via String is inconsistent — Most endpoints return Result<T, String>, losing structured error info, while run_call/estimate_gas/create_access_list return ResponsePayload<T, CallErrorData>, and debug endpoints return ResponsePayload<T, DebugError>. The crate defines three good error types (EthError, DebugError, SignetError) but most endpoints discard them with .map_err(|e| e.to_string()).
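The buffer-unification observation above can be sketched concretely: `FilterOutput` and `SubscriptionBuffer` collapse into one generic `VecDeque`-backed type, with the Log/Block distinction carried by the type parameter instead of duplicated enums. A minimal illustration (names assumed):

```rust
use std::collections::VecDeque;

/// One generic buffer replacing the two structurally identical
/// VecDeque-backed enums.
#[derive(Debug, Default)]
struct ItemBuffer<T> {
    items: VecDeque<T>,
}

impl<T> ItemBuffer<T> {
    fn len(&self) -> usize {
        self.items.len()
    }

    fn is_empty(&self) -> bool {
        self.items.is_empty()
    }

    fn extend(&mut self, iter: impl IntoIterator<Item = T>) {
        self.items.extend(iter);
    }

    fn pop_front(&mut self) -> Option<T> {
        self.items.pop_front()
    }
}
```

Filters would then hold an `ItemBuffer<Log>` or `ItemBuffer<Block>`, and any remaining Either-style enum only needs to exist at the serialization boundary.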
}
}
});
}
[Claude Code]
Bug: SubCleanerTask::spawn never terminates when SubscriptionManager is dropped.
Compare with the correct pattern in FilterCleanTask::spawn (filters.rs:288-299), which checks Weak::upgrade() and breaks when it returns None. This version skips the retain but loops forever, leaking an OS thread per SubscriptionManager drop.
fn spawn(self) {
    std::thread::spawn(move || {
        loop {
            std::thread::sleep(self.interval);
            if let Some(inner) = self.inner.upgrade() {
                inner.tasks.retain(|_, task| !task.is_cancelled());
            }
            // missing: else { break; }
        }
    });
}

debug!(%id, "registered new subscription");

Some(id)
}
[Claude Code]
Bug: subscribe() never inserts the CancellationToken into self.tasks.
The token is created and cloned into the SubscriptionTask, but never stored in the DashMap. This means unsubscribe() will always return false (the remove finds nothing), and the SubCleanerTask has nothing to clean. The subscription can never be cancelled by the server.
// Should be:
self.tasks.insert(id, token.clone());

// Unsupported methods
// ---
.route("protocolVersion", not_supported)
.route("syncing", not_supported)
[Claude Code]
eth_syncing should return false (when synced) or a sync-status object (when syncing), not method_not_found. You already have BlockTags::sync_status() and SyncStatus wired up for exactly this. Any client that calls eth_syncing — virtually all of them do — gets a confusing error instead of the expected false.
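A sketch of the conformant result shape: the JSON literal `false` when synced, otherwise a sync-status object built from the tags the crate already tracks. This toy version implements `Display` to stay dependency-free; the real endpoint would derive `Serialize` on the crate's `SyncStatus`. Field names follow the standard `eth_syncing` response; everything else is illustrative:

```rust
use std::fmt;

/// `eth_syncing` result: `false` once synced, a status object otherwise.
enum SyncingResponse {
    NotSyncing,
    Syncing { starting_block: u64, current_block: u64, highest_block: u64 },
}

impl fmt::Display for SyncingResponse {
    fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
        match self {
            SyncingResponse::NotSyncing => write!(f, "false"),
            SyncingResponse::Syncing { starting_block, current_block, highest_block } => write!(
                f,
                // Block numbers are hex quantities per the JSON-RPC spec.
                r#"{{"startingBlock":"{starting_block:#x}","currentBlock":"{current_block:#x}","highestBlock":"{highest_block:#x}"}}"#
            ),
        }
    }
}
```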
Ok(tip + U256::from(base_fee))
};

await_handler!(@option hctx.spawn_blocking(task))
[Claude Code]
gas_price, max_priority_fee_per_gas, and fee_history all use hctx.spawn_blocking for tasks that contain .await calls (cold storage reads). spawn_blocking is for CPU-bound work — using it for IO-bound async work means the task runs on a blocking thread pool that doesn't poll futures efficiently. These should use hctx.spawn or hctx.spawn_with_ctx.
if cumulative_gas >= threshold {
    break;
}
tx_idx += 1;
[Claude Code]
cumulative_gas and tx_idx are never reset between percentile iterations. This is correct for sorted-ascending traversal, but once tx_idx increments past tx_gas_and_tip.len(), the while loop exits immediately for all subsequent percentiles, and tx_gas_and_tip.get(tx_idx) returns None → unwrap_or_default() → zero. Higher percentiles silently return 0 for blocks with few transactions.
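One way to avoid the stale-cursor failure mode entirely (a hedged restructuring sketch, not the crate's code): precompute prefix sums of gas, then each percentile is an independent lookup that clamps to the last transaction instead of falling through to zero. Signatures and names are illustrative:

```rust
/// Reward-per-percentile for one block. `tx_gas_and_tip` is (gas used, tip),
/// sorted ascending by tip, as in the fee-history oracle.
fn rewards_at_percentiles(
    tx_gas_and_tip: &[(u64, u128)],
    block_gas_used: u64,
    percentiles: &[f64],
) -> Vec<u128> {
    // Prefix sums of gas, so each percentile lookup is a simple scan that
    // cannot double-count or run a shared cursor off the end.
    let mut cumulative = Vec::with_capacity(tx_gas_and_tip.len());
    let mut acc = 0u64;
    for (gas, _) in tx_gas_and_tip {
        acc += gas;
        cumulative.push(acc);
    }
    percentiles
        .iter()
        .map(|p| {
            if tx_gas_and_tip.is_empty() {
                return 0;
            }
            let threshold = (block_gas_used as f64 * p / 100.0) as u64;
            // First tx whose cumulative gas reaches the threshold; clamp to
            // the last tx so high percentiles never silently return zero.
            let idx = cumulative
                .iter()
                .position(|&c| c >= threshold)
                .unwrap_or(tx_gas_and_tip.len() - 1);
            tx_gas_and_tip[idx].1
        })
        .collect()
}
```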
/// Either type for filter outputs.
#[derive(Debug, Clone, PartialEq, Eq, serde::Serialize)]
#[serde(untagged)]
#[allow(dead_code)]
[Claude Code]
Nit: #[allow(dead_code)] on Either, FilterOutput impls, and is_filter method. If they're not used, remove them. If they're part of the intended API, use them. allow(dead_code) on internal types is a code smell suggesting premature API surface.
)
.map_err(|e| e.to_string())?;

let tx = tx.ok_or("receipt found but transaction missing")?.into_inner();
[Claude Code]
Nit: .ok_or("receipt found but transaction missing") uses a string literal error. This should be a proper error variant on EthError.
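A sketch of the suggested fix: a dedicated variant keeps the same message but makes the error matchable and structured. The variant name is an assumption; the crate's actual `EthError` has other variants and derives not shown here:

```rust
use std::fmt;

#[derive(Debug, PartialEq)]
enum EthError {
    /// A receipt was found in cold storage but its transaction was not.
    ReceiptWithoutTransaction,
}

impl fmt::Display for EthError {
    fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
        match self {
            EthError::ReceiptWithoutTransaction => {
                write!(f, "receipt found but transaction missing")
            }
        }
    }
}

/// `tx.ok_or(...)` call site with the variant instead of a string literal.
fn require_tx<T>(tx: Option<T>) -> Result<T, EthError> {
    tx.ok_or(EthError::ReceiptWithoutTransaction)
}
```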
let base_fee = header.base_fee_per_gas.unwrap_or_default();
let block_num = start + offset as u64;

let txs = cold.get_transactions_in_block(block_num).await?;
[Claude Code]
Nit: This iterates headers sequentially, making one get_transactions_in_block call per block. For the default 20-block lookback, that's 20 sequential cold-storage reads. These could be parallelized with futures::join_all or similar.
alloy.workspace = true
ajj.workspace = true
tokio.workspace = true
tokio-stream = "0.1"
[Claude Code]
Nit: tokio-stream and tokio-util specify exact versions ("0.1", "0.7") rather than using workspace dependencies. Inconsistent with the rest of the manifest.
);
}

let base_fee_per_blob_gas = vec![0; base_fee_per_gas.len()];
[Claude Code]
Nit: base_fee_per_blob_gas and blob_gas_used_ratio are allocated as vec![0; ...] even though Signet has no blob transactions. Could be Vec::new() or empty if FeeHistory permits.
Summary

- `signet-rpc-storage` crate providing Ethereum JSON-RPC endpoints backed by `signet-storage` (hot + cold), independent of reth's `FullNodeComponents`
- `StorageRpcCtx` wraps `UnifiedStorage`, `BlockTags`, system constants, optional `TxCache`, and RPC gas cap
- `BlockTags` provides atomic latest/safe/finalized block tracking with sync/async resolution
- Simple queries: `blockNumber`, `chainId`
- Block queries: `getBlockByHash/Number`, `getBlockTransactionCount*`, `getBlockReceipts`, `getBlockHeader*`
- Transaction queries: `getTransactionByHash`, `getRawTransactionByHash`, `*ByBlockAndIndex`, `getTransactionReceipt`
- Account state: `getBalance`, `getStorageAt`, `getTransactionCount`, `getCode`
- EVM execution: `call`, `estimateGas` via `signet-evm`/trevm
- Transaction submission: `sendRawTransaction` via `TxCache`
- Logs: `getLogs` with bloom filter matching
- `signet-storage` 0.3.0 from crates.io

Test plan

- `cargo clippy -p signet-rpc-storage --all-features --all-targets`: clean
- `cargo clippy -p signet-rpc-storage --no-default-features --all-targets`: clean
- `cargo +nightly fmt`: clean
- `cargo t -p signet-rpc-storage`: 33 tests pass
- `cargo doc -p signet-rpc-storage`: docs build successfully

🤖 Generated with Claude Code