
feat(sequencer): catchup from base #3057

Open

julienrbrt wants to merge 49 commits into main from julien/catchup-base

Conversation

@julienrbrt
Member

@julienrbrt julienrbrt commented Feb 9, 2026

Closes: #3016

Summary by CodeRabbit

  • New Features

    • Sequencer catch-up mode to replay DA forced-inclusion transactions after downtime; based-sequencer advances execution finalization during catch-up and after new blocks.
    • Forced-inclusion verification now accepts explicit DA heights to improve catch-up correctness.
  • Bug Fixes

    • DA-only missing blocks now record timestamps for later timestamp-based updates during sync.
  • Documentation

    • Added detailed catch-up mode spec, examples, security notes, and updated verification flow.
  • Configuration

    • Improved decoding for node role flags (aggregator / based_sequencer / light).
  • Tests

    • Extensive unit and end-to-end coverage for catch-up, forced inclusion, and based-sequencer scenarios.

@github-actions
Contributor

github-actions bot commented Feb 9, 2026

The latest Buf updates on your PR. Results from workflow CI / buf-check (pull_request).

Build | Format | Lint | Breaking | Updated (UTC)
✅ passed | ⏩ skipped | ✅ passed | ✅ passed | Feb 25, 2026, 10:10 AM

@claude
Contributor

claude bot commented Feb 9, 2026

Claude finished @julienrbrt's task in 3m 57s


PR Review: feat(sequencer): catchup from base

  • Gathering context and reading changed files
  • Analyzing core catch-up logic (pkg/sequencers/single/sequencer.go)
  • Analyzing syncer changes (block/internal/syncing/syncer.go)
  • Reviewing DA interface additions
  • Reviewing executor changes
  • Reviewing tests
  • Compiling final review

Overall Assessment

This PR tackles a real and important problem — sequencer consistency after downtime — and the design is sound. The epoch-based catch-up state machine is well-structured, the tests are thorough, and the syncer's epoch-start-alignment fix is critical and correct. Below are findings ranging from a blocking bug to minor style notes.


Critical

1. updateCatchUpState permanently disables catch-up detection on transient DA error

File: pkg/sequencers/single/sequencer.go:483-535

func (c *Sequencer) updateCatchUpState(ctx context.Context) {
    if c.catchUpState.Load() != catchUpUnchecked {
        return
    }
    // Optimistically mark as done; overridden to catchUpInProgress below if
    // catch-up is actually needed.
    c.catchUpState.Store(catchUpDone)   // ← stored BEFORE calling GetLatestDAHeight

    epochSize := c.genesis.DAEpochForcedInclusion
    if epochSize == 0 {
        return
    }
    ...
    latestDAHeight, err := c.daClient.GetLatestDAHeight(ctx)
    if err != nil {
        c.logger.Warn().Err(err).
            Msg("failed to get latest DA height for catch-up detection, skipping check")
        return   // ← but catchUpDone is ALREADY stored; catch-up is now permanently skipped
    }

Because catchUpDone is stored before the network call, any transient DA error (timeout, connection reset, etc.) at startup permanently bypasses catch-up detection for the sequencer's entire lifetime. The function's early-return guard (if c.catchUpState.Load() != catchUpUnchecked) then prevents any re-check. A sequencer that was offline for many epochs may resume producing normal blocks instead of catch-up blocks, breaking consistency with base-sequencing nodes.

The fix is to store catchUpDone only after GetLatestDAHeight succeeds and you've confirmed no catch-up is needed (or on the epochSize == 0 fast path). On error, leave catchUpState as catchUpUnchecked so the next epoch fetch can retry.
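
A minimal sketch of that ordering, reusing the names from the snippet above (surrounding logic elided):

latestDAHeight, err := c.daClient.GetLatestDAHeight(ctx)
if err != nil {
    // State remains catchUpUnchecked, so the next epoch fetch retries.
    c.logger.Warn().Err(err).
        Msg("failed to get latest DA height for catch-up detection, will retry")
    return
}
if latestDAHeight <= c.checkpoint.DAHeight {
    c.catchUpState.Store(catchUpDone) // confirmed at head, safe to mark done
    return
}
// Otherwise compute missedEpochs and store catchUpInProgress as before.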


High

2. currentDAEndTime not guarded against backward DA timestamps

File: pkg/sequencers/single/sequencer.go:447-449 vs. pkg/sequencers/based/sequencer.go:113-115

The BasedSequencer guards against DA clock drift:

if daEndTime.After(s.currentDAEndTime) {
    s.currentDAEndTime = daEndTime
}

But the single Sequencer unconditionally overwrites:

if !forcedTxsEvent.Timestamp.IsZero() {
    c.currentDAEndTime = forcedTxsEvent.Timestamp.UTC()
}

If a later DA epoch carries an earlier timestamp (a realistic scenario with DA chain reorgs or light node clock drift), the block timestamp calculated in GetNextBatch will go backward between epochs during catch-up. Most EVM execution layers reject blocks with non-monotonic timestamps, which would halt catch-up and the sequencer. Apply the same monotonicity guard used by BasedSequencer.
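
Applied to the single sequencer, the guard would look roughly like this (a sketch reusing the fields above, not the shipped code):

if !forcedTxsEvent.Timestamp.IsZero() {
    ts := forcedTxsEvent.Timestamp.UTC()
    // Never let the DA end time move backward across epochs.
    if ts.After(c.currentDAEndTime) {
        c.currentDAEndTime = ts
    }
}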

3. Syncer's VerifyForcedInclusionTxs uses currentState.DAHeight indirectly despite the epoch-start fix

File: block/internal/syncing/syncer.go:755-756

if event.Source == common.SourceDA {
    if err := s.VerifyForcedInclusionTxs(ctx, currentState.DAHeight, data); err != nil {

VerifyForcedInclusionTxs receives currentState.DAHeight, which is the state before applying this block. After the fix in commit 5e392ba, newState.DAHeight (the epoch-start of event.DaHeight) is computed correctly but only written to the store after verification. This means verification happens against the previous state's DA height, not the aligned epoch start — which could still misalign for the first catch-up block of a new epoch. Consider whether VerifyForcedInclusionTxs should be called after updating newState.DAHeight (or passed the epoch-start value directly).
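
A hedged sketch of the second option, assuming types.CalculateEpochBoundaries (referenced elsewhere in this thread) returns the epoch start and end for a given DA height:

// Verify against the epoch start of the block's own DA height rather than
// the pre-block state height.
epochStart, _ := types.CalculateEpochBoundaries(event.DaHeight, s.genesis.DAStartHeight, epochSize)
if event.Source == common.SourceDA {
    if err := s.VerifyForcedInclusionTxs(ctx, epochStart, data); err != nil {
        return err
    }
}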


Medium

4. fiRetriever bootstrap-reinitialization on each call before checkpoint is set

File: pkg/sequencers/single/sequencer.go:192-203

if daHeight > 0 && c.checkpoint.DAHeight == 0 {
    c.checkpoint = &seqcommon.Checkpoint{DAHeight: daHeight, TxIndex: 0}
    if c.fiRetriever != nil {
        c.fiRetriever.Stop()    // stops background prefetcher goroutine
    }
    c.fiRetriever = block.NewForcedInclusionRetriever(...) // starts a new one
}

This path fires correctly only once (after the first call sets checkpoint.DAHeight to non-zero). However, there's no guard ensuring checkpoint.DAHeight persists across GetNextBatch calls if fetchNextDAEpoch returns (0, nil) (e.g., ErrHeightFromFuture on first call). In that case cachedForcedInclusionTxs stays nil, the checkpoint update block at line 315 doesn't trigger (since daHeight == 0), and checkpoint.DAHeight remains at the daHeight set in the bootstrap path — so this won't actually re-fire. But it is fragile and worth adding a clear comment explaining why the checkpoint.DAHeight == 0 check is safe here.
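
Such a comment might read (wording hypothetical):

// checkpoint.DAHeight == 0 only before the first successful epoch fetch:
// fetchNextDAEpoch either returns a non-zero daHeight (which sets the
// checkpoint here) or (0, nil), in which case this path is a no-op and is
// re-evaluated on the next call. It cannot re-fire once the checkpoint is set.
if daHeight > 0 && c.checkpoint.DAHeight == 0 {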

5. executor.go: SetFinal uses e.ctx rather than the caller's ctx

File: block/internal/executing/executor.go:639

if err := e.exec.SetFinal(e.ctx, newHeight); err != nil {

The surrounding broadcast calls use e.ctx (intentional for lifecycle management), but SetFinal is a synchronous state-advancing call that should respect the caller's deadline. Since ProduceBlock is always called with e.ctx in practice this is safe, but it prevents future callers from injecting timeout context. Minor but cosmetically inconsistent with the function signature's ctx parameter.


Low / Nitpick

6. Missing tracing tests for GetLatestDAHeight

File: block/internal/da/tracing_test.go

Every other traced method (Submit, Retrieve, Get) has both success and error test cases. GetLatestDAHeight has neither. The tracing adds a da.latest_height span attribute that is untested. Should add two tests mirroring the existing pattern.

7. forcedInclusionRetriever log message double period

File: block/internal/da/forced_inclusion_retriever.go:232

Msg("Failed to retrieve DA epoch.. retrying next iteration")

The double period in "epoch.." should be a single period (or a full ellipsis "...").

8. NewForcedInclusionRetriever parameter order inconsistency

block/internal/da/forced_inclusion_retriever.go:44 takes (client, logger, cfg, ...) but block/public.go:86 exposes (client, cfg, logger, ...). The wrapper correctly adapts the call, so this is not a bug, but it is a minor footgun for someone reading both files side-by-side.

9. updateCatchUpState: hardcoded 1 for epoch comparison is opaque

File: pkg/sequencers/single/sequencer.go:516

if missedEpochs <= 1 {

A named constant or at least a comment like // within one epoch of head — normal operation would make the policy explicit. The number 1 here is a design choice, not an obvious constant.
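
For example (constant name hypothetical):

// maxEpochLag is how many epochs the sequencer may trail the DA head and
// still be considered caught up; beyond this it enters catch-up mode.
const maxEpochLag = 1

if missedEpochs <= maxEpochLag {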


What's Well Done

  • The epoch-start alignment in the syncer (commit 5e392ba) correctly ensures all blocks from the same DA epoch verify against the same epoch boundary — fixing the earlier one-per-block advancement bug.
  • The checkpoint persistence design handles crash-mid-epoch correctly by tracking TxIndex separately from DAHeight (struct sketched after this list).
  • The catch-up detection being a one-shot check (with exit via ErrHeightFromFuture) is the right trade-off: cheap and correct for the steady state.
  • The test coverage for the sequencer catch-up scenarios is thorough and covers the key edge cases (detection, mempool skipping, DA timestamps, exit conditions).
  • The ADR update (adr-019) is clear and the example walks through the recovery flow well.
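A sketch of that checkpoint shape, with field names taken from the seqcommon.Checkpoint literal in finding 4 (comments are interpretive):

type Checkpoint struct {
    DAHeight uint64 // last DA height consumed for forced inclusion
    TxIndex  uint64 // position within that epoch, so a crash mid-epoch resumes in place
}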

@gemini-code-assist
Contributor

Summary of Changes

Hello @julienrbrt, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request implements a critical "catch-up" feature for the sequencer, designed to enhance its resilience and consistency following periods of inactivity or downtime. By intelligently replaying missed Data Availability (DA) epochs, the sequencer can resynchronize its state, ensuring that all forced inclusion transactions are processed in the correct order and with accurate timestamps, thereby maintaining a consistent view of the blockchain's history, even when operating in a decentralized environment. This mechanism allows the sequencer to seamlessly transition back to normal operation without compromising data integrity.

Highlights

  • Sequencer Catch-up Mechanism: Introduced a new mechanism allowing the sequencer to "catch up" on missed Data Availability (DA) epochs after extended downtime.
  • Forced Inclusion Priority: During catch-up, the sequencer temporarily processes only forced inclusion transactions, ensuring consistency with base sequencing nodes and temporarily skipping mempool transactions.
  • DA Height Synchronization: Modified syncer.go to incrementally advance the DAHeight by one epoch during catch-up, preventing issues with forced inclusion transaction verification.
  • Timestamp Alignment: Block timestamps generated during catch-up are now aligned with the DA epoch's end timestamp for historical accuracy.
  • Comprehensive Testing: Added extensive unit tests covering various catch-up scenarios, including detection, mempool skipping, timestamp usage, exit conditions, and multi-epoch replay.


Changelog
  • block/internal/syncing/syncer.go
    • Updated the logic for DAHeight updates to handle large discrepancies between event.DaHeight and newState.DAHeight. When a significant gap (more than one epoch) is detected, newState.DAHeight is now advanced by exactly one epoch per block, rather than jumping directly to event.DaHeight. This ensures correct verification of forced inclusion transactions during sequencer catch-up.
    • Added detailed comments explaining the new DAHeight update strategy and its importance for catch-up blocks and forced inclusion verification.
  • pkg/sequencers/single/sequencer.go
    • Added catchingUp (boolean) and currentDAEndTime (time.Time) to the Sequencer struct to manage the catch-up state and store the DA epoch's end timestamp.
    • Modified GetNextBatch to conditionally skip fetching transactions from the mempool when the sequencer is in catchingUp mode. This ensures that only forced inclusion transactions are included in batches during catch-up.
    • Updated GetNextBatch to use sequencer.currentDAEndTime as the block timestamp when catchingUp is true, aligning block timestamps with the historical DA epoch.
    • Introduced a public IsCatchingUp() method to query the sequencer's current catch-up status.
    • Modified fetchNextDAEpoch to store the forcedTxsEvent.Timestamp in currentDAEndTime and to call a new updateCatchUpState method to manage the catchingUp flag. It also now exits catch-up mode if StatusHeightFromFuture is returned by the DA client.
    • Implemented a new private method updateCatchUpState that determines whether the sequencer should be in catch-up mode based on the timestamp of the fetched DA epoch. It enters catch-up if the epoch timestamp is significantly in the past (more than one epoch's wall-clock duration) and exits if it's recent.
  • pkg/sequencers/single/sequencer_test.go
    • Added a comprehensive suite of tests (TestSequencer_CatchUp_...) to validate the new catch-up functionality. These tests cover:
    • Detection of old epochs and entering catch-up mode.
    • Skipping mempool transactions during catch-up.
    • Using DA epoch timestamps for blocks during catch-up.
    • Exiting catch-up mode when the DA head is reached (either a recent epoch or HeightFromFuture).
    • Scenarios where catch-up is not triggered (recent epoch, no forced inclusion configured).
    • Multi-epoch replay scenarios.
    • Correct advancement of the internal checkpoint (DAHeight) during catch-up.
Activity
  • The pull request addresses and closes issue https://github.com/evstack/ev-node/issues/3016.
  • The author indicates that this work builds upon a previous attempt by "Opus", aiming for improvement.

@codecov

codecov bot commented Feb 9, 2026

Codecov Report

❌ Patch coverage is 65.38462% with 45 lines in your changes missing coverage. Please review.
✅ Project coverage is 60.98%. Comparing base (67e18bd) to head (9246d4a).

Files with missing lines | Patch % | Lines
block/internal/syncing/syncer.go | 44.00% | 13 Missing and 1 partial ⚠️
block/internal/da/tracing.go | 0.00% | 10 Missing ⚠️
block/internal/da/client.go | 0.00% | 9 Missing ⚠️
block/internal/executing/executor.go | 0.00% | 4 Missing and 2 partials ⚠️
pkg/sequencers/single/sequencer.go | 91.04% | 4 Missing and 2 partials ⚠️
Additional details and impacted files
@@            Coverage Diff             @@
##             main    #3057      +/-   ##
==========================================
+ Coverage   60.91%   60.98%   +0.06%     
==========================================
  Files         113      113              
  Lines       11617    11724     +107     
==========================================
+ Hits         7077     7150      +73     
- Misses       3742     3772      +30     
- Partials      798      802       +4     
Flag | Coverage | Δ
combined | 60.98% <65.38%> | +0.06% ⬆️


Contributor

@gemini-code-assist gemini-code-assist bot left a comment


Code Review

This pull request introduces a robust catch-up mechanism for the sequencer, designed to handle restarts after extended downtime. While the overall approach for consuming and verifying catch-up blocks in the syncer, including the incremental advancement of DAHeight, is well-implemented and tested, the implementation of catch-up mode in the single sequencer has significant flaws. Specifically, it produces non-monotonic block timestamps when multiple blocks are generated for a single DA epoch or when empty epochs are encountered, which will likely cause the execution layer to reject blocks and halt the chain. Additionally, there is a data race on the new catch-up state fields due to a lack of synchronization primitives. Minor suggestions for code clarity and testing experience were also noted.
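
If the flagged fields are plain struct members as this review suggests, a minimal synchronization sketch (field names from the Gemini changelog earlier in this thread; not the shipped fix) could be:

type Sequencer struct {
    mu               sync.Mutex
    catchingUp       bool
    currentDAEndTime time.Time
    // ... other fields elided
}

// IsCatchingUp reads the flag under the lock, avoiding the data race.
func (s *Sequencer) IsCatchingUp() bool {
    s.mu.Lock()
    defer s.mu.Unlock()
    return s.catchingUp
}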

@tac0turtle
Contributor

I believe this PR justifies an ADR.

Contributor

@alpe alpe left a comment


Nice work

@julienrbrt julienrbrt marked this pull request as ready for review February 23, 2026 20:38
@coderabbitai
Contributor

coderabbitai bot commented Feb 23, 2026

Note

Reviews paused

It looks like this branch is under active development. To avoid overwhelming you with review comments due to an influx of new commits, CodeRabbit has automatically paused this review. You can configure this behavior by changing the reviews.auto_review.auto_pause_after_reviewed_commits setting.

Use the following commands to manage reviews:

  • @coderabbitai resume to resume automatic reviews.
  • @coderabbitai review to trigger a single review.

📝 Walkthrough

Sequencers gain a catch‑up mode: on restart they query the DA head, detect multi‑epoch gaps, replay missed epochs using only DA forced‑inclusion transactions with DA‑derived timestamps, advance execution finalized heights when running as a BasedSequencer, and resume normal sequencing once caught up.

Changes

Cohort / File(s) / Summary:

  • DA Interface & Impl (block/internal/da/interface.go, block/internal/da/client.go, block/internal/da/tracing.go, block/internal/da/forced_inclusion_retriever.go): Add GetLatestDAHeight to the DA client interface and implementations (with tracing); record timestamp-only BlockData for DA StatusNotFound entries during forced-inclusion retrieval. (Method shape sketched after this table.)
  • DA Mocks & Test Doubles (test/mocks/da.go, test/testda/dummy.go, block/internal/da/tracing_test.go, apps/evm/server/force_inclusion_test.go): Generated/updated mock scaffolding and test doubles implementing GetLatestDAHeight to satisfy the new interface in tests.
  • Sequencer Catch-Up Logic (pkg/sequencers/single/sequencer.go): Introduce catch-up state machine, fields, and helpers; detect multi-epoch gaps via GetLatestDAHeight; skip mempool during catch-up; compute DA-based timestamps; reinitialize the forced-inclusion retriever on bootstrap; extensive internal logging.
  • Sequencer Tests (pkg/sequencers/single/sequencer_test.go): Large suite of new/updated tests exercising catch-up detection, mempool skipping, DA timestamps, epoch replay, exit conditions, and monotonic timestamps; switched test logging to a test writer and added helpers.
  • Syncer: Verification & Height Handling (block/internal/syncing/block_syncer.go, block/internal/syncing/syncer.go, block/internal/syncing/tracing.go, block/internal/syncing/syncer_forced_inclusion_test.go, block/internal/syncing/tracing_test.go): Change VerifyForcedInclusionTxs to accept daHeight uint64 (removing the currentState parameter); update call sites, tracing spans, checks, and tests to use the explicit DA height.
  • Executor Finalization (block/internal/executing/executor.go): When BasedSequencer is enabled, call SetFinal after syncing initializeState and after producing blocks to advance execution finalized/safe heights.
  • Configuration Tags (pkg/config/config.go): Add mapstructure tags to NodeConfig boolean fields (Aggregator, BasedSequencer, Light).
  • Store API (pkg/store/cached_store.go): Remove the DeleteStateAtHeight method from the CachedStore public surface.
  • Docs & Changelog (docs/adr/adr-019-forced-inclusion-mechanism.md, CHANGELOG.md): Document sequencer catch-up mode; update the verification flow, security considerations, and examples; add changelog entries for disaster-recovery behaviors.
  • End-to-End Tests (test/e2e/evm_force_inclusion_e2e_test.go): Add E2E tests for based-sequencer catch-up and baseline flows, a setDAStartHeightInGenesis helper, P2P setup tweaks, and re-enable a previously skipped malicious sequencer test.
  • Test Utilities (test/mocks/..., test/testda/dummy.go): Generate mock call/expectation helpers for the new DA method across test mocks.
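
From the mock signature used in the tracing-test suggestion later in this thread, the new interface method presumably has this shape (a sketch, not verified source):

type Client interface {
    // GetLatestDAHeight returns the latest height available on the DA layer.
    GetLatestDAHeight(ctx context.Context) (uint64, error)
    // ... existing Submit / Retrieve / Get methods elided
}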

Sequence Diagram(s)

sequenceDiagram
    participant Seq as Sequencer
    participant DA as DA Layer
    participant FI as Forced‑Inclusion Retriever
    participant Exec as Execution Layer
    participant P2P as Network

    Seq->>DA: GetLatestDAHeight(ctx)
    DA-->>Seq: latestDAHeight

    alt Gap > 1 epoch
        Seq->>Seq: Enter CatchUp Mode
        loop For each missed epoch
            Seq->>FI: RetrieveForcedInclusion(epoch)
            FI-->>Seq: forcedTxs + epochTimestamp
            Seq->>Exec: ProduceBlock(forcedTxs, timestamp)
            Exec-->>Seq: blockProduced
            Seq->>Exec: SetFinal(height)
            Seq->>Seq: Advance checkpoint
        end
        Seq->>DA: GetLatestDAHeight(ctx)
        DA-->>Seq: latestDAHeight
        alt Caught up
            Seq->>Seq: Exit CatchUp Mode
        else Still behind
            Seq->>Seq: Continue CatchUp
        end
    else Gap <= 1 epoch
        Seq->>Seq: Normal sequencing (mempool allowed)
    end

    Seq->>P2P: Broadcast / Sync

Estimated code review effort

🎯 4 (Complex) | ⏱️ ~60 minutes

Suggested labels

T:testing

Suggested reviewers

  • tac0turtle
  • tuxcanfly

"🐇
I woke from a nap, the epochs were long,
I replayed forced‑includes and followed the song.
I set finals, caught up, then danced in the sun—
The chain marched forward; my replay was done. 🥕"

🚥 Pre-merge checks | ✅ 3 | ❌ 2

❌ Failed checks (2 warnings)

  • Description check ⚠️ Warning: The description references the linked issue (#3016) with a close directive but lacks the detailed change overview, technical approach, and context required by the template. Resolution: expand the description with an 'Overview' section covering background, goals, rationale, and implementation details before merging.
  • Docstring Coverage ⚠️ Warning: Docstring coverage is 31.43%, below the required threshold of 80.00%. Resolution: write docstrings for the functions missing them to satisfy the coverage threshold.

✅ Passed checks (3 passed)

  • Title check ✅ Passed: The title 'feat(sequencer): catchup from base' is specific, follows semantic commit conventions, and directly describes the main feature being implemented (sequencer catch-up recovery).
  • Linked Issues check ✅ Passed: The PR implements all objectives from issue #3016: sequencer restart detection via GetLatestDAHeight [da/client.go, block/internal/da/interface.go], catch-up mode state machine with epoch-based replay [sequencer.go], forced-inclusion transaction processing with timestamp handling [sequencer.go, syncer.go], and recovery documentation [adr-019].
  • Out of Scope Changes check ✅ Passed: All changes directly support catch-up recovery: DA height retrieval, forced-inclusion verification refactoring, state machine implementation, executor finalization, and comprehensive testing. No unrelated modifications detected.



Contributor

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 4

♻️ Duplicate comments (1)
pkg/sequencers/single/sequencer.go (1)

483-504: ⚠️ Potential issue | 🟠 Major

Catch‑up detection can be permanently disabled on a transient DA error.
updateCatchUpState sets catchUpDone before calling GetLatestDAHeight, so a one‑off error means the sequencer will never re‑check catch‑up for the rest of its lifecycle.

💡 Suggested fix to allow retry after DA errors
 func (c *Sequencer) updateCatchUpState(ctx context.Context) {
 	if c.catchUpState.Load() != catchUpUnchecked {
 		return
 	}
-	// Optimistically mark as done; overridden to catchUpInProgress below if
-	// catch-up is actually needed.
-	c.catchUpState.Store(catchUpDone)
-
 	epochSize := c.genesis.DAEpochForcedInclusion
 	if epochSize == 0 {
+		c.catchUpState.Store(catchUpDone)
 		return
 	}
 
-	currentDAHeight := c.checkpoint.DAHeight
-	daStartHeight := c.genesis.DAStartHeight
-
 	latestDAHeight, err := c.daClient.GetLatestDAHeight(ctx)
 	if err != nil {
 		c.logger.Warn().Err(err).
 			Msg("failed to get latest DA height for catch-up detection, skipping check")
 		return
 	}
+
+	currentDAHeight := c.checkpoint.DAHeight
+	daStartHeight := c.genesis.DAStartHeight
 
 	// At head, no catch-up needed
 	if latestDAHeight <= currentDAHeight {
+		c.catchUpState.Store(catchUpDone)
 		return
 	}
@@
 	if missedEpochs <= 1 {
 		c.logger.Debug().
 			Uint64("checkpoint_da_height", currentDAHeight).
@@
 			Uint64("latest_epoch", latestEpoch).
 			Msg("sequencer within one epoch of DA head, no catch-up needed")
+		c.catchUpState.Store(catchUpDone)
 		return
 	}
 
 	// More than one epoch behind - enter catch-up mode
 	c.catchUpState.Store(catchUpInProgress)
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@pkg/sequencers/single/sequencer.go` around lines 483 - 504,
updateCatchUpState currently marks c.catchUpState as catchUpDone before calling
c.daClient.GetLatestDAHeight, so a transient DA error can permanently disable
catch‑up checks; change the flow in updateCatchUpState so you do not store
catchUpDone until after GetLatestDAHeight succeeds and you’ve determined no
catch‑up is needed (or, alternatively, on GetLatestDAHeight error, do not change
c.catchUpState or explicitly revert it to catchUpUnchecked); reference
updateCatchUpState, c.catchUpState, catchUpDone, catchUpInProgress and
c.daClient.GetLatestDAHeight to locate and implement this fix.
🧹 Nitpick comments (3)
block/internal/executing/executor.go (1)

637-642: e.ctx vs ctx in ProduceBlock.

SetFinal is called with e.ctx while almost every other primary block operation in this method uses the passed ctx parameter. The existing broadcast calls (Lines 619, 624) also use e.ctx, so this is following the established pattern—but it creates a subtle inconsistency where a timeout or deadline on the caller's ctx doesn't propagate to SetFinal.

♻️ Suggested alignment
-		if err := e.exec.SetFinal(e.ctx, newHeight); err != nil {
+		if err := e.exec.SetFinal(ctx, newHeight); err != nil {
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@block/internal/executing/executor.go` around lines 637 - 642, In
ProduceBlock, e.exec.SetFinal is being called with e.ctx which prevents the
caller's ctx deadline/cancelation from applying; change the call to use the
passed ctx parameter (i.e., call e.exec.SetFinal(ctx, newHeight)) so SetFinal
honors the ProduceBlock caller context, ensuring consistency with other
operations in ProduceBlock (and keep e.ctx usage only where intentional, e.g.,
long-lived background broadcasts).
block/internal/da/tracing_test.go (1)

57-57: Missing tracing tests for GetLatestDAHeight.

The new GetLatestDAHeight method on tracedClient adds span creation, error recording, and a da.latest_height attribute — all of which are untested. Every other traced method (Submit, Retrieve, Get) has both a success and an error test. GetLatestDAHeight has neither.

🧪 Suggested test additions
+func TestTracedDA_GetLatestDAHeight_Success(t *testing.T) {
+	mock := &mockFullClient{
+		getLatestDAHeightFn: func(_ context.Context) (uint64, error) {
+			return 42, nil
+		},
+	}
+	client, sr := setupDATrace(t, mock)
+
+	height, err := client.GetLatestDAHeight(context.Background())
+	require.NoError(t, err)
+	require.Equal(t, uint64(42), height)
+
+	spans := sr.Ended()
+	require.Len(t, spans, 1)
+	span := spans[0]
+	require.Equal(t, "DA.GetLatestDAHeight", span.Name())
+	require.Equal(t, codes.Unset, span.Status().Code)
+	testutil.RequireAttribute(t, span.Attributes(), "da.latest_height", int64(42))
+}
+
+func TestTracedDA_GetLatestDAHeight_Error(t *testing.T) {
+	mock := &mockFullClient{
+		getLatestDAHeightFn: func(_ context.Context) (uint64, error) {
+			return 0, errors.New("network unavailable")
+		},
+	}
+	client, sr := setupDATrace(t, mock)
+
+	_, err := client.GetLatestDAHeight(context.Background())
+	require.Error(t, err)
+
+	spans := sr.Ended()
+	require.Len(t, spans, 1)
+	span := spans[0]
+	require.Equal(t, codes.Error, span.Status().Code)
+	require.Equal(t, "network unavailable", span.Status().Description)
+}

You would also need to add getLatestDAHeightFn to mockFullClient and update GetLatestDAHeight to invoke it when non-nil.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@block/internal/da/tracing_test.go` at line 57, Add unit tests for
tracedClient.GetLatestDAHeight mirroring other traced method tests: one test
asserting span creation, recorded `da.latest_height` attribute and no error on
success, and one test asserting span error recording and proper error
propagation on failure; update mockFullClient by adding a getLatestDAHeightFn
field and change its GetLatestDAHeight method to call getLatestDAHeightFn when
non-nil (so tests can inject success and error behaviors); place the new tests
in tracing_test.go alongside existing Submit/Retrieve/Get tests and use the same
tracing assertions used elsewhere to verify span creation, attributes, and error
recording.
test/e2e/evm_force_inclusion_e2e_test.go (1)

590-961: Consider waiting for process exit after SIGTERM to reduce flakiness.
Several phases stop and restart nodes on the same ports with only a fixed sleep. A deterministic wait (process wait or RPC-down poll) will make these E2E tests more stable under load.
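
A deterministic variant, assuming seqProcess is an *os.Process started by the test (as the SIGTERM call implies):

// Signal the node, then block until the OS reports the child process has
// exited before reusing its ports.
require.NoError(t, seqProcess.Signal(syscall.SIGTERM))
if _, err := seqProcess.Wait(); err != nil {
    t.Fatalf("sequencer did not shut down cleanly: %v", err)
}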

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@test/e2e/evm_force_inclusion_e2e_test.go` around lines 590 - 961, The test
currently sends syscall.SIGTERM to seqProcess, fnProcess and basedSeqProcess and
then sleeps; replace the fixed sleeps with deterministic waits for the process
to exit or the RPC to become unavailable: after signaling
seqProcess/fnProcess/basedSeqProcess call the process Wait method (or use
sut.AwaitNodeDown / poll the node RPC with
endpoints.GetRollkitRPCAddress()/GetFullNodeRPCAddress() until it is down)
before reusing ports or restarting the node, and apply this change wherever
seqProcess, fnProcess or basedSeqProcess is stopped to eliminate racey restarts.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@block/internal/da/interface.go`:
- Around line 20-21: The comment above the GetLatestDAHeight method contains a
stray double period (".."); edit the comment for GetLatestDAHeight in
interface.go to use a single period (or remove the trailing punctuation) so it
reads: "// GetLatestDAHeight returns the latest height available on the DA
layer.".

In `@block/internal/syncing/syncer.go`:
- Around line 776-818: Update the block comment to reflect actual behavior:
state.DAHeight stepping (newState.DAHeight) is used to pace safe state
persistence/progression across restarts and does not control which epoch
VerifyForcedInclusionTxs checks, because VerifyForcedInclusionTxs is called with
event.DaHeight and uses that value to fetch/verify forced inclusion
transactions; explicitly state that during sequencer catch-up the code advances
newState.DAHeight by one epoch per block to avoid large persistence jumps, and
add a clear note that this creates a verification gap (forced txs from missed DA
epochs are verified against event.DaHeight/the submission epoch) and document
what external guarantee or mechanism (e.g., deterministic replay, sequencer
constraints, or other safeguard) must ensure catch-up correctness or that this
behavior is intentional.

In `@docs/adr/adr-019-forced-inclusion-mechanism.md`:
- Around line 472-492: Add a language specifier to the fenced code block that
begins with "Sequencer offline during epochs 100-150 (5 epochs of 10 blocks
each)" so markdownlint MD040 is satisfied; change the opening backticks to
```text (i.e., replace ``` with ```text) for that example block in
adr-019-forced-inclusion-mechanism.md.
- Around line 755-756: Replace the awkward sentence "P2P nodes only do not
proceed to any verification." in the NOTE paragraph with a clear phrasing such
as "P2P-only nodes do not perform any verification, because DA inclusion happens
later than block production and DA hints are therefore added to broadcasted
blocks afterward." Update the surrounding sentence to keep the causal
explanation intact and ensure the NOTE reads fluently.


ℹ️ Review info

Configuration used: defaults

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between a5ef771 and 4b50a3e.

📒 Files selected for processing (21)
  • CHANGELOG.md
  • apps/evm/server/force_inclusion_test.go
  • block/internal/da/client.go
  • block/internal/da/forced_inclusion_retriever.go
  • block/internal/da/interface.go
  • block/internal/da/tracing.go
  • block/internal/da/tracing_test.go
  • block/internal/executing/executor.go
  • block/internal/syncing/block_syncer.go
  • block/internal/syncing/syncer.go
  • block/internal/syncing/syncer_forced_inclusion_test.go
  • block/internal/syncing/tracing.go
  • block/internal/syncing/tracing_test.go
  • docs/adr/adr-019-forced-inclusion-mechanism.md
  • pkg/config/config.go
  • pkg/sequencers/single/sequencer.go
  • pkg/sequencers/single/sequencer_test.go
  • pkg/store/cached_store.go
  • test/e2e/evm_force_inclusion_e2e_test.go
  • test/mocks/da.go
  • test/testda/dummy.go
💤 Files with no reviewable changes (1)
  • pkg/store/cached_store.go

Contributor

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 4

♻️ Duplicate comments (1)
docs/adr/adr-019-forced-inclusion-mechanism.md (1)

756-757: Grammatical issue in the NOTE — previously flagged.

"P2P nodes only do not proceed to any verification" reads awkwardly. Consider: "P2P-only nodes do not perform forced inclusion verification."

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@docs/adr/adr-019-forced-inclusion-mechanism.md` around lines 756 - 757, The
NOTE sentence is awkward; replace "P2P nodes only do not proceed to any
verification" with a clearer phrasing such as "P2P-only nodes do not perform
forced inclusion verification." Update the sentence in
adr-019-forced-inclusion-mechanism.md (the NOTE block) to use "P2P-only nodes"
and "perform forced inclusion verification" so the intent is unambiguous.
🧹 Nitpick comments (3)
docs/adr/adr-019-forced-inclusion-mechanism.md (1)

449-501: Catch-up mode documentation is clear and well-structured.

The problem/solution framing, key behaviors list, and step-by-step example provide a solid explanation of the catch-up mechanism. The benefits section clearly ties back to the PR objectives.

One minor note: the example at lines 472-493 uses plain-text formatting without a fenced code block. For readability and consistency with the rest of the ADR (which uses fenced blocks for examples), consider wrapping it in a ```text block.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@docs/adr/adr-019-forced-inclusion-mechanism.md` around lines 449 - 501, The
example block under "Sequencer Catch-Up Mode" is plain text and should be fenced
for consistency; wrap the example (the paragraph starting with "Sequencer
offline during epochs 100-150..." through "Normal operation resumes:") in a
```text fenced code block so it matches the ADR's other examples and improves
readability—update the section containing the example and ensure the fence
surrounds the entire example including the step-by-step numbered list and the
final "Normal operation resumes" lines.
test/e2e/evm_force_inclusion_e2e_test.go (1)

977-977: WithFullNode() option is used but no full node is started in this test.

setupCommonEVMEnv is called with WithFullNode(), which likely allocates additional Docker containers and ports for a full node. Since TestEvmBasedSequencerBaselineE2E only runs a single based sequencer, this may waste test resources. Consider whether you can drop the option, or if the full node ports/endpoints are actually needed downstream (e.g., for the engine/eth URLs).

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@test/e2e/evm_force_inclusion_e2e_test.go` at line 977, The test calls
setupCommonEVMEnv with WithFullNode() but never starts/uses a full node, so
remove the unnecessary resource allocation: modify
TestEvmBasedSequencerBaselineE2E to call setupCommonEVMEnv without the
WithFullNode() option; if the test actually needs engine/eth endpoints later,
instead ensure a full node is started and its URLs are propagated (or provide
mocked endpoints) before keeping WithFullNode() — locate the call to
setupCommonEVMEnv and the WithFullNode() symbol to make the change.
block/internal/syncing/syncer.go (1)

797-808: Safety clamp at Line 805-807 is good but the condition is worth documenting further.

The clamp if nextEpochStart > event.DaHeight prevents overshooting, which is a sensible safety net. However, this condition can be hit legitimately — not just as an edge case — when the gap is between one and two epochs (e.g., gap = epochSize + 1 where the current DA height is near the end of an epoch). Consider adding a more specific log level (warn instead of debug) or a metric counter so operators can detect if this clamp fires frequently, which could indicate misconfigured epoch sizes.
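
Sketch of the suggested tweak (counter name hypothetical; logger style follows the zerolog calls quoted elsewhere in this review):

if nextEpochStart > event.DaHeight {
    // Legitimate when the gap is between one and two epochs (e.g. the
    // checkpoint sits near the end of an epoch), but frequent hits may
    // indicate a misconfigured epoch size.
    s.logger.Warn().
        Uint64("next_epoch_start", nextEpochStart).
        Uint64("event_da_height", event.DaHeight).
        Msg("clamping DA height advance to event height")
    nextEpochClampTotal.Inc() // hypothetical counter
    nextEpochStart = event.DaHeight
}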

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@block/internal/syncing/syncer.go` around lines 797 - 808, The safety clamp
inside syncer.go (inside the epoch-gap handling around nextEpochStart computed
from types.CalculateEpochBoundaries using newState.DAHeight,
s.genesis.DAStartHeight and epochSize) can legitimately trigger when gap is
between one and two epochs; update the code in the if nextEpochStart >
event.DaHeight block to (1) add a brief comment explaining this legitimate
scenario (gap == epochSize+1 near epoch boundary), (2) replace or raise the log
level to warn (use processLogger.Warnf or the existing logger) instead of debug
so operators see it, and (3) increment a metric/counter (e.g.,
nextEpochClampCounter or NextEpochClampTotal) to track frequency of this clamp;
keep the clamp assignment nextEpochStart = event.DaHeight as-is for safety.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@block/internal/syncing/syncer.go`:
- Around line 793-818: The current logic advances newState.DAHeight by one epoch
when a large gap is detected, which can misalign verification when multiple
blocks carry partial forced-inclusion txs from the same DA epoch (see
sequencer.GetNextBatch and MaxBytes). Instead of advancing only one epoch,
compute the DA epoch start that contains event.DaHeight and set
newState.DAHeight to that epoch start (use
types.CalculateEpochBoundaries(event.DaHeight, s.genesis.DAStartHeight,
epochSize) to get the epoch start) so all blocks from the same epoch verify
against the same DA epoch; keep the safety clamp to not exceed event.DaHeight.

In `@test/e2e/evm_force_inclusion_e2e_test.go`:
- Around line 738-745: The short fixed 1s sleeps after sending SIGTERM to
seqProcess and fnProcess can allow restarts to race with shutdown (syncer's
Stop() uses a 5s drain); replace the sleeps by waiting for the processes to exit
(call seqProcess.Wait() and fnProcess.Wait() and assert no error) or increase
the sleep to >5s so the drain timeout is exceeded before Phase 4 restart; update
the code around seqProcess.Signal(syscall.SIGTERM) and
fnProcess.Signal(syscall.SIGTERM) to perform the chosen wait and failing test on
Wait() errors.
- Around line 863-864: The reconnections reassign seqClient and fnClient so the
earlier defer seqClient.Close() and defer fnClient.Close() close only the
original clients; fix by explicitly closing the previous client before
reassigning (call oldSeq := seqClient; if oldSeq != nil { oldSeq.Close() }
before seqClient = ethclient.Dial(...)) or by using a temporary variable for the
new dial and then replacing the variable and closing the old one, and ensure a
final deferred Close on the last client instance; update both places where
ethclient.Dial is called (references: seqClient, fnClient, ethclient.Dial, and
the existing defer seqClient.Close()/defer fnClient.Close()).
- Around line 753-768: The based sequencer restart call using sut.ExecCmd(evmSingleBinaryPath, ...) to start the node as aggregator and based sequencer
(variable basedSeqProcess) omits the required signer passphrase flag; add the
--evnode.signer.passphrase_file flag with the same passphrase file variable used
elsewhere in this test file (the one passed during init/start for other
aggregator/based sequencer runs) so the node can load signing keys when started
with --evnode.node.aggregator=true and --evnode.node.based_sequencer=true.


ℹ️ Review info

Configuration used: defaults

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 4b50a3e and 664c883.

📒 Files selected for processing (3)
  • block/internal/syncing/syncer.go
  • docs/adr/adr-019-forced-inclusion-mechanism.md
  • test/e2e/evm_force_inclusion_e2e_test.go

Contributor

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 1

🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@docs/adr/adr-019-forced-inclusion-mechanism.md`:
- Around line 449-501: The example in the "Sequencer Catch-Up Mode" section has
an off-by-one inconsistency: keep the offline range, counts, and later ranges
aligned by making the missed-epochs count match the stated range. Update the
example so that if the sequencer was offline "epochs 100-150 (5 epochs of 10
blocks each)" then the missed epochs line reads "Missed epochs: 5" and the
catch-up ranges start at 101 and end at 150 (and the exit condition becomes
ErrHeightFromFuture at 151); adjust the restart/latest DA/checkpoint values in
that example text to match these same bounds (references: the "Sequencer
Catch-Up Mode" heading and the example paragraph describing
epochs/checkpoint/latest DA/missed epochs).

ℹ️ Review info

Configuration used: defaults

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 664c883 and 3fc79c3.

📒 Files selected for processing (2)
  • block/internal/da/interface.go
  • docs/adr/adr-019-forced-inclusion-mechanism.md

Contributor

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 1

🧹 Nitpick comments (1)
block/internal/syncing/syncer.go (1)

794-818: Add explicit source guard for DA height update to match code patterns elsewhere

The comment on lines 775–776 states this update applies only to DA events, but the code lacks a corresponding event.Source == common.SourceDA guard. While P2P events from p2pHandler do not set DaHeight (it defaults to 0), making the condition effectively safe, similar DA-specific operations elsewhere in this file (e.g., line 631, line 755) use explicit source guards. For consistency and defensive clarity, add the guard:

🔧 Suggested fix
-	if event.DaHeight > newState.DAHeight {
+	if event.Source == common.SourceDA && event.DaHeight > newState.DAHeight {
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@block/internal/syncing/syncer.go` around lines 794 - 818, The DA height
update block that sets newState.DAHeight based on event.DaHeight should only run
for DA-origin events; wrap the existing logic (the check if event.DaHeight >
newState.DAHeight, the epochSize/gap logic using
s.genesis.DAEpochForcedInclusion and types.CalculateEpochBoundaries, the
s.logger.Debug() call and the assignment to newState.DAHeight) in an explicit
guard checking event.Source == common.SourceDA so only DA events can change
newState.DAHeight, keeping the epoch-start clamping behavior and logging
unchanged.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@block/internal/syncing/syncer.go`:
- Line 752: Update the comment text that currently reads "The checks is actually
only performed on DA only enabled nodes." to correct the grammar: change "checks
is" to "check is" so it reads "The check is actually only performed on DA only
enabled nodes." Locate the comment string in syncer.go (the line containing "The
checks is actually only performed on DA only enabled nodes.") and replace it
with the corrected phrase; keep surrounding punctuation and capitalization
unchanged.


ℹ️ Review info

Configuration used: defaults

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 3fc79c3 and 5e392ba.

📒 Files selected for processing (1)
  • block/internal/syncing/syncer.go

Contributor

@coderabbitai coderabbitai bot left a comment


🧹 Nitpick comments (1)
docs/adr/adr-019-forced-inclusion-mechanism.md (1)

470-493: MD040 is resolved; consider adding a code block fence for consistency with other examples.

The previous review comment flagged MD040 (missing language specifier) at line 472 as unaddressed. The current catch-up example (lines 470–493) is formatted as plain prose without a code block fence, which means MD040 no longer applies.

However, all other examples in this document—the Smoothing Example (lines 386–405) and Grace Period Examples (lines 556–616)—are wrapped in fenced code blocks. For consistency, consider wrapping the catch-up example in a ```text code block:

Suggested formatting for consistency
 **Example**:
 
+```text
 Sequencer offline during epochs 100-150 (5 epochs of 10 blocks each)
 Full nodes (base sequencing) produced blocks with forced txs only
 
 Sequencer restarts:
 
 1. Checkpoint DA height: 100
 2. Latest DA height: 150
 3. Missed epochs: 5 (more than 1)
 4. Enter catch-up mode
 
 Catch-up process:
 
 - Epoch 101-110: Produce blocks with forced txs only, use epoch timestamps
 - Epoch 111-120: Continue catch-up...
 - ...
 - Epoch 141-150: Still catching up
 - Epoch 151: ErrHeightFromFuture -> exit catch-up mode
 
 Normal operation resumes:
 
 - Include both forced txs and mempool txs
 - Use current timestamps
+```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@docs/adr/adr-019-forced-inclusion-mechanism.md` around lines 470 - 493, Wrap
the catch-up example block under the "Catch-up process" / "Sequencer restarts"
example in a fenced code block with a language specifier (use ```text) to match
the other examples (Smoothing Example, Grace Period Examples) and resolve MD040;
ensure the opening fence appears before "Sequencer offline during epochs..." and
the closing fence after "Use current timestamps".

ℹ️ Review info

Configuration used: defaults

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 5e392ba and d7c1a28.

📒 Files selected for processing (1)
  • docs/adr/adr-019-forced-inclusion-mechanism.md

@julienrbrt julienrbrt requested review from alpe and chatton February 25, 2026 10:12


Development

Successfully merging this pull request may close these issues.

[FEATURE] Sequencer catchup on restart from base

3 participants