
OCPBUGS-78016: Fix double counting of pod restart events#30862

Open
xueqzhan wants to merge 2 commits into openshift:main from xueqzhan:double-pod-termination

Conversation

xueqzhan (Contributor) commented Mar 11, 2026

Summary by CodeRabbit

  • Bug Fixes
    • Prevented duplicate container termination events in pod monitoring. Improved detection now skips repeated termination records and emits a single, accurate event (error or normal completion), ensuring correct event counts and more reliable pod lifecycle dashboards and alerts.

@openshift-ci-robot

Pipeline controller notification
This repo is configured to use the pipeline controller. Second-stage tests will be triggered either automatically or after lgtm label is added, depending on the repository configuration. The pipeline controller will automatically detect which contexts are required and will utilize /test Prow commands to trigger the second stage.

For optional jobs, comment /test ? to see a list of all defined jobs. To trigger manually all jobs from second stage use /pipeline required command.

This repository is configured in: automatic mode

openshift-ci-robot added the jira/valid-reference (indicates that this PR references a valid Jira ticket of any type) and jira/invalid-bug (indicates that a referenced Jira bug is invalid for the branch this PR is targeting) labels on Mar 11, 2026.
@openshift-ci-robot

@xueqzhan: This pull request references Jira Issue OCPBUGS-78016, which is invalid:

  • expected the bug to target the "4.22.0" version, but no target version was set

Comment /jira refresh to re-evaluate validity if changes to the Jira bug are made, or edit the title of this pull request to link to a different bug.

The bug has been updated to refer to the pull request using the external bug tracker.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the openshift-eng/jira-lifecycle-plugin repository.

@coderabbitai

coderabbitai bot commented Mar 11, 2026

Walkthrough

Added an alreadyRecorded guard to container termination handling to avoid emitting duplicate termination intervals by comparing FinishedAt; when not previously recorded, the code still emits Error (non‑zero ExitCode) or Info (zero ExitCode) intervals using LastTerminationState.Terminated.

Changes

  • pkg/monitortests/node/watchpods/collection.go (Container Exit Deduplication): Introduced an alreadyRecorded check in terminated-state handling to skip duplicate termination events based on a FinishedAt comparison; when not already recorded, it preserves the existing behavior of emitting Error (ExitCode != 0) or Info (ExitCode == 0) intervals using LastTerminationState.Terminated.

Estimated code review effort

🎯 2 (Simple) | ⏱️ ~10 minutes

🚥 Pre-merge checks | ✅ 4 | ❌ 1

❌ Failed checks (1 warning)
  • Docstring Coverage ⚠️ Warning: Docstring coverage is 0.00%, which is below the required threshold of 80.00%. Resolution: write docstrings for the functions missing them to satisfy the coverage threshold.

✅ Passed checks (4 passed)
  • Description Check: Check skipped because CodeRabbit's high-level summary is enabled.
  • Title Check: The title 'OCPBUGS-78016: Fix double counting of pod restart events' directly and clearly describes the main change: adding a guard to prevent duplicate pod restart event emissions.
  • Stable And Deterministic Test Names: The modified file contains pod monitoring and event collection logic with no Ginkgo test definitions, making the stable test names check inapplicable.
  • Test Structure And Quality: The custom check is designed to review Ginkgo test code for quality requirements, but this PR modifies production code (collection.go) that implements pod monitoring functionality, not Ginkgo test code.

✏️ Tip: You can configure your own custom pre-merge checks in the settings.


Warning

There were issues while running some tools. Please review the errors and either fix the tool's configuration or disable the tool if it's a critical failure.

🔧 golangci-lint (2.5.0)

Error: can't load config: unsupported version of the configuration: "". See https://golangci-lint.run/docs/product/migration-guide for migration instructions.



coderabbitai bot left a comment

Actionable comments posted: 1


ℹ️ Review info
⚙️ Run configuration

Configuration used: Repository: openshift/coderabbit/.coderabbit.yaml

Review profile: CHILL

Plan: Pro

Run ID: 45e82f29-a3bb-4691-ae5b-2d62d2989052

📥 Commits

Reviewing files that changed from the base of the PR and between 02b4172 and 90a6967.

📒 Files selected for processing (1)
  • pkg/monitortests/node/watchpods/collection.go

Comment on lines +221 to 253

```go
// if we are transitioning to a terminated state in LastTerminationState
// Check if we already recorded this exit when it was in State.Terminated
// If oldContainerStatus.State.Terminated matches the current LastTerminationState.Terminated,
// then we already recorded this exit and should skip to avoid double-counting
alreadyRecorded := oldContainerStatus.State.Terminated != nil &&
	oldContainerStatus.State.Terminated.FinishedAt.Equal(&containerStatus.LastTerminationState.Terminated.FinishedAt)

if !alreadyRecorded {
	// We missed the original exit event, record it now as a safety net
	if containerStatus.LastTerminationState.Terminated.ExitCode != 0 {
		intervals = append(intervals,
			monitorapi.NewInterval(monitorapi.SourcePodMonitor, monitorapi.Error).
				Locator(monitorapi.NewLocator().ContainerFromPod(pod, containerName)).
				Message(monitorapi.NewMessage().
					Reason(monitorapi.ContainerReasonContainerExit).
					WithAnnotation(monitorapi.AnnotationContainerExitCode, fmt.Sprintf("%d", containerStatus.LastTerminationState.Terminated.ExitCode)).
					Cause(containerStatus.LastTerminationState.Terminated.Reason).
					HumanMessage(containerStatus.LastTerminationState.Terminated.Message),
				).BuildNow(),
		)
	} else {
		intervals = append(intervals,
			monitorapi.NewInterval(monitorapi.SourcePodMonitor, monitorapi.Info).
				Locator(monitorapi.NewLocator().ContainerFromPod(pod, containerName)).
				Message(monitorapi.NewMessage().
					Reason(monitorapi.ContainerReasonContainerExit).
					WithAnnotation(monitorapi.AnnotationContainerExitCode, "0").
					Cause(containerStatus.LastTerminationState.Terminated.Reason).
					HumanMessage(containerStatus.LastTerminationState.Terminated.Message)).
				BuildNow(),
		)
	}
}
```
coderabbitai bot commented Mar 11, 2026

⚠️ Potential issue | 🟠 Major

Inconsistent interval level for exit code 0 between new and existing code paths.

The alreadyRecorded guard logic correctly prevents double-counting by comparing FinishedAt timestamps.

However, there's an inconsistency in interval levels:

  • New code (line 243): Uses monitorapi.Info when ExitCode == 0
  • Existing code (line 271): Uses monitorapi.Error when ExitCode == 0

This means a successful container exit (code 0) could be logged as Info or Error depending on which code path fires first, leading to inconsistent monitoring data.

Please align the interval levels—either both should use Info for successful exits, or both should use Error.

🔧 Proposed fix to align with existing behavior (use Error)
 				} else {
 					intervals = append(intervals,
-						monitorapi.NewInterval(monitorapi.SourcePodMonitor, monitorapi.Info).
+						monitorapi.NewInterval(monitorapi.SourcePodMonitor, monitorapi.Error).
 							Locator(monitorapi.NewLocator().ContainerFromPod(pod, containerName)).
 							Message(monitorapi.NewMessage().
 								Reason(monitorapi.ContainerReasonContainerExit).
🔧 Alternative fix: Update existing code to use Info for exit code 0

If Info is semantically correct for successful exits, update line 271 as well:

 			} else {
 				intervals = append(intervals,
-					monitorapi.NewInterval(monitorapi.SourcePodMonitor, monitorapi.Error).
+					monitorapi.NewInterval(monitorapi.SourcePodMonitor, monitorapi.Info).
 						Locator(monitorapi.NewLocator().ContainerFromPod(pod, containerName)).
 						Message(monitorapi.NewMessage().
 							Reason(monitorapi.ContainerReasonContainerExit).

xueqzhan (Contributor, Author) replied:

This wasn't really new code. It just puts the old code in a new scope.

coderabbitai bot replied:
@xueqzhan, you're right — looking at the diff more carefully, the monitorapi.Info for exit code 0 was already present in the original code before this PR. The change only wrapped the existing logic in the if !alreadyRecorded guard without altering the interval levels. My comment was incorrect. Sorry for the noise!

(ᵕ—ᴗ—)

@openshift-ci openshift-ci bot requested review from deads2k and sjenning March 11, 2026 15:43

openshift-ci bot commented Mar 11, 2026

[APPROVALNOTIFIER] This PR is APPROVED

This pull-request has been approved by: xueqzhan

The full list of commands accepted by this bot can be found here.

The pull request process is described here

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment

openshift-ci bot added the approved label (indicates a PR has been approved by an approver from all required OWNERS files) on Mar 11, 2026.
@xueqzhan

/payload-job periodic-ci-openshift-release-main-nightly-4.22-e2e-aws-ovn-upgrade-fips


openshift-ci bot commented Mar 11, 2026

@xueqzhan: trigger 1 job(s) for the /payload-(with-prs|job|aggregate|job-with-prs|aggregate-with-prs) command

  • periodic-ci-openshift-release-main-nightly-4.22-e2e-aws-ovn-upgrade-fips

See details on https://pr-payload-tests.ci.openshift.org/runs/ci/6c7edd10-1d62-11f1-90ca-56d2448f888b-0

@openshift-ci-robot

Scheduling required tests:
/test e2e-aws-csi
/test e2e-aws-ovn-fips
/test e2e-aws-ovn-microshift
/test e2e-aws-ovn-microshift-serial
/test e2e-aws-ovn-serial-1of2
/test e2e-aws-ovn-serial-2of2
/test e2e-gcp-csi
/test e2e-gcp-ovn
/test e2e-gcp-ovn-upgrade
/test e2e-metal-ipi-ovn-ipv6
/test e2e-vsphere-ovn
/test e2e-vsphere-ovn-upi

@xueqzhan

/payload-job periodic-ci-openshift-release-main-nightly-4.22-e2e-aws-ovn-upgrade-fips


openshift-ci bot commented Mar 12, 2026

@xueqzhan: trigger 1 job(s) for the /payload-(with-prs|job|aggregate|job-with-prs|aggregate-with-prs) command

  • periodic-ci-openshift-release-main-nightly-4.22-e2e-aws-ovn-upgrade-fips

See details on https://pr-payload-tests.ci.openshift.org/runs/ci/9aa03e20-1e3c-11f1-8846-9847a77dd631-0

@openshift-ci-robot

@xueqzhan: This pull request references Jira Issue OCPBUGS-78016, which is invalid:

  • expected the bug to target the "4.22.0" version, but no target version was set

Comment /jira refresh to re-evaluate validity if changes to the Jira bug are made, or edit the title of this pull request to link to a different bug.


In response to this:

Summary by CodeRabbit

  • Bug Fixes
  • Prevented duplicate container termination events in pod monitoring. Improved detection now skips repeated termination records and emits a single, accurate event (error or normal completion), ensuring correct event counts and more reliable pod lifecycle dashboards and alerts.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the openshift-eng/jira-lifecycle-plugin repository.

coderabbitai bot left a comment

Actionable comments posted: 1


ℹ️ Review info
⚙️ Run configuration

Configuration used: Repository: openshift/coderabbit/.coderabbit.yaml

Review profile: CHILL

Plan: Pro

Run ID: 179cffda-6591-4976-84aa-3cc7f505373e

📥 Commits

Reviewing files that changed from the base of the PR and between 90a6967 and 70264ef.

📒 Files selected for processing (1)
  • pkg/monitortests/node/watchpods/collection.go

Comment on lines +221 to +229

```go
// if we are transitioning to a terminated state in LastTerminationState
// Check if we already recorded this exit when it was in State.Terminated
// If oldContainerStatus.State.Terminated matches the current LastTerminationState.Terminated,
// then we already recorded this exit and should skip to avoid double-counting
alreadyRecorded := oldContainerStatus.State.Terminated != nil &&
	containerStatus.State.Terminated == nil &&
	oldContainerStatus.State.Terminated.FinishedAt.Equal(&containerStatus.LastTerminationState.Terminated.FinishedAt)

if !alreadyRecorded {
```

⚠️ Potential issue | 🟠 Major

This guard drops the safety-net exit after an initial list.

alreadyRecorded uses oldContainerStatus.State.Terminated as a proxy for “we already emitted this exit”, but Lines 215-219 explicitly skip emission when the first observed status is already terminated. If the watcher first sees State.Terminated on create/initial list and the next update moves that same record into LastTerminationState.Terminated, Lines 225-227 evaluate true and suppress the only exit interval. Please dedupe against actual emitted terminations instead of inferring it from the previous status shape.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@pkg/monitortests/node/watchpods/collection.go` around lines 221 - 229, The
current guard using alreadyRecorded (comparing
oldContainerStatus.State.Terminated to
containerStatus.LastTerminationState.Terminated.FinishedAt) can suppress the
only emission when the first observed status is Terminated; instead, change the
logic to dedupe against an explicit record of emissions: add an
emitted-termination registry keyed by the unique container identifier (e.g.,
pod+container name or UID) that stores the emitted termination FinishedAt
timestamp, update that registry when you emit a termination, and replace the
alreadyRecorded check to compare
containerStatus.LastTerminationState.Terminated.FinishedAt against the registry
entry (skip only if the same FinishedAt was already emitted); ensure you update
the registry when emitting in the code paths that currently call the emission
logic so future transitions into LastTerminationState are correctly recognized
as already emitted.


Labels

  • approved: Indicates a PR has been approved by an approver from all required OWNERS files.
  • jira/invalid-bug: Indicates that a referenced Jira bug is invalid for the branch this PR is targeting.
  • jira/valid-reference: Indicates that this PR references a valid Jira ticket of any type.
