
OCPBUGS-38662: node controller: Don't degrade when node is upgrading#2081

Open
tchap wants to merge 6 commits into openshift:master from
tchap:node-controller-not-degraded-during-upgrade

Conversation

@tchap
Contributor

@tchap tchap commented Jan 19, 2026

Treat nodes as Degraded only when they are not upgrading.
This prevents Degraded=True in operators from blipping during upgrade.

There is a helper function, GetNodeUpgradeState, that returns a node upgrade state enum value. The upgrade state is decided based on the relevant MCO annotations on the Node object. This can be used by other controllers in library-go as well.

The idea is then not to mark nodes as Degraded while they are Rebooting or Working. Draining is explicitly excluded because it is unrelated to the node's Ready status anyway. This seems to be the best we can do.
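The helper described above could look roughly like the following self-contained sketch. The enum names, the annotation keys, and the precedence order are assumptions pieced together from the snippets visible later in this review; the authoritative version lives in the PR's node_state.go.

```go
package main

import (
	"fmt"
	"strings"
)

// NodeUpgradeState mirrors the enum the description mentions; the exact
// names and values are assumptions based on the review snippets below.
type NodeUpgradeState int

const (
	NodeUpgradeStateNone NodeUpgradeState = iota
	NodeUpgradeStateDraining
	NodeUpgradeStateWorking
	NodeUpgradeStateRebooting
)

// MCO annotation keys; these spellings are assumptions, the real constants
// are defined in the PR itself.
const (
	annoState            = "machineconfiguration.openshift.io/state"
	annoDesiredDrain     = "machineconfiguration.openshift.io/desiredDrain"
	annoLastAppliedDrain = "machineconfiguration.openshift.io/lastAppliedDrain"
	annoPostConfigAction = "machineconfiguration.openshift.io/post-config-action"
)

// getNodeUpgradeState is a hypothetical sketch of the helper: it derives an
// upgrade state purely from MCO annotations on the Node object.
func getNodeUpgradeState(annotations map[string]string) NodeUpgradeState {
	// The post-config action marks an in-progress reboot.
	if annotations[annoPostConfigAction] == "Rebooting" {
		return NodeUpgradeStateRebooting
	}
	// Only "drain-" actions count: other actions exist for the same
	// annotation (see the later commit message in this PR).
	desired := annotations[annoDesiredDrain]
	if strings.HasPrefix(desired, "drain-") && annotations[annoLastAppliedDrain] != desired {
		return NodeUpgradeStateDraining
	}
	// Working covers the rest of the update process on the node.
	if annotations[annoState] == "Working" {
		return NodeUpgradeStateWorking
	}
	return NodeUpgradeStateNone
}

func main() {
	fmt.Println(getNodeUpgradeState(map[string]string{annoPostConfigAction: "Rebooting"}) == NodeUpgradeStateRebooting)
	fmt.Println(getNodeUpgradeState(nil) == NodeUpgradeStateNone)
}
```

The caller can then treat Rebooting and Working as "upgrading" while handling Draining separately, as the description proposes.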


Obsolete Description to Keep Context

Treat nodes as Degraded only when they are not rebooting for upgrade.
This prevents Degraded=True in operators from blipping during upgrade.

Based on https://github.com/openshift/machine-config-operator/blob/b4ea81d6885c61c91e1503d5f9f5ebed696092b1/pkg/daemon/update.go#L2812

Now the issue is that the annotation can be removed, once the reboot is finished, before the node becomes Ready. But that should happen only seconds before it's Ready. On top of that, there is a 2-minute inertia for Degraded on the operator status, so this should be plenty to cover the gap. Another potential issue is that a failure to update the annotations is only logged; it does not prevent the reboot. So we may have rebooting nodes missing the annotation.

An alternative would be to ignore nodes with machineconfiguration.openshift.io/state == Working, but that seems to be too broad a condition as it covers basically the whole upgrade process, so it can paint too positive a view of the system. But it is a possibility.

There may be other ways to implement this for sure, but those would require pulling in other objects for the nodes, like MachineConfigNode, where you can check the timestamp of the last reboot, for example. This doesn't seem particularly necessary as the inertia should cover the vast majority of our use cases. So I focused on what we already have, which are the Node objects.

As discussed, I also ended up adding a Degraded inertia here, which means that a rebooting node is only ignored for a certain period of time, 10 minutes by default, in case something goes very wrong during the upgrade.

@openshift-ci openshift-ci bot added the do-not-merge/work-in-progress Indicates that a PR should not merge because it is a work in progress. label Jan 19, 2026
@openshift-ci openshift-ci bot requested review from dgrisonnet and tkashem January 19, 2026 15:35
@openshift-ci
Contributor

openshift-ci bot commented Jan 19, 2026

[APPROVALNOTIFIER] This PR is NOT APPROVED

This pull-request has been approved by: tchap
Once this PR has been reviewed and has the lgtm label, please assign tkashem for approval. For more information see the Code Review Process.

The full list of commands accepted by this bot can be found here.

Details

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment

@tchap tchap changed the title WIP: node controller: Don't Degrade when node Rebooting WIP: OCPBUGS-38662: node controller: Don't Degrade when node Rebooting Jan 19, 2026
@openshift-ci-robot openshift-ci-robot added jira/valid-reference Indicates that this PR references a valid Jira ticket of any type. jira/valid-bug Indicates that a referenced Jira bug is valid for the branch this PR is targeting. labels Jan 19, 2026
@openshift-ci-robot

@tchap: This pull request references Jira Issue OCPBUGS-38662, which is valid. The bug has been moved to the POST state.

3 validation(s) were run on this bug
  • bug is open, matching expected state (open)
  • bug target version (4.22.0) matches configured target version for branch (4.22.0)
  • bug is in the state New, which is one of the valid states (NEW, ASSIGNED, POST)

The bug has been updated to refer to the pull request using the external bug tracker.

Details

In response to this:

Treat nodes as Degraded only when they are not rebooting for upgrade.
This prevents Degraded=True in operators from blipping during upgrade.

Based on https://github.com/openshift/machine-config-operator/blob/b4ea81d6885c61c91e1503d5f9f5ebed696092b1/pkg/daemon/update.go#L2812

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the openshift-eng/jira-lifecycle-plugin repository.

@tchap tchap changed the title WIP: OCPBUGS-38662: node controller: Don't Degrade when node Rebooting WIP: OCPBUGS-38662: node controller: Don't degrade when node rebooting for upgrade Jan 20, 2026
@tchap tchap force-pushed the node-controller-not-degraded-during-upgrade branch from 03571ff to 6af210e Compare January 20, 2026 10:24
@openshift-ci-robot

@tchap: This pull request references Jira Issue OCPBUGS-38662, which is valid.

3 validation(s) were run on this bug
  • bug is open, matching expected state (open)
  • bug target version (4.22.0) matches configured target version for branch (4.22.0)
  • bug is in the state POST, which is one of the valid states (NEW, ASSIGNED, POST)

The bug has been updated to refer to the pull request using the external bug tracker.

Details

In response to this:

Treat nodes as Degraded only when they are not rebooting for upgrade.
This prevents Degraded=True in operators from blipping during upgrade.

Based on https://github.com/openshift/machine-config-operator/blob/b4ea81d6885c61c91e1503d5f9f5ebed696092b1/pkg/daemon/update.go#L2812

Now the issue is that the annotation can be removed before the node becomes Ready. But this should be like seconds before it's Ready. On top of that, there is a 2-minute inertia for Degraded on the operator status, so this should be plenty to cover the gap.

An alternative would be to ignore nodes with machineconfiguration.openshift.io/state == Working, but that seems to be too broad a condition as it covers basically the whole upgrade process.

There may be other ways to implement this for sure, but those would require pulling in other objects for the nodes, like MachineConfigNode, where you can check the timestamp of the last reboot, for example. This doesn't seem particularly necessary as the inertia should cover the vast majority of our use cases.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the openshift-eng/jira-lifecycle-plugin repository.

@tchap tchap force-pushed the node-controller-not-degraded-during-upgrade branch from b172632 to 800afca Compare January 20, 2026 13:47
@tchap tchap changed the title WIP: OCPBUGS-38662: node controller: Don't degrade when node rebooting for upgrade OCPBUGS-38662: node controller: Don't degrade when node rebooting for upgrade Jan 20, 2026
@openshift-ci openshift-ci bot removed the do-not-merge/work-in-progress Indicates that a PR should not merge because it is a work in progress. label Jan 20, 2026
@tchap tchap force-pushed the node-controller-not-degraded-during-upgrade branch from 800afca to 43be5d6 Compare January 20, 2026 13:52
Comment thread pkg/operator/staticpod/controller/node/node_controller.go Outdated
@tchap tchap force-pushed the node-controller-not-degraded-during-upgrade branch 2 times, most recently from 4b67aaf to 2dd855e Compare January 20, 2026 14:49
}

func nodeRebootingForUpgrade(node *coreapiv1.Node) bool {
return node.Annotations[machineConfigDaemonPostConfigAction] == machineConfigDaemonStateRebooting
Contributor

could we start a conversation with the mco team or the forum-ocp-updates to check what information is available during an upgrade that we could use?

)

// DefaultRebootingNodeDegradedInertia is the default period during which a node rebooting for upgrade is not considered Degraded.
const DefaultRebootingNodeDegradedInertia = 10 * time.Minute
Contributor Author

Actually need to decide on the default value based on how long it usually takes to restart.

Filip proposed something around an hour, but needs to be verified.

@tchap tchap force-pushed the node-controller-not-degraded-during-upgrade branch 2 times, most recently from 9b6527a to eeb0eae Compare January 23, 2026 09:58
degradedMsg = fmt.Sprintf("node %q not ready since %s because %s (%s)", node.Name, nodeReadyCondition.LastTransitionTime, nodeReadyCondition.Reason, nodeReadyCondition.Message)
}
if len(degradedMsg) > 0 {
if nodeRebootingForUpgrade(node) && !shouldDegradeRebootingNode(nodeReadyCondition, c.rebootingNodeDegradedInertia) {
Contributor

I’m wondering what the chances are that this is the only controller that becomes degraded during an upgrade. Something tells me there might be many more controllers that are unhappy.

I would prefer a more global solution to this problem.

Also, silencing a degraded condition for two hours seems excessive.

What about using StatusSyncer.WithDegradedInertia? I think it has the potential to be a broader solution.

What is the maximum time before a node becomes degraded once the kubelet stops posting status updates?

Thoughts ?

Contributor

@p0lyn0mial just digging this up again with Ondra. I've been using the degraded inertia on CEO for a month now and it has been quite good at shutting up those blips.

Would you be open to discuss this further in openshift/cluster-kube-apiserver-operator#2052 for KAS-O?

Contributor

yeah, could we try the #2025 PR first? It is less invasive and covers all controllers.

Contributor Author

I don't think this is really about doing one OR the other, because we still need to detect upgrades somehow. Upgrades take longer than 10 minutes even without any problems, particularly on bare metal. Even with the general threshold increased to 10 or 15 minutes, it should still mostly cover degradation when NOT upgrading.

Contributor

I don't think this is really about doing one OR the other. Because we still need to detect upgrades somehow.

Isn’t this a platform issue, then? Should it be resolved at the platform level so that all controllers and operators are aware of the ongoing upgrade? (I think there was a KEP around this.)

/cc @benluddy

Contributor Author

@p0lyn0mial Yes, absolutely. There is for example kubernetes/enhancements#5769 . But we need to do something about this before that is available; the KEP is not even merged yet. So we basically decided to do the following (passing on what @atiratree wrote on Slack; I followed up by updating this PR):

  1. Degrade immediately if we know that something is definitely wrong (0 replicas available, etcd lost quorum, no leader present after the election timeout)
  2. Degrade with a small inertia in general if 1. is okay (15m which @tjungblu experimented with and should be sufficient to cover usual disruption cases)
  3. Degrade with a large inertia if 1. is okay and if we can detect that node is being shutdown/upgraded (1-2h? similar to what @tchap proposes in his PR)

Point 3 is relevant in our case to node controller and static pod controller set. And yes, we can share some helper functions like I implemented, but currently the controllers need to decide how to handle node upgrades, I guess.

Treat nodes as Degraded only when they are not rebooting for upgrade.
This prevents Degraded=True from blipping during upgrade.
The node is yet again considered Degraded when it's rebooting for too
long. The default inertia interval is set to 2 hours.
@tchap tchap force-pushed the node-controller-not-degraded-during-upgrade branch from eeb0eae to a25ba0e Compare February 23, 2026 11:45
@tchap tchap changed the title OCPBUGS-38662: node controller: Don't degrade when node rebooting for upgrade OCPBUGS-38662: node controller: Don't degrade when node is upgrading Feb 23, 2026
We mark a node as upgrading when it's Rebooting or Working, but not
Draining as that can take a long time and we don't need to include that.
@tchap tchap force-pushed the node-controller-not-degraded-during-upgrade branch from 7851be9 to fffc58c Compare February 23, 2026 12:15
Comment thread pkg/operator/staticpod/controller/node/node_state.go Outdated
lastAppliedDrain := node.Annotations[machineConfigLastAppliedDrain]
if desiredDrain != "" && lastAppliedDrain != desiredDrain {
return NodeUpgradeStateDraining
}
Member

It might also be interesting to report cordoned (the .spec.unschedulable state) for non-static-pod workloads, since they will most likely not be able to schedule.

Contributor Author

@tchap tchap Feb 23, 2026

I think the caller can check this on their own TBH. This function is mainly for hiding the complexity regarding the annotations. The node is also cordoned while draining, so actually these are overlapping, meaning that the caller would have to know that Draining also implies Cordoned, but for that they can simply check the field themselves.
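The point about overlap can be illustrated with a tiny sketch; the types below are minimal stand-ins for corev1.Node, and the helper name is hypothetical:

```go
package main

import "fmt"

// Minimal stand-ins for the Kubernetes types; in real code this is
// corev1.Node with corev1.NodeSpec.
type NodeSpec struct{ Unschedulable bool }
type Node struct{ Spec NodeSpec }

// Draining already implies cordoned, so a caller that cares about
// schedulability can read the field directly rather than asking the
// upgrade-state helper for an extra Cordoned enum value.
func isCordoned(n *Node) bool { return n.Spec.Unschedulable }

func main() {
	fmt.Println(isCordoned(&Node{Spec: NodeSpec{Unschedulable: true}})) // cordoned
	fmt.Println(isCordoned(&Node{}))                                    // schedulable
}
```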

Member

Sure, it is okay for now. We might revisit this later, when we need this kind of functionality again.

Comment thread pkg/operator/staticpod/controller/node/node_state_test.go Outdated

degradedCondition = degradedCondition.WithReason("MasterNodesReady")

if len(degradedNodes) > 0 {
Member

it might be also useful to consider the total number of healthy and unhealthy nodes. E.g. by defining a maxUnavailable, and if we go over that number we could report degraded right away. We could set a default to 1 and let it be configurable in the NodeController constructor.

Contributor Author

No problem with that, but we can do that in a separate PR, I think.
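For context, the maxUnavailable suggestion from this thread could be sketched like the following; the name and the default of 1 are taken from the comment, everything else is an assumption:

```go
package main

import "fmt"

// degradeRightAway sketches the reviewer's idea: with a configurable
// maxUnavailable budget (default 1, set via the NodeController constructor),
// exceeding it reports Degraded immediately, bypassing any upgrade-aware
// inertia.
func degradeRightAway(unhealthyNodes, maxUnavailable int) bool {
	return unhealthyNodes > maxUnavailable
}

func main() {
	fmt.Println(degradeRightAway(1, 1)) // within budget: inertia still applies
	fmt.Println(degradeRightAway(2, 1)) // over budget: degrade right away
}
```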

Comment thread pkg/operator/staticpod/controller/node/node_controller_test.go
There are other actions available, we need to ensure the action prefix
is "drain-".
@openshift-ci
Contributor

openshift-ci bot commented Feb 23, 2026

@tchap: all tests passed!

Full PR test history. Your PR dashboard.

Details

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository. I understand the commands that are listed here.

@openshift-ci openshift-ci bot requested a review from benluddy March 2, 2026 13:20
@sdodson
Member

sdodson commented Mar 25, 2026

I think one of our design goals was not to allow components to care about whether or not an upgrade is in progress. MCO gets to care about this because it's responsible for handling this, but no one else should AFAIK.

Asking for discussion in #forum-ocp-arch

@tchap
Contributor Author

tchap commented Mar 25, 2026

@sdodson I will wait for the discussion there, but from what I remember right now, this is a broader question related to status conditions. The logic in the node controller is rather trivial. Then there is the whole static pod machinery used for KAS, KCM and what not. The relevant operators degrade when the created pods are not running. This is the StaticPodStateController, which basically iterates over all nodes and checks that the right pod is in the right state. I am not sure how to handle this without knowing whether some nodes are upgrading, i.e. if a node is upgrading, expect that the pod can't be running there and don't degrade.
