OCPBUGS-38662: node controller: Don't degrade when node is upgrading #2081

tchap wants to merge 6 commits into openshift:master
Conversation
[APPROVALNOTIFIER] This PR is NOT APPROVED

This pull-request has been approved by: tchap. The full list of commands accepted by this bot can be found here.

Needs approval from an approver in each of these files. Approvers can indicate their approval by writing /approve in a comment.
@tchap: This pull request references Jira Issue OCPBUGS-38662, which is valid. The bug has been moved to the POST state. 3 validation(s) were run on this bug. The bug has been updated to refer to the pull request using the external bug tracker.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the openshift-eng/jira-lifecycle-plugin repository.
Force-pushed 03571ff to 6af210e
Force-pushed b172632 to 800afca
Force-pushed 800afca to 43be5d6
Force-pushed 4b67aaf to 2dd855e
```go
func nodeRebootingForUpgrade(node *coreapiv1.Node) bool {
	return node.Annotations[machineConfigDaemonPostConfigAction] == machineConfigDaemonStateRebooting
}
```
Could we start a conversation with the MCO team or #forum-ocp-updates to check what information is available during an upgrade that we could use?
```go
// DefaultRebootingNodeDegradedInertia is the default period during which a node rebooting for upgrade is not considered Degraded.
const DefaultRebootingNodeDegradedInertia = 10 * time.Minute
```
We still need to decide on the default value based on how long a restart usually takes. Filip proposed something around an hour, but that needs to be verified.
Force-pushed 9b6527a to eeb0eae
```go
		degradedMsg = fmt.Sprintf("node %q not ready since %s because %s (%s)", node.Name, nodeReadyCondition.LastTransitionTime, nodeReadyCondition.Reason, nodeReadyCondition.Message)
	}
	if len(degradedMsg) > 0 {
		if nodeRebootingForUpgrade(node) && !shouldDegradeRebootingNode(nodeReadyCondition, c.rebootingNodeDegradedInertia) {
```
I’m wondering what the chances are that this is the only controller that becomes degraded during an upgrade. Something tells me there might be many more controllers that are unhappy.
I would prefer a more global solution to this problem.
Also, silencing a degraded condition for two hours seems excessive.
What about using StatusSyncer.WithDegradedInertia? I think it has the potential to be a broader solution.
How long does it take, at most, for a node to become Degraded once the kubelet stops posting status updates?
Thoughts?
@p0lyn0mial just digging this up again with Ondra. I've been using the degraded inertia on CEO for a month now and it has been quite good at shutting up those blips.
Would you be open to discussing this further in openshift/cluster-kube-apiserver-operator#2052 for KAS-O?
yeah, could we try the #2025 PR first? It is less invasive and covers all controllers.
I don't think this is really about doing one OR the other, because we still need to detect upgrades somehow. Upgrades take longer than 10 minutes even without any problems, particularly on bare metal. A generally increased threshold of 10 or 15 minutes should still be reserved mostly for degradation when NOT upgrading.
I don't think this is really about doing one OR the other. Because we still need to detect upgrades somehow.
Isn't this a platform issue, then? Should it be resolved at the platform level so that all controllers and operators are aware of the ongoing upgrade? (I think there was a KEP around this)
/cc @benluddy
@p0lyn0mial Yes, absolutely. There is for example kubernetes/enhancements#5769. But we need to do something about this before that is available; the KEP is not even merged yet. So we basically decided to do the following (passing on what @atiratree wrote on Slack, which I followed by updating this PR):
1. Degrade immediately if we know that something is definitely wrong (0 replicas available, etcd lost quorum, no leader present after the election timeout).
2. Degrade with a small inertia in general if 1 is okay (15m, which @tjungblu experimented with and which should be sufficient to cover the usual disruption cases).
3. Degrade with a large inertia if 1 is okay and we can detect that the node is being shut down/upgraded (1-2h? similar to what @tchap proposes in his PR).
Point 3 is relevant in our case to the node controller and the static pod controller set. And yes, we can share some helper functions like the ones I implemented, but currently each controller needs to decide how to handle node upgrades, I guess.
Treat nodes as Degraded only when they are not rebooting for upgrade. This prevents Degraded=True from blipping during upgrade.
The node is again considered Degraded when it has been rebooting for too long. The default inertia interval is set to 2 hours.
Force-pushed eeb0eae to a25ba0e
We mark a node as upgrading when it's Rebooting or Working, but not Draining, as draining can take a long time and we don't need to include it.
Force-pushed 7851be9 to fffc58c
```go
	lastAppliedDrain := node.Annotations[machineConfigLastAppliedDrain]
	if desiredDrain != "" && lastAppliedDrain != desiredDrain {
		return NodeUpgradeStateDraining
	}
```
It might also be interesting to report cordoned (the .spec.unschedulable state) for non-static pod workloads, since they will most likely not be able to schedule.
I think the caller can check this on their own, TBH. This function is mainly for hiding the complexity around the annotations. The node is also cordoned while draining, so these overlap; the caller would have to know that Draining also implies Cordoned, but for that they can simply check the field themselves.
Sure, it is okay for now. We might revisit this later, when we need this kind of functionality again.
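A self-contained sketch of the annotation-based detection discussed in this thread. The enum values and annotation keys mirror identifiers visible in the diff, but the exact keys and values should be treated as assumptions here; the "drain-" prefix check follows a later commit in this PR.

```go
package main

import (
	"fmt"
	"strings"
)

// NodeUpgradeState approximates the enum described in the PR.
type NodeUpgradeState int

const (
	NodeUpgradeStateNone NodeUpgradeState = iota
	NodeUpgradeStateDraining
	NodeUpgradeStateWorking
	NodeUpgradeStateRebooting
)

// Assumed MCO annotation keys, reconstructed from the diff context.
const (
	annoState            = "machineconfiguration.openshift.io/state"
	annoPostConfigAction = "machineconfiguration.openshift.io/post-config-action"
	annoDesiredDrain     = "machineconfiguration.openshift.io/desiredDrain"
	annoLastAppliedDrain = "machineconfiguration.openshift.io/lastAppliedDrain"
)

// getNodeUpgradeState decides the upgrade state from Node annotations:
// a pending drain action means Draining (other actions exist, hence the
// "drain-" prefix check), a rebooting post-config action means
// Rebooting, and state == Working covers the rest of the update.
func getNodeUpgradeState(annotations map[string]string) NodeUpgradeState {
	if desired := annotations[annoDesiredDrain]; strings.HasPrefix(desired, "drain-") && annotations[annoLastAppliedDrain] != desired {
		return NodeUpgradeStateDraining
	}
	if annotations[annoPostConfigAction] == "Rebooting" {
		return NodeUpgradeStateRebooting
	}
	if annotations[annoState] == "Working" {
		return NodeUpgradeStateWorking
	}
	return NodeUpgradeStateNone
}

func main() {
	draining := map[string]string{annoDesiredDrain: "drain-rev2", annoLastAppliedDrain: "drain-rev1"}
	fmt.Println(getNodeUpgradeState(draining) == NodeUpgradeStateDraining) // true
	rebooting := map[string]string{annoPostConfigAction: "Rebooting"}
	fmt.Println(getNodeUpgradeState(rebooting) == NodeUpgradeStateRebooting) // true
}
```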
```go
	degradedCondition = degradedCondition.WithReason("MasterNodesReady")

	if len(degradedNodes) > 0 {
```
It might also be useful to consider the total number of healthy and unhealthy nodes, e.g. by defining a maxUnavailable; if we go over that number, we could report Degraded right away. We could set the default to 1 and make it configurable in the NodeController constructor.
No problem with that, but we can do that in a separate PR, I think.
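The suggestion above could be sketched as a simple budget check that short-circuits the inertia. maxUnavailable defaulting to 1 and being configurable on the NodeController constructor are proposals from this thread, not merged behavior.

```go
package main

import "fmt"

// degradeOverBudget reports Degraded immediately once more than
// maxUnavailable nodes are unhealthy, regardless of any per-node
// upgrade or inertia handling.
func degradeOverBudget(unhealthy []string, maxUnavailable int) bool {
	return len(unhealthy) > maxUnavailable
}

func main() {
	fmt.Println(degradeOverBudget([]string{"master-0"}, 1))             // false: within the budget
	fmt.Println(degradeOverBudget([]string{"master-0", "master-1"}, 1)) // true: over the budget
}
```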
There are other actions available, so we need to ensure the action has the "drain-" prefix.
@tchap: all tests passed! Full PR test history. Your PR dashboard.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository. I understand the commands that are listed here.
I think one of our design goals was not to allow components to care about whether or not an upgrade is in progress. MCO gets to care about this because it's responsible for handling it, but no one else should, AFAIK. Asking for discussion in #forum-ocp-arch.

@sdodson I will wait for the discussion there, but from what I remember right now, this is a broader question related to status conditions. The logic in the node controller is rather trivial. Then there is the whole static pod machinery used for KAS, KCM and what not. The relevant operators degrade when the created pods are not running. This is the
Treat nodes as Degraded only when they are not upgrading. This prevents Degraded=True in operators from blipping during upgrade.

There is a helper function called GetNodeUpgradeState that returns a node upgrade state enum value. The upgrade state is decided based on relevant MCO annotations on the Node object. This can be used by other controllers in library-go as well. The idea is then not to mark nodes as Degraded when they are Rebooting or Working, which explicitly excludes draining, because that is unrelated to the node Ready status anyway. This seems the best we can do.
Obsolete Description to Keep Context
Treat nodes as Degraded only when they are not rebooting for upgrade. This prevents Degraded=True in operators from blipping during upgrade.

Based on https://github.com/openshift/machine-config-operator/blob/b4ea81d6885c61c91e1503d5f9f5ebed696092b1/pkg/daemon/update.go#L2812

Now the issue is that the annotation can be removed when the reboot finishes before the node becomes Ready. But this should be like seconds before it's Ready. On top of that, there is a 2-minute inertia for Degraded on the operator status, so this should be plenty to cover the gap. Another potential issue is that failing to update the annotations is only logged; it does not prevent the reboot. So we may have rebooting nodes missing the annotation.

An alternative would be to ignore nodes with machineconfiguration.openshift.io/state == Working, but that seems to be too broad a condition, as it covers basically the whole upgrade process and could therefore show too positive a view of the system. But it is a possibility.

There may be other ways to implement this for sure, but that would require pulling in other objects like machineconfignode for the nodes, where you can check the timestamp of the last reboot, for example. This doesn't seem particularly necessary, as the inertia should cover the vast majority of our use cases. So I focused on what we already have, which are the Node objects.

As discussed, I also ended up adding a Degraded inertia here, which means that the rebooting node is only ignored for a certain period of time, which is by default 10 minutes. In case something goes very wrong during the upgrade...