OCPBUGS-52280: Move to use newer IPsec DaemonSets irrespective of MCP state #2454
Conversation
@pperiyasamy: This pull request references Jira Issue OCPBUGS-36688, which is valid. The bug has been moved to the POST state. 3 validation(s) were run on this bug
Requesting review from QA contact. The bug has been updated to refer to the pull request using the external bug tracker.
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the openshift-eng/jira-lifecycle-plugin repository.
/assign @yuvalk @jcaamano @anuragthehatter @huiran0826
(force-pushed b32b067 to 93d9013)
/retest
/test e2e-aws-ovn-ipsec-upgrade
/test e2e-ovn-ipsec-step-registry
@pperiyasamy: This pull request references Jira Issue OCPBUGS-36688, which is valid. 3 validation(s) were run on this bug
Requesting review from QA contact.
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the openshift-eng/jira-lifecycle-plugin repository.
I am fine with checking the "paused" spec field of the pools for now.
```diff
@@ -288,6 +309,12 @@ spec:
         - -c
         - |
           #!/bin/bash
+          {{ if .IPsecCheckForLibreswan }}
+          if rpm --dbpath=/usr/share/rpm -q libreswan; then
+            echo "host has libreswan and therefore ipsec will be configured by ipsec host daemonset, this ovn ipsec container is always \"alive\""
```
What do you mean here with "is always alive"?
This is just to keep the liveness probe succeeding every time (when the host flavor is actually serving IPsec because the host is already installed with libreswan); otherwise this pod would crashloop.
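For context, a minimal sketch of what that check amounts to in the containerized pod's script (an illustration only; the idle loop and message wording are assumptions, not the exact shipped template):

```bash
#!/bin/bash
# Sketch: if the host already has libreswan, the host-flavored DaemonSet
# owns IPsec, so this containerized pod just stays "alive" (its liveness
# probe keeps succeeding) without configuring anything itself.
if rpm --dbpath=/usr/share/rpm -q libreswan; then
  echo "host has libreswan; ipsec is handled by the host daemonset"
  exec sleep infinity   # idle forever instead of crashlooping
fi
# ...otherwise fall through to the containerized IPsec setup.
```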
pkg/network/ovn_kubernetes.go (outdated)
```go
data.Data["IPsecMachineConfigEnable"] = IPsecMachineConfigEnable
data.Data["OVNIPsecDaemonsetEnable"] = OVNIPsecDaemonsetEnable
data.Data["OVNIPsecEnable"] = OVNIPsecEnable
data.Data["IPsecCheckForLibreswan"] = renderBothIPsecDemonSetsWhenAPoolPausedState
```
Couldn't this just be
```go
data.Data["IPsecCheckForLibreswan"] = renderIPsecHostDaemonSet && renderIPsecContainerizedDaemonSet
```
yes, done.
pkg/network/ovn_kubernetes.go (outdated)
```go
machineConfigPoolPaused := isThereAnyMachineConfigPoolPaused(bootstrapResult.Infra)
isIPsecMachineConfigActiveInUnPausedPools := isIPsecMachineConfigActive(bootstrapResult.Infra, true)
```
I would move these two variables to the same block where renderBothIPsecDemonSetsWhenAPoolPausedState is defined. And then I would elaborate a bit more on the comment of that block, saying that if there are unpaused pools, we wait until those pools have the ipsec machine config active before deploying both daemonsets.
done
pkg/network/ovn_kubernetes.go (outdated)
```diff
@@ -653,7 +664,7 @@ func shouldRenderIPsec(conf *operv1.OVNKubernetesConfig, bootstrapResult *bootst
 // While OVN ipsec is being upgraded and IPsec MachineConfigs deployment is in progress
 // (or) IPsec config in OVN is being disabled, then ipsec deployment is not updated.
-renderIPsecDaemonSetAsCreateWaitOnly = isIPsecMachineConfigNotActiveOnUpgrade || (isOVNIPsecActive && !renderIPsecOVN)
+renderIPsecDaemonSetAsCreateWaitOnly = (isIPsecMachineConfigNotActiveOnUpgrade && !renderBothIPsecDemonSetsWhenAPoolPausedState) || (isOVNIPsecActive && !renderIPsecOVN)
```
This condition is counter-intuitive.
What about
```
... isIPsecMachineConfigNotActiveOnUpgrade || !isIPsecMachineConfigActiveInUnPausedPools ...
```
Also since you changed the condition, please update the comment
The existing condition (isIPsecMachineConfigNotActiveOnUpgrade && !renderBothIPsecDemonSetsWhenAPoolPausedState) helps the case in which both daemonsets can be rendered without the create-wait annotation; that can't be done with the suggested approach.
So I guess what you mean to say is that you want to update the daemonsets if the machine config is active in unpaused pools and inactive in paused pools. But I am missing the reasoning.
- Updating the daemonsets is something we didn't want to do if the machine config was not updated. Why?
- And why can we update them in the case the pools are paused? Are both of these reasonings independent?
> So I guess what you mean to say is that you want to update the daemonsets if the machine config is active in unpaused pools and inactive in paused pools. But I am missing the reasoning.
> Updating the daemonsets is something we didn't want to do if the machine config was not updated. Why?

This is the main issue we are trying to address with this PR: when the ipsec machine config is not active on paused pools, it updates both the host and containerized ipsec daemonsets so that the network upgrade isn't blocked while IPsec stays enabled on the dataplane; otherwise it would stick with the previous version of the ipsec daemonset(s).

> And why can we update them in the case the pools are paused? Are both of these reasonings independent?

When pools are paused and the ipsec machine config is not active on those pools' nodes, the containerized daemonset pod configures IPsec on those nodes and the host-flavor pod has no impact at all. Once the pools are unpaused and the ipsec machine configs are installed, it switches back to the host-flavor pod.
@jcaamano as discussed offline, updated the 4.15 PR (#2449) with the following:
- Update with both daemonsets as long as the ipsec machine config is not active in any of the pools.
- Get rid of checking for 'paused' pools.
- Remove the LegacyIPsecUpgrade checks, as they're not needed anymore since both daemonsets are updated at the start of the upgrade itself.

Will update this PR once the IPsec upgrade CI looks clean there.
pkg/network/ovn_kubernetes.go (outdated)
```diff
 // The containerized ipsec deployment is only rendered during upgrades or
 // for hypershift hosted clusters.
-renderIPsecContainerizedDaemonSet = (renderIPsecDaemonSet && isHypershiftHostedCluster) || isIPsecMachineConfigNotActiveOnUpgrade
+renderIPsecContainerizedDaemonSet = (renderIPsecDaemonSet && isHypershiftHostedCluster) || isIPsecMachineConfigNotActiveOnUpgrade ||
```
Since you changed the condition, please update the comment
done
pkg/network/ovn_kubernetes.go (outdated)
```diff
 // If ipsec is enabled, we render the host ipsec deployment except for
 // hypershift hosted clusters and we need to wait for the ipsec MachineConfig
 // extensions to be active first. We must also render host ipsec deployment
 // at the time of upgrade though user created IPsec Machine Config is not
 // present/active.
-renderIPsecHostDaemonSet = (renderIPsecDaemonSet && isIPsecMachineConfigActive && !isHypershiftHostedCluster) || isIPsecMachineConfigNotActiveOnUpgrade
+renderIPsecHostDaemonSet = (renderIPsecDaemonSet && isIPsecMachineConfigActive && !isHypershiftHostedCluster) ||
```
Since you changed the condition, please update the comment
done
(force-pushed 93d9013 to 90a1608)
/retest
/test ?
@pperiyasamy: The following commands are available to trigger required jobs: …
The following commands are available to trigger optional jobs: …
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.
(force-pushed 90a1608 to 14c5df7)
pkg/bootstrap/types.go (outdated)
```diff
 // MasterMCPs contains machine config pools having master role.
 MasterMCPs []mcfgv1.MachineConfigPool

-// WorkerMCPStatus contains machine config pool statuses for pools having worker role.
-WorkerMCPStatuses []mcfgv1.MachineConfigPoolStatus
+// WorkerMCPs contains machine config pools having worker role.
+WorkerMCPs []mcfgv1.MachineConfigPool
```
Can we just keep the statuses? In theory, status should be all we base our decisions on.
Yes, right, but now we need to rely on MachineConfigPool for a new unit test covering MachineConfigPool in paused and unpaused states. Updated the commit message to reflect this.
Why would you need to unit test that if the functionality does not depend on that anymore? You are not really testing any new code path or anything. That should be an e2e test instead.
Yes @jcaamano, that makes more sense. Reverted back to using only MCP statuses now.
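For illustration, a hedged sketch of what deciding activeness purely from pool statuses could look like (the stand-in type and field semantics are assumptions, not the CNO's actual helper):

```go
package main

import "fmt"

// mcpStatus is a stand-in for the handful of mcfgv1.MachineConfigPoolStatus
// fields this decision needs; the real type carries much more.
type mcpStatus struct {
	MachineCount        int32
	UpdatedMachineCount int32
	HasIPsecConfig      bool // rendered config includes the ipsec extension
}

// ipsecMachineConfigActive: the ipsec machine config counts as active only
// when every pool has rolled it out to all of its machines.
func ipsecMachineConfigActive(statuses []mcpStatus) bool {
	for _, s := range statuses {
		if !s.HasIPsecConfig || s.UpdatedMachineCount != s.MachineCount {
			return false
		}
	}
	return true
}

func main() {
	fmt.Println(ipsecMachineConfigActive([]mcpStatus{
		{MachineCount: 3, UpdatedMachineCount: 3, HasIPsecConfig: true},
		{MachineCount: 2, UpdatedMachineCount: 1, HasIPsecConfig: true}, // still rolling out
	})) // false until the second pool finishes
}
```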
pkg/network/ovn_kubernetes.go (outdated)
```diff
 // The containerized ipsec deployment is only rendered during upgrades or
-// for hypershift hosted clusters.
+// hypershift hosted clusters. We must also render host ipsec daemonset
 renderIPsecContainerizedDaemonSet = (renderIPsecDaemonSet && isHypershiftHostedCluster) || isIPsecMachineConfigNotActiveOnUpgrade
```
Have you checked that the comment block for the method itself (lines 594-608) is accurate?
yes, updated the method comment about new upgrade behavior.
(force-pushed 14c5df7 to ac0a438)
/retest
/retest
/jira refresh
/testwith openshift/cluster-network-operator/master/e2e-aws-ovn-ipsec-upgrade openshift/machine-config-operator#4854
/assign @trozet
QE tested the following and it looks good.
When a machine config pool is in a paused state, it doesn't process any machine config, so during the legacy IPsec upgrade (4.14->4.15), IPsec machine configs may not be installed on the nodes whose pool is paused. In those cases the network operator continues to render the older IPsec daemonsets, which blocks network components from getting upgraded to newer versions. Hence this commit renders the newer IPsec daemonsets immediately, with the new IPsecCheckForLibreswan check ensuring that one of the pods serves IPsec for each node. Once the MCPs are fully rolled out with the ipsec machine config, it goes ahead with rendering only the host-flavored IPsec daemonset.

This brings in new behavior for IPsec daemonset rendering during IPsec deployment, upgrade and node reboot scenarios:
1. Users will notice both daemonsets being rendered at IPsec install (or upgrade) time for a temporary period, until the IPsec machine configs are fully deployed.
2. When a node reboots or a machine config pool goes into progressing state, both daemonsets are rendered. In this scenario, the containerized ipsec daemonset pods are dormant.
3. It removes the legacy upgrade case, as every upgrade is treated the same with this approach.

Signed-off-by: Periyasamy Palanisamy <[email protected]>
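As a rough illustration of the rendering rule this commit describes, a simplified Go sketch (not the actual shouldRenderIPsec code; the function name, signature and inputs are assumptions):

```go
package main

import "fmt"

// renderIPsecDaemonSets sketches the decision described above: while the
// IPsec machine config is not yet active everywhere (e.g. a pool is paused
// or progressing), render BOTH DaemonSets and let the per-node libreswan
// check decide which pod actually serves IPsec.
func renderIPsecDaemonSets(ipsecEnabled, mcActive, hypershift bool) (host, containerized bool) {
	renderBoth := ipsecEnabled && !mcActive && !hypershift
	host = (ipsecEnabled && mcActive && !hypershift) || renderBoth
	containerized = (ipsecEnabled && hypershift) || renderBoth
	return host, containerized
}

func main() {
	// Upgrade in progress with a paused pool: machine config not active yet.
	fmt.Println(renderIPsecDaemonSets(true, false, false)) // true true
	// After all pools have rolled out the ipsec machine config.
	fmt.Println(renderIPsecDaemonSets(true, true, false)) // true false
}
```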
The previous commit c12cdd4 renders the ipsec host daemonset even before the machine configs are deployed on the node, but the ipsec host paths /usr/sbin/ipsec and /usr/libexec/ipsec are not available on the host until libreswan is installed. So the pod fails to come up and goes into pending state because the host volume doesn't exist, and the network CO is blocked from moving from a progressing state to an available state. To fix this problem, mount each path's top-level directory into the ovn-ipsec container; these are system-level directories that are always present.

During an OCP upgrade from a previous 4.15.z to this fix release with the worker pool in a paused state, both the network and machine config cluster operators are upgraded to this fix release, and the new host ipsec deployment is rendered with the libreswan 4.6 package installed in the container. Since the worker nodes are paused, the host still has the libreswan 4.9 package installed, and pluto runs with that version. But this is not a problem with this commit: we mount the /usr/sbin and /usr/libexec directories, and the /usr/sbin/ipsec, /usr/libexec/ipsec/addconn and /usr/libexec/ipsec/_stackmanager commands are used inside the container. ipsec and _stackmanager are bash scripts, which should work without a problem. addconn is a compiled C binary with some dynamic library dependencies, and the container uses this command to validate the /etc/ipsec.conf file. This must also work, because the /usr/libexec/ipsec mount was there previously as well.

```
sh-5.1# ldd /usr/sbin/ipsec
	not a dynamic executable
sh-5.1# ldd /usr/libexec/ipsec/_stackmanager
	not a dynamic executable
sh-5.1# ldd /usr/libexec/ipsec/addconn
	linux-vdso.so.1 (0x00007ffc87bf7000)
	libunbound.so.8 => /lib64/libunbound.so.8 (0x00007f809f5f3000)
	libldns.so.3 => /lib64/libldns.so.3 (0x00007f809f58b000)
	libseccomp.so.2 => /lib64/libseccomp.so.2 (0x00007f809f56b000)
	libc.so.6 => /lib64/libc.so.6 (0x00007f809f200000)
	libssl.so.3 => /lib64/libssl.so.3 (0x00007f809f4c5000)
	libprotobuf-c.so.1 => /lib64/libprotobuf-c.so.1 (0x00007f809f4ba000)
	libevent-2.1.so.7 => /lib64/libevent-2.1.so.7 (0x00007f809f45f000)
	libpython3.9.so.1.0 => /lib64/libpython3.9.so.1.0 (0x00007f809ee00000)
	libcrypto.so.3 => /lib64/libcrypto.so.3 (0x00007f809e800000)
	libnghttp2.so.14 => /lib64/libnghttp2.so.14 (0x00007f809f435000)
	/lib64/ld-linux-x86-64.so.2 (0x00007f809f7b3000)
	libm.so.6 => /lib64/libm.so.6 (0x00007f809ed25000)
	libz.so.1 => /lib64/libz.so.1 (0x00007f809f41b000)
```

Signed-off-by: Periyasamy Palanisamy <[email protected]>
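To make the fix concrete, a hedged sketch of the hostPath change this commit describes (standard Kubernetes fields; the volume names here are made up for illustration):

```yaml
# Sketch: mount the always-present parent directories rather than the
# libreswan-specific paths, which exist on the host only after the
# machine config extension installs the package.
volumes:
- name: host-usr-sbin
  hostPath:
    path: /usr/sbin     # /usr/sbin/ipsec appears here once libreswan lands
- name: host-usr-libexec
  hostPath:
    path: /usr/libexec  # /usr/libexec/ipsec/{addconn,_stackmanager}
```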
(force-pushed fb10a6a to 5392de1)
The CNO started using machine configs from 4.15 for IPsec deployment, so add a check that the machine config operator is at least 4.15 before rolling out IPsec machine configs. Otherwise, during an OCP 4.14->4.15 upgrade, the IPsec machine configs would be rolled out even before MCO is upgraded to 4.15; they would use the ipsec extension from the 4.14 version to install packages, installing libreswan version 4.9 on the node in the interim. This MCO version check ensures IPsec machine configs are rendered only after MCO is upgraded to 4.15, so nodes get the desired libreswan version 4.6.

Signed-off-by: Periyasamy Palanisamy <[email protected]>
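A minimal sketch of the version gate this commit adds (hand-rolled comparison for illustration; the helper name and parsing are assumptions, not the CNO's actual code):

```go
package main

import (
	"fmt"
	"strconv"
	"strings"
)

// mcoAtLeast415 reports whether an "x.y[.z]" MCO version is >= 4.15,
// i.e. whether it is safe to start rendering IPsec MachineConfigs.
func mcoAtLeast415(v string) bool {
	parts := strings.SplitN(v, ".", 3)
	if len(parts) < 2 {
		return false
	}
	major, errA := strconv.Atoi(parts[0])
	minor, errB := strconv.Atoi(parts[1])
	if errA != nil || errB != nil {
		return false
	}
	return major > 4 || (major == 4 && minor >= 15)
}

func main() {
	fmt.Println(mcoAtLeast415("4.14.8")) // false: hold IPsec MachineConfigs
	fmt.Println(mcoAtLeast415("4.15.0")) // true: render IPsec MachineConfigs
}
```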
(force-pushed 5392de1 to cb99bc3)
/lgtm

I have some concerns about running host binaries against container libraries, particularly when combined with RHEL workers, where our coverage might not be as good. I suggested @pperiyasamy crosscheck with the libreswan team whether this is safe to do.
Yes @jcaamano, now that I think about the 4.15 testing we did with RHEL workers, we would hit the issue https://issues.redhat.com/browse/OCPBUGS-28676 due to the rpm db path check (see the next commit).
The rpm db directory is different on RHCOS and RHEL workers, so mounting the /usr/share/rpm directory will not work for RHEL worker nodes. To avoid this, this commit checks the ipsec systemd service on the host to decide which ipsec deployment should be active or dormant.

Signed-off-by: Periyasamy Palanisamy <[email protected]>
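A hedged sketch of what such a host-side check could look like (the /host mount point and unit path are assumptions, not the shipped script):

```bash
# Sketch: decide ownership from the host's ipsec systemd unit rather than
# the rpm database, whose path differs between RHCOS and RHEL workers.
if [ -e /host/usr/lib/systemd/system/ipsec.service ]; then
  echo "host provides ipsec.service; the host daemonset owns IPsec"
  # the containerized pod stays dormant
else
  echo "no host ipsec.service; the containerized daemonset configures IPsec"
fi
```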
(force-pushed b3ee204 to 11c08e6)
I checked with the installer team: starting from 4.19 (inclusive), OCP will not support RHEL workers. cc @gpei
/testwith openshift/cluster-network-operator/master/e2e-aws-ovn-ipsec-upgrade openshift/machine-config-operator#4854
/test e2e-aws-ovn-ipsec-upgrade
/test e2e-aws-ovn-ipsec-upgrade
/retest
The latest ipsec upgrade job failure seems to be a known bug, https://issues.redhat.com/browse/OCPBUGS-36867. Checked the ipsec-connect-wait service and ovn-ipsec-host pod logs; those are clean. Triggered the e2e-aws-ovn-ipsec-upgrade job directly now because openshift/machine-config-operator#4854 is already merged (hope this change is effective in CI builds).
@pperiyasamy: The following tests failed, say /retest to rerun all failed tests or /retest-required to rerun all mandatory failed tests:
Full PR test history. Your PR dashboard. Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository. I understand the commands that are listed here.
/lgtm
[APPROVALNOTIFIER] This PR is APPROVED

This pull-request has been approved by: jcaamano, pperiyasamy. The full list of commands accepted by this bot can be found here. The pull request process is described here.

Needs approval from an approver in each of these files: …
Approvers can indicate their approval by writing /approve in a comment.
Merged 1daca87 into openshift:master.
@pperiyasamy: Jira Issue OCPBUGS-52280: All pull requests linked via external trackers have merged. Jira Issue OCPBUGS-52280 has been moved to the MODIFIED state.
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the openshift-eng/jira-lifecycle-plugin repository.
[ART PR BUILD NOTIFIER] Distgit: cluster-network-operator
When a machine config pool is in a paused state, it doesn't process any machine config, so during the legacy IPsec upgrade (4.14->4.15), IPsec machine configs may not be installed on the nodes whose pool is paused. In those cases the network operator continues to render the older IPsec daemonsets, which blocks network components from getting upgraded to newer versions.

Hence this PR:
1. Renders the newer IPsec daemonsets immediately; the new IPsecCheckForLibreswan check ensures one of the pods serves IPsec for the node. When MCPs are fully rolled out with the ipsec machine config, it goes ahead with rendering only the host-flavored IPsec daemonset. This brings in new behavior for IPsec daemonset rendering during IPsec deployment, upgrade and node reboot scenarios.
2. Mounts /usr/sbin and /usr/libexec instead of the specific ipsec host paths, since the ipsec paths are available only when libreswan is installed on the node (as mentioned in step 1).
3. Adds a check that the MCO version is at least 4.15 before rendering IPsec machine configs, because the 4.14->4.15 upgrade moves IPsec from container to host deployment.

Signed-off-by: Periyasamy Palanisamy [email protected]