
OCPBUGS-52280: Move to use newer IPsec DaemonSets irrespective of MCP state #2454

Merged

Conversation


@pperiyasamy pperiyasamy commented Aug 1, 2024

When a machine config pool is in a paused state, it doesn't process any machine config, so during the legacy IPsec upgrade (4.14->4.15) the IPsec machine configs may not be installed on nodes whose pool is paused. In those cases the network operator keeps rendering the older IPsec daemonsets, which blocks the network components from being upgraded to newer versions.

Hence this PR renders the newer IPsec daemonsets immediately; a new IPsecCheckForLibreswan check ensures that exactly one of the pods serves IPsec for a given node. Once the MCPs are fully rolled out with the ipsec machine config, only the host-flavored IPsec daemonset is rendered.

This brings in new behavior for IPsec daemonset rendering during IPsec deployment, upgrade, and node reboot scenarios.

  1. Users will notice both daemonsets being rendered at the time of IPsec install (or) upgrade, for a temporary period until the IPsec machine configs are fully deployed.
  2. When a node reboots or a machine config pool goes into progressing state, both daemonsets are rendered. In this scenario, the containerized ipsec daemonset pods are dormant.
  3. It removes the legacy-upgrade special case, as every upgrade is treated the same with this approach.
  4. It now mounts the top-level system directories /usr/sbin and /usr/libexec instead of specific ipsec host paths; the ipsec paths are only available once libreswan is installed on the node (as mentioned in step 1). A quick node-level check for these paths is sketched after this list.
  5. For the 4.14->4.15 upgrade, which moves IPsec from container to host deployment, the MCO version must be at least 4.15 before the IPsec machine configs start being rendered.
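
For item 4, a minimal sketch of how to confirm on a node whether the ipsec host paths exist yet (the node name is a placeholder):

# The specific ipsec paths only appear after the ipsec machine config installs libreswan;
# the parent directories /usr/sbin and /usr/libexec are always present on the host.
oc debug node/<node-name> -- chroot /host ls -l /usr/sbin/ipsec /usr/libexec/ipsec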

Signed-off-by: Periyasamy Palanisamy [email protected]

@openshift-ci-robot openshift-ci-robot added jira/severity-moderate Referenced Jira bug's severity is moderate for the branch this PR is targeting. jira/valid-reference Indicates that this PR references a valid Jira ticket of any type. jira/valid-bug Indicates that a referenced Jira bug is valid for the branch this PR is targeting. labels Aug 1, 2024
@openshift-ci-robot (Contributor)

@pperiyasamy: This pull request references Jira Issue OCPBUGS-36688, which is valid. The bug has been moved to the POST state.

3 validation(s) were run on this bug
  • bug is open, matching expected state (open)
  • bug target version (4.17.0) matches configured target version for branch (4.17.0)
  • bug is in the state ASSIGNED, which is one of the valid states (NEW, ASSIGNED, POST)

Requesting review from QA contact:
/cc @anuragthehatter

The bug has been updated to refer to the pull request using the external bug tracker.

In response to this:

When a machine config pool is in paused state, the network operator currently does the following.

  1. During a fresh IPsec install, it just keeps waiting for the IPsec machine config to be rolled out on all cluster nodes, and only then starts rendering the IPsec host daemonset, which gets the dataplane into an IPsec-encrypted state. So as long as any machine config pool is in paused state, the cluster never gets IPsec enabled.

  2. During a legacy upgrade, let's say from 4.14 to 4.15, it just continues to render the older 4.14 IPsec daemonsets, which blocks the network cluster operator from being upgraded to 4.15 (this scenario may not happen when the user upgrades IPsec from 4.15 to 4.16).

Hence this PR renders both newer IPsec daemonsets during this MCP pause period. When MCPs are moved to the unpaused state and the IPsec machine configs are installed on them, it goes ahead with rendering only the host-flavored IPsec daemonset.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the openshift-eng/jira-lifecycle-plugin repository.


@pperiyasamy (Member Author)

/retest

@pperiyasamy (Member Author)

/test e2e-aws-ovn-ipsec-upgrade

@pperiyasamy (Member Author)

/test e2e-ovn-ipsec-step-registry

@openshift-ci-robot (Contributor)

@pperiyasamy: This pull request references Jira Issue OCPBUGS-36688, which is valid.

3 validation(s) were run on this bug
  • bug is open, matching expected state (open)
  • bug target version (4.17.0) matches configured target version for branch (4.17.0)
  • bug is in the state POST, which is one of the valid states (NEW, ASSIGNED, POST)

Requesting review from QA contact:
/cc @anuragthehatter

In response to this:

When a machine config pool is in paused state, the network operator currently does the following.

  1. During a fresh IPsec install, it just keeps waiting for the IPsec machine config to be rolled out on all cluster nodes, and only then starts rendering the IPsec host daemonset, which gets the dataplane into an IPsec-encrypted state. So as long as any machine config pool is in paused state, the cluster never gets IPsec enabled.

  2. During a legacy upgrade, let's say from 4.14 to 4.15, it just continues to render the older 4.14 IPsec daemonsets, which blocks the network cluster operator from being upgraded to 4.15 (this scenario may not happen when the user upgrades IPsec from 4.15 to 4.16).

Hence this PR renders both newer IPsec daemonsets during this MCP pause period. When MCPs are moved to the unpaused state and the IPsec machine configs are installed on them, it goes ahead with rendering only the host-flavored IPsec daemonset.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the openshift-eng/jira-lifecycle-plugin repository.

@jcaamano jcaamano (Contributor) left a comment

I am fine with checking the "paused" spec field of the pools for now.

@@ -288,6 +309,12 @@ spec:
- -c
- |
#!/bin/bash
{{ if .IPsecCheckForLibreswan }}
if rpm --dbpath=/usr/share/rpm -q libreswan; then
echo "host has libreswan and therefore ipsec will be configured by ipsec host daemonset, this ovn ipsec container is always \"alive\""
Contributor:

What do you mean here by "is always alive"?

Member Author:

This is just to keep the liveness probe succeeding every time (the host flavor is actually serving IPsec, since the host is already installed with libreswan); otherwise this pod would crashloop.
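
In other words, the probe script ends up shaped roughly like this (the rpm check is the one from the diff above; the fallback health command is only illustrative):

#!/bin/bash
if rpm --dbpath=/usr/share/rpm -q libreswan; then
  # libreswan is installed on the host, so the ipsec-host daemonset serves IPsec on
  # this node; report success so this containerized pod stays "alive" but dormant.
  exit 0
fi
# The host has no libreswan: this containerized pod is the one serving IPsec, so run
# a real health check here (illustrative command).
/usr/libexec/ipsec/whack --status >/dev/null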

data.Data["IPsecMachineConfigEnable"] = IPsecMachineConfigEnable
data.Data["OVNIPsecDaemonsetEnable"] = OVNIPsecDaemonsetEnable
data.Data["OVNIPsecEnable"] = OVNIPsecEnable
data.Data["IPsecCheckForLibreswan"] = renderBothIPsecDemonSetsWhenAPoolPausedState
Contributor:

Couldn't this just be

data.Data["IPsecCheckForLibreswan"] = renderIPsecHostDaemonSet && renderIPsecContainerizedDaemonSet

Member Author:

yes, done.

Comment on lines 624 to 625
machineConfigPoolPaused := isThereAnyMachineConfigPoolPaused(bootstrapResult.Infra)
isIPsecMachineConfigActiveInUnPausedPools := isIPsecMachineConfigActive(bootstrapResult.Infra, true)
Contributor:

I would move these two variables to the same block where renderBothIPsecDemonSetsWhenAPoolPausedState is defined. And then I would elaborate a bit more in the comment of that block, saying that if there are unpaused pools, we wait until those pools have the ipsec machine config active before deploying both daemonsets.

Member Author:

done

@@ -653,7 +664,7 @@ func shouldRenderIPsec(conf *operv1.OVNKubernetesConfig, bootstrapResult *bootst

// While OVN ipsec is being upgraded and IPsec MachineConfigs deployment is in progress
// (or) IPsec config in OVN is being disabled, then ipsec deployment is not updated.
renderIPsecDaemonSetAsCreateWaitOnly = isIPsecMachineConfigNotActiveOnUpgrade || (isOVNIPsecActive && !renderIPsecOVN)
renderIPsecDaemonSetAsCreateWaitOnly = (isIPsecMachineConfigNotActiveOnUpgrade && !renderBothIPsecDemonSetsWhenAPoolPausedState) || (isOVNIPsecActive && !renderIPsecOVN)
Contributor:

This condition is counter-intuitive.

What about

...isIPsecMachineConfigNotActiveOnUpgrade || !isIPsecMachineConfigActiveInUnPausedPools ... 

Also since you changed the condition, please update the comment

Member Author:

The existing condition (isIPsecMachineConfigNotActiveOnUpgrade && !renderBothIPsecDemonSetsWhenAPoolPausedState) covers the case where both daemonsets can be rendered without the create-wait annotation; that can't be done with the suggested approach.

Contributor:

So I guess what you mean is that you want to update the daemonsets if the machine config is active in unpaused pools and inactive in paused pools. But I am missing the reasoning.

  • Updating the daemonsets is something we didn't want to do if the machine config was not updated. Why?
  • And why can we update them in the case where the pools are paused? Are both these reasonings independent?

@pperiyasamy (Member Author), Aug 20, 2024:

So I guess what you mean is that you want to update the daemonsets if the machine config is active in unpaused pools and inactive in paused pools. But I am missing the reasoning.

  • Updating the daemonsets is something we didn't want to do if the machine config was not updated. Why?

This is the main issue we are trying to address with this PR: when the ipsec machine config is not active on paused pools, it updates both the host and containerized ipsec daemonsets so that the network upgrade is not blocked while ipsec stays enabled on the dataplane; otherwise it would stick with the previous version of the ipsec daemonset(s).

  • And why can we update them in the case where the pools are paused? Are both these reasonings independent?

When pools are paused and the ipsec machine config is not active on those pools' nodes, the containerized daemonset pod configures IPsec on those nodes and the host flavor pod has no impact at all.
Once these pools are unpaused and the ipsec machine configs are installed, it switches back to using the host flavor pod.

Member Author:

@jcaamano as discussed offline, updated the 4.15 PR (#2449) with the following:

  1. Update both daemonsets as long as the ipsec machine config is not active in any of the pool(s).
  2. Get rid of checking for 'paused' pools.
  3. Remove the LegacyIPsecUpgrade checks as they're no longer needed, since both daemonsets are updated at the start of the upgrade itself.

Will update this PR once the IPsec upgrade CI looks clean there.


// The containerized ipsec deployment is only rendered during upgrades or
// for hypershift hosted clusters.
renderIPsecContainerizedDaemonSet = (renderIPsecDaemonSet && isHypershiftHostedCluster) || isIPsecMachineConfigNotActiveOnUpgrade
renderIPsecContainerizedDaemonSet = (renderIPsecDaemonSet && isHypershiftHostedCluster) || isIPsecMachineConfigNotActiveOnUpgrade ||
Contributor:

Since you changed the condition, please update the comment

Member Author:

done

// If ipsec is enabled, we render the host ipsec deployment except for
// hypershift hosted clusters and we need to wait for the ipsec MachineConfig
// extensions to be active first. We must also render host ipsec deployment
// at the time of upgrade though user created IPsec Machine Config is not
// present/active.
renderIPsecHostDaemonSet = (renderIPsecDaemonSet && isIPsecMachineConfigActive && !isHypershiftHostedCluster) || isIPsecMachineConfigNotActiveOnUpgrade
renderIPsecHostDaemonSet = (renderIPsecDaemonSet && isIPsecMachineConfigActive && !isHypershiftHostedCluster) ||
Contributor:

Since you changed the condition, please update the comment

Member Author:

done

@pperiyasamy pperiyasamy force-pushed the mcp-pause-ipsec-4.17 branch from 93d9013 to 90a1608 Compare August 10, 2024 00:30
@pperiyasamy (Member Author)

/retest

@pperiyasamy (Member Author)

/test ?

openshift-ci bot commented Aug 21, 2024

@pperiyasamy: The following commands are available to trigger required jobs:

  • /test 4.18-upgrade-from-stable-4.17-images
  • /test e2e-aws-ovn-hypershift-conformance
  • /test e2e-aws-ovn-upgrade
  • /test e2e-aws-ovn-windows
  • /test e2e-azure-ovn-upgrade
  • /test e2e-gcp-ovn
  • /test e2e-gcp-ovn-upgrade
  • /test e2e-metal-ipi-ovn-ipv6
  • /test images
  • /test lint
  • /test unit
  • /test verify

The following commands are available to trigger optional jobs:

  • /test 4.18-upgrade-from-stable-4.17-e2e-aws-ovn-upgrade
  • /test 4.18-upgrade-from-stable-4.17-e2e-azure-ovn-upgrade
  • /test 4.18-upgrade-from-stable-4.17-e2e-gcp-ovn-upgrade
  • /test e2e-aws-hypershift-ovn-kubevirt
  • /test e2e-aws-ovn-ipsec-serial
  • /test e2e-aws-ovn-ipsec-upgrade
  • /test e2e-aws-ovn-local-to-shared-gateway-mode-migration
  • /test e2e-aws-ovn-serial
  • /test e2e-aws-ovn-shared-to-local-gateway-mode-migration
  • /test e2e-aws-ovn-single-node
  • /test e2e-aws-ovn-techpreview-serial
  • /test e2e-azure-ovn
  • /test e2e-azure-ovn-dualstack
  • /test e2e-azure-ovn-manual-oidc
  • /test e2e-gcp-ovn-techpreview
  • /test e2e-metal-ipi-ovn-ipv6-ipsec
  • /test e2e-network-mtu-migration-ovn-ipv4
  • /test e2e-network-mtu-migration-ovn-ipv6
  • /test e2e-openstack-ovn
  • /test e2e-ovn-hybrid-step-registry
  • /test e2e-ovn-ipsec-step-registry
  • /test e2e-ovn-step-registry
  • /test e2e-vsphere-ovn
  • /test e2e-vsphere-ovn-dualstack
  • /test e2e-vsphere-ovn-dualstack-primaryv6
  • /test e2e-vsphere-ovn-windows
  • /test okd-scos-images
  • /test qe-perfscale-aws-ovn-medium-cluster-density
  • /test qe-perfscale-aws-ovn-medium-node-density-cni
  • /test qe-perfscale-aws-ovn-small-cluster-density
  • /test qe-perfscale-aws-ovn-small-node-density-cni
  • /test security

Use /test all to run the following jobs that were automatically triggered:

  • pull-ci-openshift-cluster-network-operator-master-4.18-upgrade-from-stable-4.17-e2e-aws-ovn-upgrade
  • pull-ci-openshift-cluster-network-operator-master-4.18-upgrade-from-stable-4.17-e2e-azure-ovn-upgrade
  • pull-ci-openshift-cluster-network-operator-master-4.18-upgrade-from-stable-4.17-e2e-gcp-ovn-upgrade
  • pull-ci-openshift-cluster-network-operator-master-4.18-upgrade-from-stable-4.17-images
  • pull-ci-openshift-cluster-network-operator-master-e2e-aws-hypershift-ovn-kubevirt
  • pull-ci-openshift-cluster-network-operator-master-e2e-aws-ovn-hypershift-conformance
  • pull-ci-openshift-cluster-network-operator-master-e2e-aws-ovn-local-to-shared-gateway-mode-migration
  • pull-ci-openshift-cluster-network-operator-master-e2e-aws-ovn-serial
  • pull-ci-openshift-cluster-network-operator-master-e2e-aws-ovn-shared-to-local-gateway-mode-migration
  • pull-ci-openshift-cluster-network-operator-master-e2e-aws-ovn-single-node
  • pull-ci-openshift-cluster-network-operator-master-e2e-aws-ovn-upgrade
  • pull-ci-openshift-cluster-network-operator-master-e2e-aws-ovn-windows
  • pull-ci-openshift-cluster-network-operator-master-e2e-azure-ovn
  • pull-ci-openshift-cluster-network-operator-master-e2e-azure-ovn-upgrade
  • pull-ci-openshift-cluster-network-operator-master-e2e-gcp-ovn
  • pull-ci-openshift-cluster-network-operator-master-e2e-gcp-ovn-upgrade
  • pull-ci-openshift-cluster-network-operator-master-e2e-metal-ipi-ovn-ipv6
  • pull-ci-openshift-cluster-network-operator-master-e2e-metal-ipi-ovn-ipv6-ipsec
  • pull-ci-openshift-cluster-network-operator-master-e2e-network-mtu-migration-ovn-ipv4
  • pull-ci-openshift-cluster-network-operator-master-e2e-network-mtu-migration-ovn-ipv6
  • pull-ci-openshift-cluster-network-operator-master-e2e-openstack-ovn
  • pull-ci-openshift-cluster-network-operator-master-e2e-ovn-hybrid-step-registry
  • pull-ci-openshift-cluster-network-operator-master-e2e-ovn-ipsec-step-registry
  • pull-ci-openshift-cluster-network-operator-master-e2e-ovn-step-registry
  • pull-ci-openshift-cluster-network-operator-master-e2e-vsphere-ovn
  • pull-ci-openshift-cluster-network-operator-master-e2e-vsphere-ovn-dualstack
  • pull-ci-openshift-cluster-network-operator-master-e2e-vsphere-ovn-dualstack-primaryv6
  • pull-ci-openshift-cluster-network-operator-master-images
  • pull-ci-openshift-cluster-network-operator-master-lint
  • pull-ci-openshift-cluster-network-operator-master-security
  • pull-ci-openshift-cluster-network-operator-master-unit
  • pull-ci-openshift-cluster-network-operator-master-verify

In response to this:

/test ?

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.

Comment on lines 113 to 117
// MasterMCPs contains machine config pools having master role.
MasterMCPs []mcfgv1.MachineConfigPool

// WorkerMCPStatus contains machine config pool statuses for pools having worker role.
WorkerMCPStatuses []mcfgv1.MachineConfigPoolStatus
// WorkerMCPs contains machine config pools having worker role.
WorkerMCPs []mcfgv1.MachineConfigPool
Contributor:

Can we just keep the statuses? In theory, status should be all we base our decisions on.

Member Author:

Yes, right, but now we need to rely on MachineConfigPool for a new unit test covering MachineConfigPool in paused and unpaused states. Updated the commit message to reflect this.

Contributor:

Why would you need to unit test that if the functionality does not depend on that anymore? You are not really testing any new code path or anything. That should be an e2e test instead.

@pperiyasamy (Member Author), Oct 17, 2024:

Yes @jcaamano, that makes more sense. Reverted back to using only MCP statuses now.
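
For reference, the pool state that drives these decisions can be eyeballed from the cluster with commands like the following (illustrative only; the operator itself reads the bootstrapped MCP statuses):

# Each pool with its paused flag and machine counts
oc get mcp -o custom-columns=NAME:.metadata.name,PAUSED:.spec.paused,MACHINES:.status.machineCount,UPDATED:.status.updatedMachineCount

# Whether a pool's rendered configuration already includes an ipsec machine config
# (the "80-ipsec-*-extensions" naming is an example and may differ)
oc get mcp worker -o jsonpath='{.spec.configuration.source[*].name}' | tr ' ' '\n' | grep -i ipsec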

// The containerized ipsec deployment is only rendered during upgrades or
// for hypershift hosted clusters.
renderIPsecContainerizedDaemonSet = (renderIPsecDaemonSet && isHypershiftHostedCluster) || isIPsecMachineConfigNotActiveOnUpgrade
// hypershift hosted clusters. We must also render host ipsec daemonset
Contributor:

Have you checked that the comment block for the method itself (lines 594-608) is accurate?

Member Author:

Yes, updated the method comment about the new upgrade behavior.

@pperiyasamy (Member Author)

/retest

1 similar comment
@pperiyasamy (Member Author)

/retest

@pperiyasamy (Member Author)

/jira refresh

@openshift-ci-robot openshift-ci-robot removed the jira/valid-bug Indicates that a referenced Jira bug is valid for the branch this PR is targeting. label Sep 16, 2024
@pperiyasamy (Member Author)

/testwith openshift/cluster-network-operator/master/e2e-aws-ovn-ipsec-upgrade openshift/machine-config-operator#4854

@pperiyasamy (Member Author)

/assign @trozet

@anuragthehatter

QE tested the following and it looks good.

Upgrade test:
1. Install 4.18, enable IPsec, and put one of the machine config pools into paused state.
2. Start the OCP upgrade to the PR build.
3. Both the ovn-ipsec-containerized and ovn-ipsec-host daemonsets are rendered when the MCP goes into progressing state.
4. When the OCP upgrade finishes, the cluster is rendered only with the ovn-ipsec-host daemonset (because even the paused pool also ends up with the ipsec machine config installed).

IPsec enable test:
1. Bring up an OCP cluster from the PR.
2. Put a machine config pool into paused state.
3. Enable IPsec.
4. Ensure the IPsec machine configs are installed on the unpaused pools; both ipsec daemonsets are rendered.
5. Unpause the MC pool; the IPsec machine config is now installed on the unpaused pool.
6. At the end, the cluster is rendered only with the ovn-ipsec-host daemonset.
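
A rough sketch of driving the same flow by hand with oc (pool name, IPsec mode, and namespace are the usual defaults; adjust as needed):

# Pause the worker pool
oc patch mcp worker --type merge -p '{"spec":{"paused":true}}'

# Enable IPsec for OVN-Kubernetes
oc patch networks.operator.openshift.io cluster --type merge \
  -p '{"spec":{"defaultNetwork":{"ovnKubernetesConfig":{"ipsecConfig":{"mode":"Full"}}}}}'

# While the paused pool lacks the ipsec machine config, both daemonsets should be present
oc get ds -n openshift-ovn-kubernetes | grep ovn-ipsec

# Unpause and wait for the machine config rollout; only ovn-ipsec-host should remain
oc patch mcp worker --type merge -p '{"spec":{"paused":false}}'
oc get mcp worker -w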

When a machine config pool is in a paused state, it doesn't process any
machine config, so during the legacy IPsec upgrade (4.14->4.15) the IPsec
machine configs may not be installed on nodes whose pool is paused.
In those cases the network operator keeps rendering the older IPsec daemonsets,
which blocks network components from being upgraded to newer versions.
Hence this commit renders the newer IPsec daemonsets immediately; a new
IPsecCheckForLibreswan check ensures that one of the pods serves IPsec for the
node. When the MCPs are fully rolled out with the ipsec machine config, it
goes ahead with rendering only the host-flavored IPsec daemonset.

It brings in new behavior for IPsec daemonset rendering during IPsec deployment,
upgrade and node reboot scenarios.

1. Users will notice both daemonsets being rendered at the time of IPsec install
(or) upgrade for a temporary period until the IPsec machine configs are fully deployed.
2. When a node reboots or a machine config pool goes into progressing state,
both daemonsets are rendered. In this scenario, the containerized ipsec daemonset
pods are dormant.
3. It removes the legacy upgrade case as every upgrade is treated the same
with this approach.

Signed-off-by: Periyasamy Palanisamy <[email protected]>
The previous commit c12cdd4 renders the ipsec host
daemonset even before the machine config is deployed on the node, but the ipsec
host paths /usr/sbin/ipsec and /usr/libexec/ipsec are not available until
libreswan is installed on the node. So the pod fails to come up and goes into
pending state because the host volume doesn't exist, and the network CO is
blocked from moving from a progressing state to an available state. To fix this
problem, mount the top-level directories into the ovn-ipsec container; these are
system-level directories that are always present.

During an OCP upgrade from a previous 4.15.z to this fix release, with the worker
pool in paused state, both the network and machine config cluster operators are
upgraded to this fix release and the new host ipsec deployment is rendered, which
has the libreswan 4.6 package installed in the container. Since the worker nodes
are paused, the host still has the libreswan 4.9 package installed and pluto runs
at that version. But this is not a problem with this commit: we mount the /usr/sbin
and /usr/libexec directories, and the /usr/sbin/ipsec, /usr/libexec/ipsec/addconn
and /usr/libexec/ipsec/_stackmanager commands are used inside the container.
The ipsec and _stackmanager commands are bash scripts which should work without a
problem. The addconn command is a compiled "C" binary with some dynamic library
dependencies, and the container uses it to validate the /etc/ipsec.conf file. This
must also work because the /usr/libexec/ipsec mount was there previously as well.

sh-5.1# ldd /usr/sbin/ipsec
	not a dynamic executable
sh-5.1# ldd /usr/libexec/ipsec/_stackmanager
	not a dynamic executable
sh-5.1# ldd /usr/libexec/ipsec/addconn
	linux-vdso.so.1 (0x00007ffc87bf7000)
	libunbound.so.8 => /lib64/libunbound.so.8 (0x00007f809f5f3000)
	libldns.so.3 => /lib64/libldns.so.3 (0x00007f809f58b000)
	libseccomp.so.2 => /lib64/libseccomp.so.2 (0x00007f809f56b000)
	libc.so.6 => /lib64/libc.so.6 (0x00007f809f200000)
	libssl.so.3 => /lib64/libssl.so.3 (0x00007f809f4c5000)
	libprotobuf-c.so.1 => /lib64/libprotobuf-c.so.1 (0x00007f809f4ba000)
	libevent-2.1.so.7 => /lib64/libevent-2.1.so.7 (0x00007f809f45f000)
	libpython3.9.so.1.0 => /lib64/libpython3.9.so.1.0 (0x00007f809ee00000)
	libcrypto.so.3 => /lib64/libcrypto.so.3 (0x00007f809e800000)
	libnghttp2.so.14 => /lib64/libnghttp2.so.14 (0x00007f809f435000)
	/lib64/ld-linux-x86-64.so.2 (0x00007f809f7b3000)
	libm.so.6 => /lib64/libm.so.6 (0x00007f809ed25000)
	libz.so.1 => /lib64/libz.so.1 (0x00007f809f41b000)

Signed-off-by: Periyasamy Palanisamy <[email protected]>
@pperiyasamy pperiyasamy force-pushed the mcp-pause-ipsec-4.17 branch from fb10a6a to 5392de1 Compare March 17, 2025 18:59
The CNO started using machine configs from 4.15 for the IPsec deployment, so add
a check that the machine config operator is at least 4.15 before rolling out the
IPsec machine configs. Otherwise, during an OCP 4.14->4.15 upgrade, the IPsec
machine configs would be rolled out even before the MCO is upgraded to 4.15; they
would use the ipsec extension from the 4.14 version to install packages, installing
the libreswan 4.9 version on the node in the interim. So this MCO version check
ensures the IPsec machine configs are rendered only after the MCO is upgraded to
4.15 and nodes get the desired libreswan version 4.6.

Signed-off-by: Periyasamy Palanisamy <[email protected]>
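
For reference, the version this check keys off can be inspected like this (illustrative command; the operator reads it from the cluster operator status during bootstrap):

# IPsec machine configs are rendered only once this reports >= 4.15
oc get clusteroperator machine-config -o jsonpath='{.status.versions[?(@.name=="operator")].version}'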
@pperiyasamy pperiyasamy force-pushed the mcp-pause-ipsec-4.17 branch from 5392de1 to cb99bc3 Compare March 17, 2025 19:03
@jcaamano (Contributor)

/lgtm

I have some concerns running host binaries against container libraries, particularly when combined with RHEL workers, where our coverage might not be as good. I suggested @pperiyasamy crosscheck with the libreswan team whether this is safe to do.

@openshift-ci openshift-ci bot added the lgtm Indicates that a PR is ready to be merged. label Mar 17, 2025
@pperiyasamy (Member Author)

particularly when combined with RHEL workers where our coverage might not be as good.

Yes @jcaamano, thinking now about the 4.15 testing that we did with RHEL workers, we would hit the issue https://issues.redhat.com//browse/OCPBUGS-28676 due to the /usr/share/rpm path. Will have to fix this. Thanks for bringing this up.

@openshift-ci openshift-ci bot removed the lgtm Indicates that a PR is ready to be merged. label Mar 17, 2025
The rpm db directory is different on RHCOS and RHEL workers, so mounting the
/usr/share/rpm directory will not work for RHEL worker nodes. To avoid this,
this commit checks the ipsec systemd service on the host to decide which ipsec
deployment should be active or dormant.

Signed-off-by: Periyasamy Palanisamy <[email protected]>
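
Conceptually, the containerized pod's check ends up shaped like the sketch below (the mount path and exact command are illustrative; the real rendered template may differ), assuming the host root filesystem is mounted into the pod at /host:

#!/bin/bash
# If the host's ipsec systemd service is enabled, host-based IPsec owns this node,
# so keep this containerized pod alive but dormant.
if systemctl --root=/host is-enabled ipsec.service >/dev/null 2>&1; then
  echo "ipsec is handled by the host daemonset on this node; staying dormant"
  exit 0
fi
# Otherwise this containerized pod is the one serving IPsec; run the real health check here.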
@pperiyasamy pperiyasamy force-pushed the mcp-pause-ipsec-4.17 branch from b3ee204 to 11c08e6 Compare March 17, 2025 22:23

huiran0826 commented Mar 18, 2025

I checked with the installer team: starting from 4.19 (inclusive), OCP will not support RHEL workers. cc @gpei

@pperiyasamy (Member Author)

/testwith openshift/cluster-network-operator/master/e2e-aws-ovn-ipsec-upgrade openshift/machine-config-operator#4854

@pperiyasamy (Member Author)

/test e2e-aws-ovn-ipsec-upgrade
/test e2e-aws-ovn-ipsec-serial

@pperiyasamy (Member Author)

/test e2e-aws-ovn-ipsec-upgrade

@pperiyasamy (Member Author)

/retest

@pperiyasamy (Member Author)

The latest ipsec upgrade job run with the testwith command is failing with the monitor error below (though the upgrade itself went through fine):

static pod lifecycle failure - static pod: "kube-apiserver" in namespace: "openshift-kube-apiserver" for revision: 2 on node: "ip-10-0-80-248.ec2.internal" didn't show up, waited: 5m44s

This seems to be a known bug: https://issues.redhat.com/browse/OCPBUGS-36867. Checked the ipsec-connect-wait service and ovn-ipsec-host pod logs; those are clean.

Triggered the e2e-aws-ovn-ipsec-upgrade job directly now because openshift/machine-config-operator#4854 has already merged (hopefully the change is effective in CI builds).

openshift-ci bot commented Mar 19, 2025

@pperiyasamy: The following tests failed, say /retest to rerun all failed tests or /retest-required to rerun all mandatory failed tests:

Test name | Commit | Required | Rerun command
ci/prow/4.18-upgrade-from-stable-4.17-e2e-azure-ovn-upgrade | c973d6b | false | /test 4.18-upgrade-from-stable-4.17-e2e-azure-ovn-upgrade
ci/prow/e2e-vsphere-ovn-dualstack-primaryv6 | 11c08e6 | false | /test e2e-vsphere-ovn-dualstack-primaryv6
ci/prow/security | 11c08e6 | false | /test security
ci/prow/e2e-aws-hypershift-ovn-kubevirt | 11c08e6 | false | /test e2e-aws-hypershift-ovn-kubevirt
ci/prow/e2e-aws-ovn-serial | 11c08e6 | false | /test e2e-aws-ovn-serial


Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository. I understand the commands that are listed here.

@jcaamano (Contributor)

/lgtm
/hold cancel

@openshift-ci openshift-ci bot added lgtm Indicates that a PR is ready to be merged. and removed do-not-merge/hold Indicates that a PR should not merge because someone has issued a /hold command. labels Mar 19, 2025
openshift-ci bot commented Mar 19, 2025

[APPROVALNOTIFIER] This PR is APPROVED

This pull-request has been approved by: jcaamano, pperiyasamy

The full list of commands accepted by this bot can be found here.

The pull request process is described here

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment

@openshift-merge-bot openshift-merge-bot bot merged commit 1daca87 into openshift:master Mar 19, 2025
33 of 38 checks passed
@openshift-ci-robot (Contributor)

@pperiyasamy: Jira Issue OCPBUGS-52280: All pull requests linked via external trackers have merged:

Jira Issue OCPBUGS-52280 has been moved to the MODIFIED state.

In response to this:

When a machine config pool is in a paused state, it doesn't process any machine config, so during the legacy IPsec upgrade (4.14->4.15) the IPsec machine configs may not be installed on nodes whose pool is paused. In those cases the network operator keeps rendering the older IPsec daemonsets, which blocks the network components from being upgraded to newer versions.

Hence this PR renders the newer IPsec daemonsets immediately; a new IPsecCheckForLibreswan check ensures that exactly one of the pods serves IPsec for a given node. Once the MCPs are fully rolled out with the ipsec machine config, only the host-flavored IPsec daemonset is rendered.

This brings in new behavior for IPsec daemonset rendering during IPsec deployment, upgrade, and node reboot scenarios.

  1. Users will notice both daemonsets being rendered at the time of IPsec install (or) upgrade, for a temporary period until the IPsec machine configs are fully deployed.
  2. When a node reboots or a machine config pool goes into progressing state, both daemonsets are rendered. In this scenario, the containerized ipsec daemonset pods are dormant.
  3. It removes the legacy-upgrade special case, as every upgrade is treated the same with this approach.
  4. It now mounts the top-level system directories /usr/sbin and /usr/libexec instead of specific ipsec host paths; the ipsec paths are only available once libreswan is installed on the node (as mentioned in step 1).
  5. For the 4.14->4.15 upgrade, which moves IPsec from container to host deployment, the MCO version must be at least 4.15 before the IPsec machine configs start being rendered.

Signed-off-by: Periyasamy Palanisamy [email protected]

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the openshift-eng/jira-lifecycle-plugin repository.

@openshift-bot (Contributor)

[ART PR BUILD NOTIFIER]

Distgit: cluster-network-operator
This PR has been included in build cluster-network-operator-container-v4.19.0-202503191342.p0.g1daca87.assembly.stream.el9.
All builds following this will include this PR.

Labels
approved Indicates a PR has been approved by an approver from all required OWNERS files. jira/severity-moderate Referenced Jira bug's severity is moderate for the branch this PR is targeting. jira/valid-bug Indicates that a referenced Jira bug is valid for the branch this PR is targeting. jira/valid-reference Indicates that this PR references a valid Jira ticket of any type. lgtm Indicates that a PR is ready to be merged.
9 participants