PR #7100 - 11-06 19:27

Job: hypershift
FAILURE
Test Summary

Total Tests: 328
Passed: 290
Failed: 5
Skipped: 33

Failed Tests

TestNodePool
0s
TestNodePool/HostedCluster0
1h10m43.27s
hypershift_framework.go:423: Successfully created hostedcluster e2e-clusters-6ww9c/node-pool-httxs in 2m21s
hypershift_framework.go:119: Summarizing unexpected conditions for HostedCluster node-pool-httxs
util.go:2835: Failed to wait for HostedCluster e2e-clusters-6ww9c/node-pool-httxs to have valid conditions in 2s: context deadline exceeded
eventually.go:224: observed *v1beta1.HostedCluster e2e-clusters-6ww9c/node-pool-httxs invalid at RV 75173 after 2s:
eventually.go:227: - incorrect condition: wanted ClusterVersionProgressing=True, got ClusterVersionProgressing=False: FromClusterVersion(Cluster version is 4.21.0-0.ci-2025-11-06-195539-test-ci-op-38nj2db7-latest)
eventually.go:227: - incorrect condition: wanted ClusterVersionSucceeding=False, got ClusterVersionSucceeding=True: FromClusterVersion
eventually.go:227: - incorrect condition: wanted ClusterVersionAvailable=False, got ClusterVersionAvailable=True: FromClusterVersion(Done applying 4.21.0-0.ci-2025-11-06-195539-test-ci-op-38nj2db7-latest)
hypershift_framework.go:242: skipping postTeardown()
hypershift_framework.go:223: skipping teardown, already called
TestNodePool/HostedCluster0/Main
160ms
util.go:280: Successfully waited for kubeconfig to be published for HostedCluster e2e-clusters-6ww9c/node-pool-httxs in 50ms
util.go:297: Successfully waited for kubeconfig secret to have data in 25ms
util.go:359: Successfully waited for a successful connection to the guest API server in 50ms
TestNodePool/HostedCluster0/Main/TestNodePoolReplaceUpgrade
31m46.48s
nodepool_upgrade_test.go:100: starting test NodePoolUpgradeTest
util.go:541: Successfully waited for 1 nodes to become ready for NodePool e2e-clusters-6ww9c/node-pool-httxs-test-replaceupgrade in 11m42.075s
nodepool_test.go:358: Successfully waited for NodePool e2e-clusters-6ww9c/node-pool-httxs-test-replaceupgrade to have correct status in 25ms
nodepool_upgrade_test.go:160: Validating all Nodes have the synced labels and taints
nodepool_upgrade_test.go:163: Successfully waited for NodePool e2e-clusters-6ww9c/node-pool-httxs-test-replaceupgrade to have version 4.21.0-0.ci-2025-11-06-012216 in 25ms
nodepool_upgrade_test.go:180: Updating NodePool image. Image: registry.build01.ci.openshift.org/ci-op-38nj2db7/release@sha256:4cbb357d9977c6f590360d201a86a4194fe1fc846bc0dab7ae15afc4af8c8777
nodepool_upgrade_test.go:187: Successfully waited for NodePool e2e-clusters-6ww9c/node-pool-httxs-test-replaceupgrade to start the upgrade in 3.025s
nodepool_upgrade_test.go:200: Failed to wait for NodePool e2e-clusters-6ww9c/node-pool-httxs-test-replaceupgrade to have version 4.21.0-0.ci-2025-11-06-195539-test-ci-op-38nj2db7-latest in 20m0s: context deadline exceeded
eventually.go:224: observed *v1beta1.NodePool e2e-clusters-6ww9c/node-pool-httxs-test-replaceupgrade invalid at RV 73473 after 20m0s:
eventually.go:227: - wanted version 4.21.0-0.ci-2025-11-06-195539-test-ci-op-38nj2db7-latest, got 4.21.0-0.ci-2025-11-06-012216
eventually.go:227: - incorrect condition: wanted UpdatingVersion=False, got UpdatingVersion=True: AsExpected(Updating version in progress. Target version: 4.21.0-0.ci-2025-11-06-195539-test-ci-op-38nj2db7-latest)
nodepool_upgrade_test.go:200: *v1beta1.NodePool e2e-clusters-6ww9c/node-pool-httxs-test-replaceupgrade conditions:
nodepool_upgrade_test.go:200: UpdateManagementEnabled=True: AsExpected
nodepool_upgrade_test.go:200: ReconciliationActive=True: ReconciliationActive(Reconciliation active on resource)
nodepool_upgrade_test.go:200: ValidMachineConfig=True: AsExpected
nodepool_upgrade_test.go:200: ValidGeneratedPayload=True: AsExpected(Payload generated successfully)
nodepool_upgrade_test.go:200: ReachedIgnitionEndpoint=True: AsExpected
nodepool_upgrade_test.go:200: AutoscalingEnabled=False: AsExpected
nodepool_upgrade_test.go:200: ValidReleaseImage=True: AsExpected(Using release image: registry.build01.ci.openshift.org/ci-op-38nj2db7/release@sha256:4cbb357d9977c6f590360d201a86a4194fe1fc846bc0dab7ae15afc4af8c8777)
nodepool_upgrade_test.go:200: ValidArchPlatform=True: AsExpected
nodepool_upgrade_test.go:200: UpdatingConfig=False: AsExpected
nodepool_upgrade_test.go:200: UpdatingVersion=True: AsExpected(Updating version in progress. Target version: 4.21.0-0.ci-2025-11-06-195539-test-ci-op-38nj2db7-latest)
nodepool_upgrade_test.go:200: AllMachinesReady=True: AsExpected(All is well)
nodepool_upgrade_test.go:200: AllNodesHealthy=False: NodeProvisioning(Machine node-pool-httxs-test-replaceupgrade-npgs2-kwm2r: NodeProvisioning)
nodepool_upgrade_test.go:200: ValidPlatformConfig=True: AsExpected(All is well)
nodepool_upgrade_test.go:200: ValidTuningConfig=True: AsExpected
nodepool_upgrade_test.go:200: UpdatingPlatformMachineTemplate=False: AsExpected
nodepool_upgrade_test.go:200: AutorepairEnabled=False: AsExpected
nodepool_upgrade_test.go:200: Ready=True: AsExpected
TestNodePool/HostedCluster0/Main/TestRollingUpgrade
30m0.04s
eventually.go:258: Failed to get **v1.Node: context deadline exceeded
util.go:541: Failed to wait for 2 nodes to become ready for NodePool e2e-clusters-6ww9c/node-pool-httxs-test-rolling-upgrade in 30m0s: context deadline exceeded
eventually.go:383: observed invalid **v1.Node state after 30m0s
eventually.go:400: - observed **v1.Node collection invalid: expected 2 nodes, got 1
eventually.go:400: - observed **v1.Node /node-pool-httxs-test-rolling-upgrade-dsjqc-bcn9n invalid: incorrect condition: wanted Ready=True, got Ready=False: KubeletNotReady(container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?)
util.go:541: *v1.Node /node-pool-httxs-test-rolling-upgrade-dsjqc-bcn9n conditions:
util.go:541: MemoryPressure=False: KubeletHasSufficientMemory(kubelet has sufficient memory available)
util.go:541: DiskPressure=False: KubeletHasNoDiskPressure(kubelet has no disk pressure)
util.go:541: PIDPressure=False: KubeletHasSufficientPID(kubelet has sufficient PID available)
util.go:541: Ready=False: KubeletNotReady(container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?)