PR #6835 - 09-16 09:53

Job: hypershift
FAILURE

Test Summary

Total Tests: 411
Passed: 375
Failed: 12
Skipped: 24

Failed Tests

TestNodePool
0s
TestNodePool/HostedCluster0
39m26.3s
hypershift_framework.go:396: Successfully created hostedcluster e2e-clusters-m9l5q/node-pool-jr8rf in 32s
hypershift_framework.go:119: Summarizing unexpected conditions for HostedCluster node-pool-jr8rf
util.go:2721: Failed to wait for HostedCluster e2e-clusters-m9l5q/node-pool-jr8rf to have valid conditions in 2s: context deadline exceeded
eventually.go:224: observed *v1beta1.HostedCluster e2e-clusters-m9l5q/node-pool-jr8rf invalid at RV 72493 after 2s:
eventually.go:227: - incorrect condition: wanted ClusterVersionSucceeding=False, got ClusterVersionSucceeding=True: FromClusterVersion
eventually.go:227: - incorrect condition: wanted ClusterVersionAvailable=False, got ClusterVersionAvailable=True: FromClusterVersion(Done applying 4.21.0-0.ci-2025-09-16-102000-test-ci-op-rgf88bqk-latest)
eventually.go:227: - incorrect condition: wanted ClusterVersionProgressing=True, got ClusterVersionProgressing=False: FromClusterVersion(Cluster version is 4.21.0-0.ci-2025-09-16-102000-test-ci-op-rgf88bqk-latest)
hypershift_framework.go:239: skipping postTeardown()
hypershift_framework.go:220: skipping teardown, already called
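
Note: the "incorrect condition" lines come from the framework's unexpected-condition summary, which compares the HostedCluster's status conditions against expected values after the test context has already expired. Purely as an illustration (this is not the hypershift e2e framework's code; expectCondition and the sample data are made up for the example), a condition check of that shape over standard metav1.Condition values could use apimachinery's meta.FindStatusCondition helper:

// Minimal sketch (assumed, not the e2e framework's actual helper): check a
// condition list against an expected status and report mismatches in the
// same spirit as the "incorrect condition: wanted X=False, got X=True" lines.
package main

import (
	"fmt"

	"k8s.io/apimachinery/pkg/api/meta"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func expectCondition(conds []metav1.Condition, condType string, want metav1.ConditionStatus) error {
	c := meta.FindStatusCondition(conds, condType)
	if c == nil {
		return fmt.Errorf("condition %s not found", condType)
	}
	if c.Status != want {
		return fmt.Errorf("incorrect condition: wanted %s=%s, got %s=%s: %s",
			condType, want, condType, c.Status, c.Reason)
	}
	return nil
}

func main() {
	// Illustrative conditions mirroring the observed HostedCluster status.
	conds := []metav1.Condition{{
		Type:   "ClusterVersionSucceeding",
		Status: metav1.ConditionTrue,
		Reason: "FromClusterVersion",
	}}
	if err := expectCondition(conds, "ClusterVersionSucceeding", metav1.ConditionFalse); err != nil {
		fmt.Println(err)
	}
}
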
TestNodePool/HostedCluster0/Main
20ms
util.go:259: Successfully waited for kubeconfig to be published for HostedCluster e2e-clusters-m9l5q/node-pool-jr8rf in 0s
util.go:276: Successfully waited for kubeconfig secret to have data in 0s
util.go:330: Successfully waited for a successful connection to the guest API server in 0s
TestNodePool/HostedCluster0/Main/TestNTOMachineConfigAppliedInPlace
18m4.38s
nodepool_nto_machineconfig_test.go:68: Starting test NTOMachineConfigRolloutTest
util.go:513: Successfully waited for 2 nodes to become ready for NodePool e2e-clusters-m9l5q/node-pool-jr8rf-test-ntomachineconfig-inplace in 10m54s
nodepool_test.go:354: Successfully waited for NodePool e2e-clusters-m9l5q/node-pool-jr8rf-test-ntomachineconfig-inplace to have correct status in 0s
util.go:427: Successfully waited for NodePool e2e-clusters-m9l5q/node-pool-jr8rf-test-ntomachineconfig-inplace to start config update in 15s
util.go:443: Successfully waited for NodePool e2e-clusters-m9l5q/node-pool-jr8rf-test-ntomachineconfig-inplace to finish config update in 6m40s
nodepool_machineconfig_test.go:166: Successfully waited for all pods in the DaemonSet kube-system/node-pool-jr8rf-test-ntomachineconfig-inplace to be ready in 15.075s
util.go:513: Successfully waited for 2 nodes to become ready for NodePool e2e-clusters-m9l5q/node-pool-jr8rf-test-ntomachineconfig-inplace in 0s
nodepool_test.go:354: Successfully waited for NodePool e2e-clusters-m9l5q/node-pool-jr8rf-test-ntomachineconfig-inplace to have correct status in 0s
TestNodePool/HostedCluster0/Main/TestNTOMachineConfigAppliedInPlace/EnsureNoCrashingPods
20ms
util.go:259: Successfully waited for kubeconfig to be published for HostedCluster e2e-clusters-m9l5q/node-pool-jr8rf in 0s
util.go:276: Successfully waited for kubeconfig secret to have data in 0s
util.go:755: Container hosted-cluster-config-operator in pod hosted-cluster-config-operator-86448dbd87-lgfmh has a restartCount > 0 (1)
TestNodePool/HostedCluster0/Main/TestNTOMachineConfigGetsRolledOut
20m49.21s
nodepool_nto_machineconfig_test.go:68: Starting test NTOMachineConfigRolloutTest
util.go:513: Successfully waited for 2 nodes to become ready for NodePool e2e-clusters-m9l5q/node-pool-jr8rf-test-ntomachineconfig-replace in 10m54s
nodepool_test.go:354: Successfully waited for NodePool e2e-clusters-m9l5q/node-pool-jr8rf-test-ntomachineconfig-replace to have correct status in 0s
util.go:427: Successfully waited for NodePool e2e-clusters-m9l5q/node-pool-jr8rf-test-ntomachineconfig-replace to start config update in 15s
util.go:443: Successfully waited for NodePool e2e-clusters-m9l5q/node-pool-jr8rf-test-ntomachineconfig-replace to finish config update in 9m40s
nodepool_machineconfig_test.go:166: Successfully waited for all pods in the DaemonSet kube-system/node-pool-jr8rf-test-ntomachineconfig-replace to be ready in 0s
util.go:513: Successfully waited for 2 nodes to become ready for NodePool e2e-clusters-m9l5q/node-pool-jr8rf-test-ntomachineconfig-replace in 0s
nodepool_test.go:354: Successfully waited for NodePool e2e-clusters-m9l5q/node-pool-jr8rf-test-ntomachineconfig-replace to have correct status in 0s
TestNodePool/HostedCluster0/Main/TestNTOMachineConfigGetsRolledOut/EnsureNoCrashingPods
30ms
util.go:259: Successfully waited for kubeconfig to be published for HostedCluster e2e-clusters-m9l5q/node-pool-jr8rf in 0s
util.go:276: Successfully waited for kubeconfig secret to have data in 0s
util.go:755: Container hosted-cluster-config-operator in pod hosted-cluster-config-operator-86448dbd87-lgfmh has a restartCount > 0 (1)
TestNodePool/HostedCluster0/Main/TestNodepoolMachineconfigGetsRolledout
20m39.25s
nodepool_machineconfig_test.go:55: Starting test NodePoolMachineconfigRolloutTest
util.go:513: Successfully waited for 1 nodes to become ready for NodePool e2e-clusters-m9l5q/node-pool-jr8rf-test-machineconfig in 10m39s
nodepool_test.go:354: Successfully waited for NodePool e2e-clusters-m9l5q/node-pool-jr8rf-test-machineconfig to have correct status in 0s
util.go:427: Successfully waited for NodePool e2e-clusters-m9l5q/node-pool-jr8rf-test-machineconfig to start config update in 0s
util.go:443: Successfully waited for NodePool e2e-clusters-m9l5q/node-pool-jr8rf-test-machineconfig to finish config update in 10m0s
nodepool_machineconfig_test.go:166: Successfully waited for all pods in the DaemonSet kube-system/machineconfig-update-checker-replace to be ready in 0s
util.go:513: Successfully waited for 1 nodes to become ready for NodePool e2e-clusters-m9l5q/node-pool-jr8rf-test-machineconfig in 0s
nodepool_test.go:354: Successfully waited for NodePool e2e-clusters-m9l5q/node-pool-jr8rf-test-machineconfig to have correct status in 0s
TestNodePool/HostedCluster0/Main/TestNodepoolMachineconfigGetsRolledout/EnsureNoCrashingPods
20ms
util.go:259: Successfully waited for kubeconfig to be published for HostedCluster e2e-clusters-m9l5q/node-pool-jr8rf in 0s
util.go:276: Successfully waited for kubeconfig secret to have data in 0s
util.go:755: Container hosted-cluster-config-operator in pod hosted-cluster-config-operator-86448dbd87-lgfmh has a restartCount > 0 (1)
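
Note: all three EnsureNoCrashingPods failures above report the same symptom, a single restart of the hosted-cluster-config-operator container. As a rough sketch of that kind of check (assumed for illustration, not the actual util.go:755 implementation), one can scan a pod's containerStatuses for a non-zero restartCount using the core/v1 types:

// Minimal sketch (assumed): flag any container in a pod whose restartCount is
// greater than zero, mirroring the failure message
// "Container X in pod Y has a restartCount > 0".
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func crashingContainers(pod *corev1.Pod) []string {
	var failures []string
	for _, cs := range pod.Status.ContainerStatuses {
		if cs.RestartCount > 0 {
			failures = append(failures, fmt.Sprintf(
				"Container %s in pod %s has a restartCount > 0 (%d)",
				cs.Name, pod.Name, cs.RestartCount))
		}
	}
	return failures
}

func main() {
	// Illustrative pod mirroring the report above.
	pod := &corev1.Pod{}
	pod.Name = "hosted-cluster-config-operator-86448dbd87-lgfmh"
	pod.Status.ContainerStatuses = []corev1.ContainerStatus{
		{Name: "hosted-cluster-config-operator", RestartCount: 1},
	}
	for _, msg := range crashingContainers(pod) {
		fmt.Println(msg)
	}
}
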
TestUpgradeControlPlane
49m33.35s
control_plane_upgrade_test.go:27: Starting control plane upgrade test. FromImage: registry.build01.ci.openshift.org/ci-op-rgf88bqk/release@sha256:be1fc409d22effc12ed6905af7b2432950f7e95f572395d8bce201d54568f843, toImage: registry.build01.ci.openshift.org/ci-op-rgf88bqk/release@sha256:fb515f91e2ecd6fd5ccb754721019f10829797645efd0fb5fbd7d2f8e7f2c9ff
hypershift_framework.go:396: Successfully created hostedcluster e2e-clusters-pjlzl/control-plane-upgrade-lnnbv in 22s
hypershift_framework.go:119: Summarizing unexpected conditions for HostedCluster control-plane-upgrade-lnnbv
util.go:2721: Successfully waited for HostedCluster e2e-clusters-pjlzl/control-plane-upgrade-lnnbv to have valid conditions in 0s
hypershift_framework.go:239: skipping postTeardown()
hypershift_framework.go:220: skipping teardown, already called
TestUpgradeControlPlane/Main
19m10.84s
util.go:259: Successfully waited for kubeconfig to be published for HostedCluster e2e-clusters-pjlzl/control-plane-upgrade-lnnbv in 0s
util.go:276: Successfully waited for kubeconfig secret to have data in 0s
util.go:330: Successfully waited for a successful connection to the guest API server in 0s
control_plane_upgrade_test.go:49: Updating cluster image. Image: registry.build01.ci.openshift.org/ci-op-rgf88bqk/release@sha256:fb515f91e2ecd6fd5ccb754721019f10829797645efd0fb5fbd7d2f8e7f2c9ff
util.go:546: Successfully waited for HostedCluster e2e-clusters-pjlzl/control-plane-upgrade-lnnbv to rollout in 0s
TestUpgradeControlPlane/Main/Verifying_featureGate_status_has_entries_for_the_same_versions_as_clusterVersion
30ms
control_plane_upgrade_test.go:100: version 4.21.0-0.ci-2025-09-16-021925 found in ClusterVersion history but missing in FeatureGate status
control_plane_upgrade_test.go:103: Expected the same number of entries in FeatureGate status (1) as in ClusterVersion history (2)
    Expected
        <int>: 1
    to equal
        <int>: 2
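
Note: the failure means ClusterVersion history recorded two versions while the FeatureGate status carried an entry for only one of them. A minimal sketch of that comparison using the OpenShift config/v1 types (ClusterVersion.Status.History versus FeatureGate.Status.FeatureGates) follows; it is an illustrative reconstruction, not the test's actual code, with the sample versions copied from the log above.

// Minimal sketch (assumed): report every version in ClusterVersion history
// that has no matching entry in FeatureGate status.
package main

import (
	"fmt"

	configv1 "github.com/openshift/api/config/v1"
)

func missingFeatureGateVersions(cv *configv1.ClusterVersion, fg *configv1.FeatureGate) []string {
	present := map[string]bool{}
	for _, d := range fg.Status.FeatureGates {
		present[d.Version] = true
	}
	var missing []string
	for _, h := range cv.Status.History {
		if !present[h.Version] {
			missing = append(missing, h.Version)
		}
	}
	return missing
}

func main() {
	// Illustrative objects mirroring the failure: two history entries,
	// one FeatureGate status entry.
	cv := &configv1.ClusterVersion{}
	cv.Status.History = []configv1.UpdateHistory{
		{Version: "4.21.0-0.ci-2025-09-16-102000-test-ci-op-rgf88bqk-latest"},
		{Version: "4.21.0-0.ci-2025-09-16-021925"},
	}
	fg := &configv1.FeatureGate{}
	fg.Status.FeatureGates = []configv1.FeatureGateDetails{
		{Version: "4.21.0-0.ci-2025-09-16-102000-test-ci-op-rgf88bqk-latest"},
	}
	for _, v := range missingFeatureGateVersions(cv, fg) {
		fmt.Printf("version %s found in ClusterVersion history but missing in FeatureGate status\n", v)
	}
}
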