PR #6016 - 04-16 09:10

Job: hypershift
FAILURE

Test Summary

Total Tests: 394
Passed: 359
Failed: 14
Skipped: 21

Failed Tests

TestNodePool
0s
TestNodePool/HostedCluster0
57m9.03s
hypershift_framework.go:316: Successfully created hostedcluster e2e-clusters-c7v6f/node-pool-cv6j2 in 30s
hypershift_framework.go:115: Summarizing unexpected conditions for HostedCluster node-pool-cv6j2
eventually.go:104: Failed to get *v1beta1.HostedCluster: client rate limiter Wait returned an error: context deadline exceeded
util.go:2123: Failed to wait for HostedCluster e2e-clusters-c7v6f/node-pool-cv6j2 to have valid conditions in 2s: context deadline exceeded
eventually.go:224: observed *v1beta1.HostedCluster e2e-clusters-c7v6f/node-pool-cv6j2 invalid at RV 129575 after 2s:
eventually.go:227: - incorrect condition: wanted ClusterVersionSucceeding=False, got ClusterVersionSucceeding=True: FromClusterVersion
eventually.go:227: - incorrect condition: wanted ClusterVersionAvailable=False, got ClusterVersionAvailable=True: FromClusterVersion(Done applying 4.19.0-0.ci-2025-04-16-093224-test-ci-op-kjck2tny-latest)
eventually.go:227: - incorrect condition: wanted ClusterVersionProgressing=True, got ClusterVersionProgressing=False: FromClusterVersion(Cluster version is 4.19.0-0.ci-2025-04-16-093224-test-ci-op-kjck2tny-latest)
hypershift_framework.go:194: skipping postTeardown()
hypershift_framework.go:175: skipping teardown, already called
TestNodePool/HostedCluster0/Main
50ms
util.go:218: Successfully waited for kubeconfig to be published for HostedCluster e2e-clusters-c7v6f/node-pool-cv6j2 in 0s
util.go:235: Successfully waited for kubeconfig secret to have data in 0s
util.go:281: Successfully waited for a successful connection to the guest API server in 25ms
TestNodePool/HostedCluster0/Main/TestNTOMachineConfigGetsRolledOut
41m33.05s
nodepool_nto_machineconfig_test.go:68: Starting test NTOMachineConfigRolloutTest
util.go:462: Successfully waited for 2 nodes to become ready for NodePool e2e-clusters-c7v6f/node-pool-cv6j2-test-ntomachineconfig-replace in 21m13s
nodepool_test.go:350: Successfully waited for NodePool e2e-clusters-c7v6f/node-pool-cv6j2-test-ntomachineconfig-replace to have correct status in 10s
util.go:378: Successfully waited for NodePool e2e-clusters-c7v6f/node-pool-cv6j2-test-ntomachineconfig-replace to start config update in 10s
eventually.go:104: Failed to get *v1beta1.NodePool: client rate limiter Wait returned an error: context deadline exceeded
util.go:393: Failed to wait for NodePool e2e-clusters-c7v6f/node-pool-cv6j2-test-ntomachineconfig-replace to finish config update in 20m0s: context deadline exceeded
eventually.go:224: observed *v1beta1.NodePool e2e-clusters-c7v6f/node-pool-cv6j2-test-ntomachineconfig-replace invalid at RV 124719 after 20m0s: incorrect condition: wanted UpdatingConfig=False, got UpdatingConfig=True: AsExpected(Updating config in progress. Target config: e4e5dec9)
util.go:393: *v1beta1.NodePool e2e-clusters-c7v6f/node-pool-cv6j2-test-ntomachineconfig-replace conditions:
util.go:393: ValidArchPlatform=True: AsExpected
util.go:393: UpdatingConfig=True: AsExpected(Updating config in progress. Target config: e4e5dec9)
util.go:393: ReachedIgnitionEndpoint=True: AsExpected
util.go:393: AllMachinesReady=False: Draining(1 of 4 machines are not ready Machine node-pool-cv6j2-test-ntomachineconfig-replace-s68q6-4pgxt: Draining )
util.go:393: AllNodesHealthy=True: AsExpected(All is well)
util.go:393: AutoscalingEnabled=False: AsExpected
util.go:393: ReconciliationActive=True: ReconciliationActive(Reconciliation active on resource)
util.go:393: ValidMachineConfig=True: AsExpected
util.go:393: UpdatingVersion=False: AsExpected
util.go:393: ValidGeneratedPayload=True: AsExpected(Payload generated successfully)
util.go:393: UpdateManagementEnabled=True: AsExpected
util.go:393: ValidReleaseImage=True: AsExpected(Using release image: registry.build01.ci.openshift.org/ci-op-kjck2tny/release@sha256:b07c269e2015634f36bf5c9188d66933ffd58643a19189453f7afbdb6169b95a)
util.go:393: ValidPlatformImage=True: AsExpected(Bootstrap AMI is "ami-0b6b825641a2ea530")
util.go:393: AWSSecurityGroupAvailable=True: AsExpected(NodePool has a default security group)
util.go:393: ValidTuningConfig=True: AsExpected
util.go:393: UpdatingPlatformMachineTemplate=False: AsExpected
util.go:393: AutorepairEnabled=False: AsExpected
util.go:393: Ready=True: AsExpected
TestNodePool/HostedCluster0/Main/TestNodePoolInPlaceUpgrade
30m0.02s
nodepool_upgrade_test.go:100: starting test NodePoolUpgradeTest
eventually.go:258: Failed to get **v1.Node: context deadline exceeded
util.go:462: Failed to wait for 1 nodes to become ready for NodePool e2e-clusters-c7v6f/node-pool-cv6j2-test-inplaceupgrade in 30m0s: context deadline exceeded
eventually.go:383: observed invalid **v1.Node state after 30m0s
eventually.go:400: - observed **v1.Node collection invalid: expected 1 nodes, got 0
TestNodePool/HostedCluster0/Main/TestNodePoolPrevReleaseN1
30m0.01s
nodepool_prev_release_test.go:31: Starting NodePoolPrevReleaseCreateTest.
util.go:462: Failed to wait for 1 nodes to become ready for NodePool e2e-clusters-c7v6f/node-pool-cv6j2-pwlxx in 30m0s: context deadline exceeded
eventually.go:383: observed invalid **v1.Node state after 30m0s
eventually.go:400: - observed **v1.Node collection invalid: expected 1 nodes, got 0
TestNodePool/HostedCluster0/Main/TestNodePoolPrevReleaseN2
30m0s
nodepool_prev_release_test.go:31: Starting NodePoolPrevReleaseCreateTest.
util.go:462: Failed to wait for 1 nodes to become ready for NodePool e2e-clusters-c7v6f/node-pool-cv6j2-k4cx9 in 30m0s: context deadline exceeded
eventually.go:383: observed invalid **v1.Node state after 30m0s
eventually.go:400: - observed **v1.Node collection invalid: expected 1 nodes, got 0
TestNodePool/HostedCluster0/Main/TestNodePoolReplaceUpgrade
30m0.02s
nodepool_upgrade_test.go:100: starting test NodePoolUpgradeTest
eventually.go:258: Failed to get **v1.Node: context deadline exceeded
util.go:462: Failed to wait for 1 nodes to become ready for NodePool e2e-clusters-c7v6f/node-pool-cv6j2-test-replaceupgrade in 30m0s: context deadline exceeded
eventually.go:383: observed invalid **v1.Node state after 30m0s
eventually.go:400: - observed **v1.Node collection invalid: expected 1 nodes, got 0
TestNodePool/HostedCluster2
1h3m38.47s
hypershift_framework.go:316: Successfully created hostedcluster e2e-clusters-q9vq4/node-pool-gbq2g in 2m6s
hypershift_framework.go:115: Summarizing unexpected conditions for HostedCluster node-pool-gbq2g
util.go:2123: Failed to wait for HostedCluster e2e-clusters-q9vq4/node-pool-gbq2g to have valid conditions in 2s: context deadline exceeded
eventually.go:224: observed *v1beta1.HostedCluster e2e-clusters-q9vq4/node-pool-gbq2g invalid at RV 131253 after 2s:
eventually.go:227: - incorrect condition: wanted ClusterVersionProgressing=True, got ClusterVersionProgressing=False: FromClusterVersion(Cluster version is 4.19.0-0.ci-2025-04-16-093224-test-ci-op-kjck2tny-latest)
eventually.go:227: - incorrect condition: wanted ClusterVersionSucceeding=False, got ClusterVersionSucceeding=True: FromClusterVersion
eventually.go:227: - incorrect condition: wanted ClusterVersionAvailable=False, got ClusterVersionAvailable=True: FromClusterVersion(Done applying 4.19.0-0.ci-2025-04-16-093224-test-ci-op-kjck2tny-latest)
hypershift_framework.go:194: skipping postTeardown()
hypershift_framework.go:175: skipping teardown, already called
TestNodePool/HostedCluster2/Main
30ms
util.go:218: Successfully waited for kubeconfig to be published for HostedCluster e2e-clusters-q9vq4/node-pool-gbq2g in 0s
util.go:235: Successfully waited for kubeconfig secret to have data in 0s
util.go:281: Successfully waited for a successful connection to the guest API server in 25ms
TestNodePool/HostedCluster2/Main/TestAdditionalTrustBundlePropagation
35m57.05s
nodepool_additionalTrustBundlePropagation_test.go:36: Starting AdditionalTrustBundlePropagationTest.
util.go:462: Successfully waited for 1 nodes to become ready for NodePool e2e-clusters-q9vq4/node-pool-gbq2g-test-additional-trust-bundle-propagation in 5m39s
nodepool_test.go:350: Successfully waited for NodePool e2e-clusters-q9vq4/node-pool-gbq2g-test-additional-trust-bundle-propagation to have correct status in 8s
eventually.go:104: Failed to get *v1beta1.NodePool: client rate limiter Wait returned an error: context deadline exceeded
nodepool_test.go:350: Failed to wait for NodePool e2e-clusters-q9vq4/node-pool-gbq2g-test-additional-trust-bundle-propagation to have correct status in 10m0s: context deadline exceeded
eventually.go:224: observed *v1beta1.NodePool e2e-clusters-q9vq4/node-pool-gbq2g-test-additional-trust-bundle-propagation invalid at RV 127789 after 10m0s:
eventually.go:227: - incorrect condition: wanted AllMachinesReady=True, got AllMachinesReady=False: Draining(1 of 2 machines are not ready Machine node-pool-gbq2g-test-additional-trust-bundle-propagation-v9z9tx: Draining )
eventually.go:227: - incorrect condition: wanted UpdatingConfig=False, got UpdatingConfig=True: AsExpected(Updating config in progress. Target config: f9f4572e)
TestNodePool/HostedCluster2/Main/TestAdditionalTrustBundlePropagation/AdditionalTrustBundlePropagationTest
20m10.03s
nodepool_additionalTrustBundlePropagation_test.go:70: Updating hosted cluster with additional trust bundle. Bundle: additional-trust-bundle
nodepool_additionalTrustBundlePropagation_test.go:78: Successfully waited for Waiting for NodePool e2e-clusters-q9vq4/node-pool-gbq2g-test-additional-trust-bundle-propagation to begin updating in 10s
eventually.go:104: Failed to get *v1beta1.NodePool: client rate limiter Wait returned an error: context deadline exceeded
nodepool_additionalTrustBundlePropagation_test.go:92: Failed to wait for Waiting for NodePool e2e-clusters-q9vq4/node-pool-gbq2g-test-additional-trust-bundle-propagation to stop updating in 20m0s: context deadline exceeded
eventually.go:224: observed *v1beta1.NodePool e2e-clusters-q9vq4/node-pool-gbq2g-test-additional-trust-bundle-propagation invalid at RV 127789 after 20m0s: incorrect condition: wanted UpdatingConfig=False, got UpdatingConfig=True: AsExpected(Updating config in progress. Target config: f9f4572e)
nodepool_additionalTrustBundlePropagation_test.go:92: *v1beta1.NodePool e2e-clusters-q9vq4/node-pool-gbq2g-test-additional-trust-bundle-propagation conditions:
nodepool_additionalTrustBundlePropagation_test.go:92: ReconciliationActive=True: ReconciliationActive(Reconciliation active on resource)
nodepool_additionalTrustBundlePropagation_test.go:92: ValidMachineConfig=True: AsExpected
nodepool_additionalTrustBundlePropagation_test.go:92: UpdatingConfig=True: AsExpected(Updating config in progress. Target config: f9f4572e)
nodepool_additionalTrustBundlePropagation_test.go:92: UpdatingVersion=False: AsExpected
nodepool_additionalTrustBundlePropagation_test.go:92: ValidGeneratedPayload=True: AsExpected(Payload generated successfully)
nodepool_additionalTrustBundlePropagation_test.go:92: AutoscalingEnabled=False: AsExpected
nodepool_additionalTrustBundlePropagation_test.go:92: ValidReleaseImage=True: AsExpected(Using release image: registry.build01.ci.openshift.org/ci-op-kjck2tny/release@sha256:b07c269e2015634f36bf5c9188d66933ffd58643a19189453f7afbdb6169b95a)
nodepool_additionalTrustBundlePropagation_test.go:92: ValidArchPlatform=True: AsExpected
nodepool_additionalTrustBundlePropagation_test.go:92: ReachedIgnitionEndpoint=True: AsExpected
nodepool_additionalTrustBundlePropagation_test.go:92: AllMachinesReady=False: Draining(1 of 2 machines are not ready Machine node-pool-gbq2g-test-additional-trust-bundle-propagation-v9z9tx: Draining )
nodepool_additionalTrustBundlePropagation_test.go:92: AllNodesHealthy=True: AsExpected(All is well)
nodepool_additionalTrustBundlePropagation_test.go:92: UpdateManagementEnabled=True: AsExpected
nodepool_additionalTrustBundlePropagation_test.go:92: ValidPlatformImage=True: AsExpected(Bootstrap AMI is "ami-0b6b825641a2ea530")
nodepool_additionalTrustBundlePropagation_test.go:92: AWSSecurityGroupAvailable=True: AsExpected(NodePool has a default security group)
nodepool_additionalTrustBundlePropagation_test.go:92: ValidTuningConfig=True: AsExpected
nodepool_additionalTrustBundlePropagation_test.go:92: UpdatingPlatformMachineTemplate=False: AsExpected
nodepool_additionalTrustBundlePropagation_test.go:92: Ready=True: AsExpected
nodepool_additionalTrustBundlePropagation_test.go:92: AutorepairEnabled=False: AsExpected
TestUpgradeControlPlane
33m20.69s
control_plane_upgrade_test.go:23: Starting control plane upgrade test. FromImage: registry.build01.ci.openshift.org/ci-op-kjck2tny/release@sha256:df4da7ff30e4d78c9bee39f38cf41bc0b78e044e52f373eec4c1bbcc308aaa5e, toImage: registry.build01.ci.openshift.org/ci-op-kjck2tny/release@sha256:b07c269e2015634f36bf5c9188d66933ffd58643a19189453f7afbdb6169b95a
hypershift_framework.go:316: Successfully created hostedcluster e2e-clusters-scb5w/control-plane-upgrade-zs8rm in 2m1s
hypershift_framework.go:115: Summarizing unexpected conditions for HostedCluster control-plane-upgrade-zs8rm
eventually.go:104: Failed to get *v1beta1.HostedCluster: client rate limiter Wait returned an error: context deadline exceeded
util.go:2123: Failed to wait for HostedCluster e2e-clusters-scb5w/control-plane-upgrade-zs8rm to have valid conditions in 2s: context deadline exceeded
eventually.go:224: observed *v1beta1.HostedCluster e2e-clusters-scb5w/control-plane-upgrade-zs8rm invalid at RV 77843 after 2s: incorrect condition: wanted Degraded=False, got Degraded=True: UnavailableReplicas(openshift-apiserver deployment has 1 unavailable replicas)
hypershift_framework.go:194: skipping postTeardown()
hypershift_framework.go:175: skipping teardown, already called
TestUpgradeControlPlane/ValidateHostedCluster
23m23.05s
util.go:218: Successfully waited for kubeconfig to be published for HostedCluster e2e-clusters-scb5w/control-plane-upgrade-zs8rm in 2m5s
util.go:235: Successfully waited for kubeconfig secret to have data in 0s
eventually.go:104: Failed to get *v1.SelfSubjectReview: Post "https://api-control-plane-upgrade-zs8rm.service.ci.hypershift.devcluster.openshift.com:443/apis/authentication.k8s.io/v1/selfsubjectreviews": dial tcp: lookup api-control-plane-upgrade-zs8rm.service.ci.hypershift.devcluster.openshift.com on 172.30.0.10:53: no such host
util.go:281: Successfully waited for a successful connection to the guest API server in 1m43.025s
util.go:462: Successfully waited for 2 nodes to become ready in 6m9s
util.go:494: Successfully waited for HostedCluster e2e-clusters-scb5w/control-plane-upgrade-zs8rm to rollout in 3m26s
eventually.go:104: Failed to get *v1beta1.HostedCluster: client rate limiter Wait returned an error: context deadline exceeded
util.go:2123: Failed to wait for HostedCluster e2e-clusters-scb5w/control-plane-upgrade-zs8rm to have valid conditions in 10m0s: context deadline exceeded
eventually.go:224: observed *v1beta1.HostedCluster e2e-clusters-scb5w/control-plane-upgrade-zs8rm invalid at RV 77843 after 10m0s: incorrect condition: wanted Degraded=False, got Degraded=True: UnavailableReplicas(openshift-apiserver deployment has 1 unavailable replicas)