PR #6058 - 04-18 18:36

Job: hypershift
FAILURE

Test Summary

Total Tests: 161
Passed: 127
Failed: 9
Skipped: 25

Failed Tests

TestCreateClusterV2
33m16.77s
hypershift_framework.go:316: Successfully created hostedcluster e2e-clusters-hzj8w/create-cluster-v2-tp9rc in 1m32s
hypershift_framework.go:115: Summarizing unexpected conditions for HostedCluster create-cluster-v2-tp9rc
util.go:2123: Successfully waited for HostedCluster e2e-clusters-hzj8w/create-cluster-v2-tp9rc to have valid conditions in 25ms
hypershift_framework.go:194: skipping postTeardown()
hypershift_framework.go:175: skipping teardown, already called
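For reference, the framework's "Summarizing unexpected conditions" step amounts to fetching the HostedCluster and dumping its status conditions. A minimal sketch, assuming the hypershift v1beta1 API types; the helper name is ours:

```go
package example

import (
	"context"
	"fmt"

	hyperv1 "github.com/openshift/hypershift/api/hypershift/v1beta1"
	"k8s.io/apimachinery/pkg/runtime"
	"k8s.io/client-go/tools/clientcmd"
	crclient "sigs.k8s.io/controller-runtime/pkg/client"
)

// summarizeConditions (hypothetical helper) fetches a HostedCluster from
// the management cluster and prints each status condition in the same
// Type=Status: Reason(Message) shape the e2e logs use.
func summarizeConditions(ctx context.Context, kubeconfigPath, namespace, name string) error {
	cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfigPath)
	if err != nil {
		return err
	}
	scheme := runtime.NewScheme()
	// AddToScheme is the generated scheme registration for the v1beta1 API group.
	if err := hyperv1.AddToScheme(scheme); err != nil {
		return err
	}
	c, err := crclient.New(cfg, crclient.Options{Scheme: scheme})
	if err != nil {
		return err
	}
	hc := &hyperv1.HostedCluster{}
	if err := c.Get(ctx, crclient.ObjectKey{Namespace: namespace, Name: name}, hc); err != nil {
		return err
	}
	for _, cond := range hc.Status.Conditions {
		fmt.Printf("%s=%s: %s(%s)\n", cond.Type, cond.Status, cond.Reason, cond.Message)
	}
	return nil
}
```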
TestCreateClusterV2/Main
5m3.1s
util.go:218: Successfully waited for kubeconfig to be published for HostedCluster e2e-clusters-hzj8w/create-cluster-v2-tp9rc in 25ms
util.go:235: Successfully waited for kubeconfig secret to have data in 25ms
util.go:281: Successfully waited for a successful connection to the guest API server in 0s
create_cluster_test.go:1265: fetching mgmt kubeconfig
util.go:218: Successfully waited for kubeconfig to be published for HostedCluster e2e-clusters-hzj8w/create-cluster-v2-tp9rc in 25ms
util.go:235: Successfully waited for kubeconfig secret to have data in 25ms
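The three waits above (kubeconfig published, secret populated, guest API server reachable) boil down to roughly the following sketch; the secret name `<cluster>-admin-kubeconfig` and the `kubeconfig` data key are the usual HyperShift convention, assumed here rather than taken from this log:

```go
package example

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
	"k8s.io/client-go/tools/clientcmd"
)

// connectToGuest (hypothetical helper) reads the published guest kubeconfig
// Secret from the management cluster and confirms the guest API server answers.
func connectToGuest(ctx context.Context, mgmt *rest.Config, namespace, cluster string) error {
	mgmtClient, err := kubernetes.NewForConfig(mgmt)
	if err != nil {
		return err
	}
	// Assumed secret naming convention: "<cluster>-admin-kubeconfig".
	secret, err := mgmtClient.CoreV1().Secrets(namespace).Get(ctx, cluster+"-admin-kubeconfig", metav1.GetOptions{})
	if err != nil {
		return err
	}
	data, ok := secret.Data["kubeconfig"]
	if !ok || len(data) == 0 {
		return fmt.Errorf("kubeconfig secret has no data yet")
	}
	guestCfg, err := clientcmd.RESTConfigFromKubeConfig(data)
	if err != nil {
		return err
	}
	guestClient, err := kubernetes.NewForConfig(guestCfg)
	if err != nil {
		return err
	}
	// Any trivial request proves the connection to the guest API server.
	version, err := guestClient.Discovery().ServerVersion()
	if err != nil {
		return err
	}
	fmt.Printf("guest API server is up, version %s\n", version.GitVersion)
	return nil
}
```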
TestCreateClusterV2/Main/break-glass-credentials
2m50.45s
TestCreateClusterV2/Main/break-glass-credentials/sre-break-glass
2m8.37s
TestCreateClusterV2/Main/break-glass-credentials/sre-break-glass/direct_fetch
40ms
control_plane_pki_operator.go:63: Grabbing customer break-glass credentials from client certificate secret e2e-clusters-hzj8w-create-cluster-v2-tp9rc/sre-system-admin-client-cert-key
control_plane_pki_operator.go:130: validating that the client certificate provides the appropriate access
control_plane_pki_operator.go:116: amending the existing kubeconfig to use break-glass client certificate credentials
control_plane_pki_operator.go:133: issuing SSR to identify the subject we are given using the client certificate
control_plane_pki_operator.go:150: could not send SSR: Post "https://api-create-cluster-v2-tp9rc.aks-e2e.hypershift.azure.devcluster.openshift.com:443/apis/authentication.k8s.io/v1/selfsubjectreviews": dial tcp 130.213.243.52:443: connect: connection refused
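The failing call is a SelfSubjectReview (SSR): the test authenticates with the break-glass client certificate and asks the API server who it thinks the caller is. The `connection refused` means the request never reached the server, so the certificate itself was never exercised. A minimal sketch of the SSR step, assuming the certificate secret uses the standard `tls.crt`/`tls.key` keys:

```go
package example

import (
	"context"
	"fmt"

	authenticationv1 "k8s.io/api/authentication/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
)

// whoAmI (hypothetical helper) builds a client from break-glass
// client-certificate credentials and issues a SelfSubjectReview,
// the same authentication.k8s.io/v1 call the test aborted on.
func whoAmI(ctx context.Context, apiServerURL string, certPEM, keyPEM, caPEM []byte) error {
	cfg := &rest.Config{
		Host: apiServerURL,
		TLSClientConfig: rest.TLSClientConfig{
			CertData: certPEM, // assumed to come from the secret's tls.crt
			KeyData:  keyPEM,  // assumed to come from the secret's tls.key
			CAData:   caPEM,
		},
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		return err
	}
	// An empty SelfSubjectReview is valid; the server fills in Status.UserInfo
	// with the subject it authenticated, proving the certificate works.
	ssr, err := client.AuthenticationV1().SelfSubjectReviews().Create(ctx, &authenticationv1.SelfSubjectReview{}, metav1.CreateOptions{})
	if err != nil {
		return fmt.Errorf("could not send SSR: %w", err)
	}
	fmt.Printf("authenticated as %s (groups %v)\n", ssr.Status.UserInfo.Username, ssr.Status.UserInfo.Groups)
	return nil
}
```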
TestNodePool
0s
TestNodePool/HostedCluster0
44m15.88s
hypershift_framework.go:316: Successfully created hostedcluster e2e-clusters-d2g9t/node-pool-zmdtt in 1m40s
hypershift_framework.go:115: Summarizing unexpected conditions for HostedCluster node-pool-zmdtt
eventually.go:104: Failed to get *v1beta1.HostedCluster: client rate limiter Wait returned an error: context deadline exceeded
util.go:2123: Failed to wait for HostedCluster e2e-clusters-d2g9t/node-pool-zmdtt to have valid conditions in 2s: context deadline exceeded
eventually.go:224: observed *v1beta1.HostedCluster e2e-clusters-d2g9t/node-pool-zmdtt invalid at RV 45528 after 2s:
eventually.go:227: - incorrect condition: wanted ClusterVersionProgressing=True, got ClusterVersionProgressing=False: FromClusterVersion(Cluster version is 4.19.0-0.ci-2025-04-18-190233-test-ci-op-l4c1hq98-latest)
eventually.go:227: - incorrect condition: wanted ClusterVersionAvailable=False, got ClusterVersionAvailable=True: FromClusterVersion(Done applying 4.19.0-0.ci-2025-04-18-190233-test-ci-op-l4c1hq98-latest)
eventually.go:227: - incorrect condition: wanted ClusterVersionSucceeding=False, got ClusterVersionSucceeding=True: FromClusterVersion
hypershift_framework.go:194: skipping postTeardown()
hypershift_framework.go:175: skipping teardown, already called
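The "incorrect condition" lines mean the check expected the cluster to still be mid-rollout (Progressing=True, Available=False) while the ClusterVersion had in fact already finished applying. A sketch of that assertion using `meta.FindStatusCondition`; the expectation map and helper name are ours, reconstructed from the wanted/got pairs above:

```go
package example

import (
	"fmt"

	hyperv1 "github.com/openshift/hypershift/api/hypershift/v1beta1"
	"k8s.io/apimachinery/pkg/api/meta"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// assertStillProgressing (hypothetical helper) checks that the HostedCluster
// reports an in-progress rollout, mirroring the expectations that failed above.
func assertStillProgressing(hc *hyperv1.HostedCluster) error {
	expected := map[string]metav1.ConditionStatus{
		"ClusterVersionProgressing": metav1.ConditionTrue,
		"ClusterVersionAvailable":   metav1.ConditionFalse,
		"ClusterVersionSucceeding":  metav1.ConditionFalse,
	}
	for condType, want := range expected {
		cond := meta.FindStatusCondition(hc.Status.Conditions, condType)
		if cond == nil {
			return fmt.Errorf("condition %s not reported yet", condType)
		}
		if cond.Status != want {
			return fmt.Errorf("incorrect condition: wanted %s=%s, got %s=%s: %s",
				condType, want, condType, cond.Status, cond.Reason)
		}
	}
	return nil
}
```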
TestNodePool/HostedCluster0/Main
360ms
util.go:218: Successfully waited for kubeconfig to be published for HostedCluster e2e-clusters-d2g9t/node-pool-zmdtt in 25ms
util.go:235: Successfully waited for kubeconfig secret to have data in 0s
util.go:281: Successfully waited for a successful connection to the guest API server in 300ms
TestNodePool/HostedCluster0/Main/TestNTOMachineConfigGetsRolledOut
8m59.24s
nodepool_nto_machineconfig_test.go:68: Starting test NTOMachineConfigRolloutTest
util.go:462: Successfully waited for 2 nodes to become ready for NodePool e2e-clusters-d2g9t/node-pool-zmdtt-test-ntomachineconfig-replace in 6m40.125s
nodepool_test.go:350: Successfully waited for NodePool e2e-clusters-d2g9t/node-pool-zmdtt-test-ntomachineconfig-replace to have correct status in 1m19s
eventually.go:104: Failed to get *v1beta1.NodePool: client rate limiter Wait returned an error: context deadline exceeded
util.go:378: Failed to wait for NodePool e2e-clusters-d2g9t/node-pool-zmdtt-test-ntomachineconfig-replace to start config update in 1m0s: context deadline exceeded
eventually.go:224: observed *v1beta1.NodePool e2e-clusters-d2g9t/node-pool-zmdtt-test-ntomachineconfig-replace invalid at RV 27276 after 1m0s: incorrect condition: wanted UpdatingConfig=True, got UpdatingConfig=False: AsExpected
util.go:378: *v1beta1.NodePool e2e-clusters-d2g9t/node-pool-zmdtt-test-ntomachineconfig-replace conditions:
util.go:378:   ValidGeneratedPayload=True: AsExpected(Payload generated successfully)
util.go:378:   AllMachinesReady=True: AsExpected(All is well)
util.go:378:   AllNodesHealthy=True: AsExpected(All is well)
util.go:378:   UpdateManagementEnabled=True: AsExpected
util.go:378:   UpdatingConfig=False: AsExpected
util.go:378:   UpdatingVersion=False: AsExpected
util.go:378:   ValidArchPlatform=True: AsExpected
util.go:378:   ReconciliationActive=True: ReconciliationActive(Reconciliation active on resource)
util.go:378:   ValidMachineConfig=True: AsExpected
util.go:378:   ReachedIgnitionEndpoint=True: AsExpected
util.go:378:   AutoscalingEnabled=False: AsExpected
util.go:378:   ValidReleaseImage=True: AsExpected(Using release image: registry.build01.ci.openshift.org/ci-op-l4c1hq98/release@sha256:49b1f3e601b0e80f79b3bdedba3d9b60b938aaf89faf4eb3991e3c78f21ad4b6)
util.go:378:   ValidTuningConfig=True: AsExpected
util.go:378:   UpdatingPlatformMachineTemplate=False: AsExpected
util.go:378:   AutorepairEnabled=False: AsExpected
util.go:378:   Ready=True: AsExpected
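The timed-out wait is a poll for UpdatingConfig=True on the NodePool: the test pushed a new MachineConfig and expected the NodePool to report a config rollout within 1m0s, but the condition stayed False. A rough equivalent of that wait, assuming the v1beta1 NodePoolCondition shape (string Type, corev1.ConditionStatus Status); the interval, timeout, and helper name are ours:

```go
package example

import (
	"context"
	"time"

	hyperv1 "github.com/openshift/hypershift/api/hypershift/v1beta1"
	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	crclient "sigs.k8s.io/controller-runtime/pkg/client"
)

// waitForConfigUpdate (hypothetical helper) polls the NodePool until its
// UpdatingConfig condition flips to True, or the timeout expires.
func waitForConfigUpdate(ctx context.Context, c crclient.Client, key crclient.ObjectKey) error {
	return wait.PollUntilContextTimeout(ctx, 5*time.Second, time.Minute, true,
		func(ctx context.Context) (bool, error) {
			np := &hyperv1.NodePool{}
			if err := c.Get(ctx, key, np); err != nil {
				// Tolerate transient errors (e.g. the client rate limiter
				// failure seen in the log above) and retry on the next tick.
				return false, nil
			}
			for _, cond := range np.Status.Conditions {
				if cond.Type == "UpdatingConfig" && cond.Status == corev1.ConditionTrue {
					return true, nil
				}
			}
			return false, nil
		})
}
```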