PR #6016 - 04-15 15:00

Job: hypershift
FAILURE

Test Summary

Total Tests: 428
Passed: 396
Failed: 11
Skipped: 21

Failed Tests

TestCreateCluster
35m34.64s
create_cluster_test.go:1183: Sufficient zones available for InfrastructureAvailabilityPolicy HighlyAvailable
hypershift_framework.go:316: Successfully created hostedcluster e2e-clusters-ss42c/create-cluster-kqhr6 in 20s
hypershift_framework.go:115: Summarizing unexpected conditions for HostedCluster create-cluster-kqhr6
util.go:2123: Successfully waited for HostedCluster e2e-clusters-ss42c/create-cluster-kqhr6 to have valid conditions in 0s
hypershift_framework.go:194: skipping postTeardown()
hypershift_framework.go:175: skipping teardown, already called
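For context, the framework's "created hostedcluster" step boils down to posting a HostedCluster resource to the management cluster. A minimal sketch, assuming the v1beta1 API and a controller-runtime client; the spec below is illustrative (a real HostedCluster needs more fields, e.g. a pull secret), not the test's actual configuration:

// Minimal sketch, not the e2e framework's actual helper code.
package main

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"sigs.k8s.io/controller-runtime/pkg/client"

	hyperv1 "github.com/openshift/hypershift/api/hypershift/v1beta1"
)

func createHostedCluster(ctx context.Context, c client.Client, releaseImage string) error {
	hc := &hyperv1.HostedCluster{
		ObjectMeta: metav1.ObjectMeta{
			Namespace: "e2e-clusters-ss42c",
			Name:      "create-cluster-kqhr6",
		},
		Spec: hyperv1.HostedClusterSpec{
			Release:  hyperv1.Release{Image: releaseImage},
			Platform: hyperv1.PlatformSpec{Type: hyperv1.AWSPlatform},
			// HighlyAvailable spreads infrastructure across zones, which is
			// why the test first verifies that sufficient zones are available.
			InfrastructureAvailabilityPolicy: hyperv1.HighlyAvailable,
		},
	}
	return c.Create(ctx, hc)
}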
TestCreateCluster/Main
5m52.41s
util.go:218: Successfully waited for kubeconfig to be published for HostedCluster e2e-clusters-ss42c/create-cluster-kqhr6 in 0s
util.go:235: Successfully waited for kubeconfig secret to have data in 0s
util.go:281: Successfully waited for a successful connection to the guest API server in 0s
create_cluster_test.go:1201: fetching mgmt kubeconfig
util.go:218: Successfully waited for kubeconfig to be published for HostedCluster e2e-clusters-ss42c/create-cluster-kqhr6 in 0s
util.go:235: Successfully waited for kubeconfig secret to have data in 0s
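The util.go:218 and util.go:235 waits check the two halves of kubeconfig publication: the HostedCluster status names a secret, and that secret must actually carry kubeconfig data. A minimal sketch of the same check, assuming a controller-runtime client wired with the hypershift scheme (the helper name is ours):

// Sketch only: fetch the guest kubeconfig a HostedCluster publishes.
package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/types"
	"sigs.k8s.io/controller-runtime/pkg/client"

	hyperv1 "github.com/openshift/hypershift/api/hypershift/v1beta1"
)

func guestKubeconfig(ctx context.Context, c client.Client, namespace, name string) ([]byte, error) {
	hc := &hyperv1.HostedCluster{}
	if err := c.Get(ctx, types.NamespacedName{Namespace: namespace, Name: name}, hc); err != nil {
		return nil, err
	}
	// Status.KubeConfig names the secret once the kubeconfig is published.
	if hc.Status.KubeConfig == nil {
		return nil, fmt.Errorf("kubeconfig not yet published for %s/%s", namespace, name)
	}
	secret := &corev1.Secret{}
	key := types.NamespacedName{Namespace: namespace, Name: hc.Status.KubeConfig.Name}
	if err := c.Get(ctx, key, secret); err != nil {
		return nil, err
	}
	data, ok := secret.Data["kubeconfig"]
	if !ok || len(data) == 0 {
		return nil, fmt.Errorf("secret %s has no kubeconfig data yet", key)
	}
	return data, nil
}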
TestCreateCluster/Main/break-glass-credentials
2m9.15s
TestCreateCluster/Main/break-glass-credentials/sre-break-glass
3m43.14s
TestCreateCluster/Main/break-glass-credentials/sre-break-glass/CSR_flow
3m43.11s
pki.go:76: loading certificate/key pair from disk for signer sre-break-glass, use $REGENERATE_PKI to generate new ones
control_plane_pki_operator.go:201: creating CSR "24tpymq17znqhzqf3qzhldogl99o2jlq61oxhwtvmy8f" for signer "sre-break-glass", requesting client auth usages
control_plane_pki_operator.go:211: creating CSRA e2e-clusters-ss42c-create-cluster-kqhr6/24tpymq17znqhzqf3qzhldogl99o2jlq61oxhwtvmy8f to trigger automatic approval of the CSR
control_plane_pki_operator.go:218: Successfully waited for CSR "24tpymq17znqhzqf3qzhldogl99o2jlq61oxhwtvmy8f" to be approved and signed in 1s
control_plane_pki_operator.go:130: validating that the client certificate provides the appropriate access
control_plane_pki_operator.go:116: amending the existing kubeconfig to use break-glass client certificate credentials
control_plane_pki_operator.go:133: issuing SSR to identify the subject we are given using the client certificate
control_plane_pki_operator.go:153: ensuring that the SSR identifies the client certificate as having system:masters power and correct username
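The "create CSR, then wait for it to be approved and signed" half of this flow uses the upstream certificates API; the CSRA object that triggers auto-approval is a hypershift-specific CRD and is omitted here. A hedged sketch with client-go (the signer name is passed in because its exact format in this test is an assumption on our part):

// Sketch of the standard CSR flow only; not the test's actual helper code.
package main

import (
	"context"
	"time"

	certificatesv1 "k8s.io/api/certificates/v1"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

func requestClientCert(ctx context.Context, kc kubernetes.Interface, name, signer string, pemCSR []byte) ([]byte, error) {
	csr := &certificatesv1.CertificateSigningRequest{
		ObjectMeta: metav1.ObjectMeta{Name: name},
		Spec: certificatesv1.CertificateSigningRequestSpec{
			Request:    pemCSR, // PEM-encoded PKCS#10 request for the break-glass client cert
			SignerName: signer,
			Usages:     []certificatesv1.KeyUsage{certificatesv1.UsageClientAuth},
		},
	}
	if _, err := kc.CertificatesV1().CertificateSigningRequests().Create(ctx, csr, metav1.CreateOptions{}); err != nil {
		return nil, err
	}
	var signed []byte
	// Wait until the approver (driven by the CSRA in hypershift) has approved
	// the CSR and the signer has filled in Status.Certificate.
	err := wait.PollUntilContextTimeout(ctx, time.Second, 5*time.Minute, true, func(ctx context.Context) (bool, error) {
		cur, err := kc.CertificatesV1().CertificateSigningRequests().Get(ctx, name, metav1.GetOptions{})
		if err != nil {
			return false, nil // tolerate transient errors; ctx bounds the wait
		}
		for _, cond := range cur.Status.Conditions {
			if cond.Type == certificatesv1.CertificateApproved && cond.Status == corev1.ConditionTrue {
				signed = cur.Status.Certificate
				return len(signed) > 0, nil
			}
		}
		return false, nil
	})
	return signed, err
}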
TestCreateCluster/Main/break-glass-credentials/sre-break-glass/CSR_flow/revocation
3m41.02s
pki.go:76: loading certificate/key pair from disk for signer sre-break-glass, use $REGENERATE_PKI to generate new ones
control_plane_pki_operator.go:253: creating CRR e2e-clusters-ss42c-create-cluster-kqhr6/266a1j9cd3tiav46v52j2htjpscv09qe69v9wyea7p5a to trigger signer certificate revocation
control_plane_pki_operator.go:260: Successfully waited for CRR e2e-clusters-ss42c-create-cluster-kqhr6/266a1j9cd3tiav46v52j2htjpscv09qe69v9wyea7p5a to complete in 3m11s
control_plane_pki_operator.go:273: creating a client using the a certificate from the revoked signer
control_plane_pki_operator.go:116: amending the existing kubeconfig to use break-glass client certificate credentials
control_plane_pki_operator.go:276: issuing SSR to confirm that we're not authorized to contact the server
control_plane_pki_operator.go:279: expected an unauthorized error, got Post "https://api-create-cluster-kqhr6.service.ci.hypershift.devcluster.openshift.com:443/apis/authentication.k8s.io/v1/selfsubjectreviews": dial tcp 52.45.6.109:443: i/o timeout, response &v1.SelfSubjectReview{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"", GenerateName:"", Namespace:"", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Status:v1.SelfSubjectReviewStatus{UserInfo:v1.UserInfo{Username:"", UID:"", Groups:[]string(nil), Extra:map[string]v1.ExtraValue(nil)}}}
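The failing assertion is the last step: a SelfSubjectReview issued with the revoked certificate should come back 401 Unauthorized, but in this run the request died earlier with a dial i/o timeout, so the test observed a network error instead of the expected rejection. A sketch of that assertion, assuming a client-go clientset built from the revoked-credential kubeconfig (the function name is ours):

// Sketch only: assert that a client built from a revoked cert is rejected.
package main

import (
	"context"
	"fmt"

	authenticationv1 "k8s.io/api/authentication/v1"
	apierrors "k8s.io/apimachinery/pkg/api/errors"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

func expectRevoked(ctx context.Context, revokedClient kubernetes.Interface) error {
	_, err := revokedClient.AuthenticationV1().SelfSubjectReviews().Create(ctx, &authenticationv1.SelfSubjectReview{}, metav1.CreateOptions{})
	if apierrors.IsUnauthorized(err) {
		return nil // revocation worked: the apiserver rejected the client cert
	}
	// Any other outcome (success, timeout, different error) fails the check,
	// which is exactly what happened with the i/o timeout above.
	return fmt.Errorf("expected an unauthorized error, got %v", err)
}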
TestNodePool
0s
TestNodePool/HostedCluster2
1h0m13.39s
hypershift_framework.go:316: Successfully created hostedcluster e2e-clusters-9h2dk/node-pool-fklq5 in 28s
hypershift_framework.go:115: Summarizing unexpected conditions for HostedCluster node-pool-fklq5
eventually.go:104: Failed to get *v1beta1.HostedCluster: client rate limiter Wait returned an error: context deadline exceeded
util.go:2123: Failed to wait for HostedCluster e2e-clusters-9h2dk/node-pool-fklq5 to have valid conditions in 2s: context deadline exceeded
eventually.go:224: observed *v1beta1.HostedCluster e2e-clusters-9h2dk/node-pool-fklq5 invalid at RV 143624 after 2s:
eventually.go:227: - incorrect condition: wanted ClusterVersionProgressing=True, got ClusterVersionProgressing=False: FromClusterVersion(Cluster version is 4.19.0-0.ci-2025-04-15-152632-test-ci-op-z8md8ktq-latest)
eventually.go:227: - incorrect condition: wanted ClusterVersionAvailable=False, got ClusterVersionAvailable=True: FromClusterVersion(Done applying 4.19.0-0.ci-2025-04-15-152632-test-ci-op-z8md8ktq-latest)
eventually.go:227: - incorrect condition: wanted ClusterVersionSucceeding=False, got ClusterVersionSucceeding=True: FromClusterVersion
hypershift_framework.go:194: skipping postTeardown()
hypershift_framework.go:175: skipping teardown, already called
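The eventually.go failure above is a condition wait timing out after the client rate limiter consumed the context budget, then reporting which conditions still differed from expectations. Mechanically the wait looks like the sketch below; the exact condition set and expectations the framework checks differ, this only shows the polling shape under assumed wiring:

// Sketch of a HostedCluster condition wait; not the e2e framework's code.
package main

import (
	"context"
	"time"

	"k8s.io/apimachinery/pkg/api/meta"
	"k8s.io/apimachinery/pkg/types"
	"k8s.io/apimachinery/pkg/util/wait"
	"sigs.k8s.io/controller-runtime/pkg/client"

	hyperv1 "github.com/openshift/hypershift/api/hypershift/v1beta1"
)

func waitForClusterVersionAvailable(ctx context.Context, c client.Client, namespace, name string) error {
	return wait.PollUntilContextTimeout(ctx, 5*time.Second, 10*time.Minute, true, func(ctx context.Context) (bool, error) {
		hc := &hyperv1.HostedCluster{}
		if err := c.Get(ctx, types.NamespacedName{Namespace: namespace, Name: name}, hc); err != nil {
			// Transient failures (like the rate-limiter error above) should
			// not abort the poll; the deadline on ctx bounds the total wait.
			return false, nil
		}
		return meta.IsStatusConditionTrue(hc.Status.Conditions, string(hyperv1.ClusterVersionAvailable)), nil
	})
}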
TestNodePool/HostedCluster2/Main
40ms
util.go:218: Successfully waited for kubeconfig to be published for HostedCluster e2e-clusters-9h2dk/node-pool-fklq5 in 0s
util.go:235: Successfully waited for kubeconfig secret to have data in 0s
util.go:281: Successfully waited for a successful connection to the guest API server in 25ms
TestNodePool/HostedCluster2/Main/TestAdditionalTrustBundlePropagation
35m47.05s
nodepool_additionalTrustBundlePropagation_test.go:36: Starting AdditionalTrustBundlePropagationTest.
util.go:462: Successfully waited for 1 nodes to become ready for NodePool e2e-clusters-9h2dk/node-pool-fklq5-test-additional-trust-bundle-propagation in 5m28s
nodepool_test.go:350: Successfully waited for NodePool e2e-clusters-9h2dk/node-pool-fklq5-test-additional-trust-bundle-propagation to have correct status in 9s
eventually.go:104: Failed to get *v1beta1.NodePool: client rate limiter Wait returned an error: context deadline exceeded
nodepool_test.go:350: Failed to wait for NodePool e2e-clusters-9h2dk/node-pool-fklq5-test-additional-trust-bundle-propagation to have correct status in 10m0s: context deadline exceeded
eventually.go:224: observed *v1beta1.NodePool e2e-clusters-9h2dk/node-pool-fklq5-test-additional-trust-bundle-propagation invalid at RV 121479 after 10m0s:
eventually.go:227: - incorrect condition: wanted UpdatingConfig=False, got UpdatingConfig=True: AsExpected(Updating config in progress. Target config: 79ea1b65)
eventually.go:227: - incorrect condition: wanted AllMachinesReady=True, got AllMachinesReady=False: Draining(1 of 2 machines are not ready Machine node-pool-fklq5-test-additional-trust-bundle-propagation-pldlfb: Draining )
TestNodePool/HostedCluster2/Main/TestAdditionalTrustBundlePropagation/AdditionalTrustBundlePropagationTest
20m10.03s
nodepool_additionalTrustBundlePropagation_test.go:70: Updating hosted cluster with additional trust bundle. Bundle: additional-trust-bundle
nodepool_additionalTrustBundlePropagation_test.go:78: Successfully waited for Waiting for NodePool e2e-clusters-9h2dk/node-pool-fklq5-test-additional-trust-bundle-propagation to begin updating in 10s
nodepool_additionalTrustBundlePropagation_test.go:92: Failed to wait for Waiting for NodePool e2e-clusters-9h2dk/node-pool-fklq5-test-additional-trust-bundle-propagation to stop updating in 20m0s: context deadline exceeded
eventually.go:224: observed *v1beta1.NodePool e2e-clusters-9h2dk/node-pool-fklq5-test-additional-trust-bundle-propagation invalid at RV 121479 after 20m0s: incorrect condition: wanted UpdatingConfig=False, got UpdatingConfig=True: AsExpected(Updating config in progress. Target config: 79ea1b65)
nodepool_additionalTrustBundlePropagation_test.go:92: *v1beta1.NodePool e2e-clusters-9h2dk/node-pool-fklq5-test-additional-trust-bundle-propagation conditions:
nodepool_additionalTrustBundlePropagation_test.go:92: ValidMachineConfig=True: AsExpected
nodepool_additionalTrustBundlePropagation_test.go:92: AutoscalingEnabled=False: AsExpected
nodepool_additionalTrustBundlePropagation_test.go:92: UpdateManagementEnabled=True: AsExpected
nodepool_additionalTrustBundlePropagation_test.go:92: ValidReleaseImage=True: AsExpected(Using release image: registry.build01.ci.openshift.org/ci-op-z8md8ktq/release@sha256:3f1b28407c3693bf0c765517feb30b0cbf82f5e4003cbde3348f8b5d57c98f2b)
nodepool_additionalTrustBundlePropagation_test.go:92: ValidArchPlatform=True: AsExpected
nodepool_additionalTrustBundlePropagation_test.go:92: ReconciliationActive=True: ReconciliationActive(Reconciliation active on resource)
nodepool_additionalTrustBundlePropagation_test.go:92: UpdatingConfig=True: AsExpected(Updating config in progress. Target config: 79ea1b65)
nodepool_additionalTrustBundlePropagation_test.go:92: UpdatingVersion=False: AsExpected
nodepool_additionalTrustBundlePropagation_test.go:92: ValidGeneratedPayload=True: AsExpected(Payload generated successfully)
nodepool_additionalTrustBundlePropagation_test.go:92: ReachedIgnitionEndpoint=True: AsExpected
nodepool_additionalTrustBundlePropagation_test.go:92: AllMachinesReady=False: Draining(1 of 2 machines are not ready Machine node-pool-fklq5-test-additional-trust-bundle-propagation-pldlfb: Draining )
nodepool_additionalTrustBundlePropagation_test.go:92: AllNodesHealthy=True: AsExpected(All is well)
nodepool_additionalTrustBundlePropagation_test.go:92: ValidPlatformImage=True: AsExpected(Bootstrap AMI is "ami-0b6b825641a2ea530")
nodepool_additionalTrustBundlePropagation_test.go:92: AWSSecurityGroupAvailable=True: AsExpected(NodePool has a default security group)
nodepool_additionalTrustBundlePropagation_test.go:92: ValidTuningConfig=True: AsExpected
nodepool_additionalTrustBundlePropagation_test.go:92: UpdatingPlatformMachineTemplate=False: AsExpected
nodepool_additionalTrustBundlePropagation_test.go:92: Ready=True: AsExpected
nodepool_additionalTrustBundlePropagation_test.go:92: AutorepairEnabled=False: AsExpected
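Taken together, the subtest's flow is: point the HostedCluster at a trust-bundle ConfigMap, then wait for the NodePool's UpdatingConfig condition to return to False once the rolling config update completes. Here it never did, because one machine stayed in Draining for the full 20m budget. A sketch of that flow, assuming the v1beta1 types and a controller-runtime client (the helper name and timings are illustrative):

// Sketch only; not the test's actual helper code.
package main

import (
	"context"
	"time"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/types"
	"k8s.io/apimachinery/pkg/util/wait"
	"sigs.k8s.io/controller-runtime/pkg/client"

	hyperv1 "github.com/openshift/hypershift/api/hypershift/v1beta1"
)

func propagateTrustBundle(ctx context.Context, c client.Client, hcKey, npKey types.NamespacedName) error {
	hc := &hyperv1.HostedCluster{}
	if err := c.Get(ctx, hcKey, hc); err != nil {
		return err
	}
	// Reference a ConfigMap (here "additional-trust-bundle", as in the log)
	// holding the extra CA bundle to roll out to nodes.
	hc.Spec.AdditionalTrustBundle = &corev1.LocalObjectReference{Name: "additional-trust-bundle"}
	if err := c.Update(ctx, hc); err != nil {
		return err
	}
	// The rollout is complete when UpdatingConfig reports False again.
	return wait.PollUntilContextTimeout(ctx, 10*time.Second, 20*time.Minute, true, func(ctx context.Context) (bool, error) {
		np := &hyperv1.NodePool{}
		if err := c.Get(ctx, npKey, np); err != nil {
			return false, nil // tolerate transient errors; ctx bounds the wait
		}
		for _, cond := range np.Status.Conditions {
			if cond.Type == hyperv1.NodePoolUpdatingConfigConditionType {
				return cond.Status == corev1.ConditionFalse, nil
			}
		}
		return false, nil
	})
}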