PR #6860 - 09-18 16:13

Job: hypershift
FAILURE

Test Summary

Total Tests: 91
Passed: 53
Failed: 21
Skipped: 17

Failed Tests

TestAutoscaling
54m40.54s
hypershift_framework.go:396: Successfully created hostedcluster e2e-clusters-26rll/autoscaling-gjckc in 28s
hypershift_framework.go:119: Summarizing unexpected conditions for HostedCluster autoscaling-gjckc
util.go:2721: Failed to wait for HostedCluster e2e-clusters-26rll/autoscaling-gjckc to have valid conditions in 2s: context deadline exceeded
eventually.go:224: observed *v1beta1.HostedCluster e2e-clusters-26rll/autoscaling-gjckc invalid at RV 75288 after 2s:
eventually.go:227: - incorrect condition: wanted ClusterVersionAvailable=True, got ClusterVersionAvailable=False: FromClusterVersion
eventually.go:227: - incorrect condition: wanted AWSEndpointAvailable=True, got AWSEndpointAvailable=False: AWSError(Throttling: Rate exceeded status code: 400, request id: 297ebe70-596a-4a9b-80cb-9129da97d5dd)
eventually.go:227: - incorrect condition: wanted ClusterVersionSucceeding=True, got ClusterVersionSucceeding=False: ClusterOperatorsNotAvailable(Cluster operators console, dns, image-registry, ingress, insights, kube-storage-version-migrator, monitoring, node-tuning, openshift-samples, service-ca, storage are not available)
eventually.go:227: - incorrect condition: wanted Degraded=False, got Degraded=True: UnavailableReplicas(capi-provider deployment has 2 unavailable replicas)
eventually.go:227: - incorrect condition: wanted ClusterVersionProgressing=False, got ClusterVersionProgressing=True: ClusterOperatorsNotAvailable(Unable to apply 4.21.0-0.ci-2025-09-18-164010-test-ci-op-zh88zzkl-latest: some cluster operators are not available)
hypershift_framework.go:239: skipping postTeardown()
hypershift_framework.go:220: skipping teardown, already called
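The "incorrect condition" lines above come from comparing wanted vs. observed status conditions on the HostedCluster. A minimal sketch of that kind of check, assuming only standard metav1.Conditions (illustrative, not the e2e framework's actual helper):

```go
package main

import (
	"fmt"

	"k8s.io/apimachinery/pkg/api/meta"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// mismatches reports every condition whose observed status differs from the
// wanted status, in the same "wanted X=..., got X=...: Reason" shape as the log.
func mismatches(conds []metav1.Condition, want map[string]metav1.ConditionStatus) []string {
	var out []string
	for name, wanted := range want {
		got := meta.FindStatusCondition(conds, name)
		if got == nil {
			out = append(out, fmt.Sprintf("missing condition: wanted %s=%s", name, wanted))
			continue
		}
		if got.Status != wanted {
			out = append(out, fmt.Sprintf("incorrect condition: wanted %s=%s, got %s=%s: %s",
				name, wanted, name, got.Status, got.Reason))
		}
	}
	return out
}

func main() {
	// Example input mirroring the Degraded condition reported above.
	observed := []metav1.Condition{
		{Type: "Degraded", Status: metav1.ConditionTrue, Reason: "UnavailableReplicas"},
	}
	want := map[string]metav1.ConditionStatus{"Degraded": metav1.ConditionFalse}
	for _, m := range mismatches(observed, want) {
		fmt.Println(m)
	}
}
```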
TestAutoscaling/ValidateHostedCluster
33m6.03s
util.go:259: Successfully waited for kubeconfig to be published for HostedCluster e2e-clusters-26rll/autoscaling-gjckc in 1m18s
util.go:276: Successfully waited for kubeconfig secret to have data in 0s
eventually.go:104: Failed to get *v1.SelfSubjectReview: Post "https://api-autoscaling-gjckc.service.ci.hypershift.devcluster.openshift.com:443/apis/authentication.k8s.io/v1/selfsubjectreviews": dial tcp: lookup api-autoscaling-gjckc.service.ci.hypershift.devcluster.openshift.com on 172.30.0.10:53: no such host
eventually.go:104: Failed to get *v1.SelfSubjectReview: Post "https://api-autoscaling-gjckc.service.ci.hypershift.devcluster.openshift.com:443/apis/authentication.k8s.io/v1/selfsubjectreviews": dial tcp 98.89.65.163:443: i/o timeout
eventually.go:104: Failed to get *v1.SelfSubjectReview: Post "https://api-autoscaling-gjckc.service.ci.hypershift.devcluster.openshift.com:443/apis/authentication.k8s.io/v1/selfsubjectreviews": dial tcp 3.84.160.245:443: i/o timeout
util.go:330: Successfully waited for a successful connection to the guest API server in 1m48.025s
eventually.go:258: Failed to get **v1.Node: context deadline exceeded
util.go:513: Failed to wait for 1 nodes to become ready in 30m0s: context deadline exceeded
eventually.go:383: observed invalid **v1.Node state after 30m0s
eventually.go:400: - observed **v1.Node collection invalid: expected 1 nodes, got 0
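Every ValidateHostedCluster failure below bottoms out in the same node-readiness wait: the guest API server eventually answers, but no node ever registers. A minimal sketch of the readiness count that "Failed to wait for N nodes to become ready" is built on, assuming plain client-go types rather than the util.go helpers:

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

// readyNodes counts nodes whose NodeReady condition is True; the e2e wait
// only succeeds once this count reaches the expected replica count.
func readyNodes(nodes []corev1.Node) int {
	ready := 0
	for _, n := range nodes {
		for _, c := range n.Status.Conditions {
			if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
				ready++
				break
			}
		}
	}
	return ready
}

func main() {
	// With no Machine ever acquiring a NodeRef, the guest cluster lists zero
	// nodes, so any wait for 1, 2, or 3 ready nodes can only hit its deadline.
	fmt.Println(readyNodes(nil)) // prints 0
}
```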
TestCreateCluster
48m56.31s
create_cluster_test.go:1832: Sufficient zones available for InfrastructureAvailabilityPolicy HighlyAvailable
hypershift_framework.go:396: Successfully created hostedcluster e2e-clusters-2n7fd/create-cluster-shnm7 in 44s
hypershift_framework.go:119: Summarizing unexpected conditions for HostedCluster create-cluster-shnm7
util.go:2721: Failed to wait for HostedCluster e2e-clusters-2n7fd/create-cluster-shnm7 to have valid conditions in 2s: context deadline exceeded
eventually.go:224: observed *v1beta1.HostedCluster e2e-clusters-2n7fd/create-cluster-shnm7 invalid at RV 75395 after 2s:
eventually.go:227: - incorrect condition: wanted ClusterVersionProgressing=False, got ClusterVersionProgressing=True: ClusterOperatorsNotAvailable(Unable to apply 4.21.0-0.ci-2025-09-18-164010-test-ci-op-zh88zzkl-latest: some cluster operators are not available)
eventually.go:227: - incorrect condition: wanted ClusterVersionSucceeding=True, got ClusterVersionSucceeding=False: ClusterOperatorsNotAvailable(Cluster operators console, dns, image-registry, ingress, insights, kube-storage-version-migrator, monitoring, node-tuning, openshift-samples, service-ca, storage are not available)
eventually.go:227: - incorrect condition: wanted Degraded=False, got Degraded=True: UnavailableReplicas(capi-provider deployment has 2 unavailable replicas)
eventually.go:227: - incorrect condition: wanted ClusterVersionAvailable=True, got ClusterVersionAvailable=False: FromClusterVersion
hypershift_framework.go:239: skipping postTeardown()
hypershift_framework.go:220: skipping teardown, already called
TestCreateCluster/ValidateHostedCluster
33m41.06s
util.go:259: Successfully waited for kubeconfig to be published for HostedCluster e2e-clusters-2n7fd/create-cluster-shnm7 in 1m42s
util.go:276: Successfully waited for kubeconfig secret to have data in 0s
eventually.go:104: Failed to get *v1.SelfSubjectReview: Post "https://api-create-cluster-shnm7.service.ci.hypershift.devcluster.openshift.com:443/apis/authentication.k8s.io/v1/selfsubjectreviews": dial tcp: lookup api-create-cluster-shnm7.service.ci.hypershift.devcluster.openshift.com on 172.30.0.10:53: no such host
eventually.go:104: Failed to get *v1.SelfSubjectReview: Post "https://api-create-cluster-shnm7.service.ci.hypershift.devcluster.openshift.com:443/apis/authentication.k8s.io/v1/selfsubjectreviews": dial tcp 50.19.2.56:443: i/o timeout
eventually.go:104: Failed to get *v1.SelfSubjectReview: Post "https://api-create-cluster-shnm7.service.ci.hypershift.devcluster.openshift.com:443/apis/authentication.k8s.io/v1/selfsubjectreviews": dial tcp 98.88.203.87:443: i/o timeout
eventually.go:104: Failed to get *v1.SelfSubjectReview: Post "https://api-create-cluster-shnm7.service.ci.hypershift.devcluster.openshift.com:443/apis/authentication.k8s.io/v1/selfsubjectreviews": dial tcp 50.19.2.56:443: i/o timeout
util.go:330: Successfully waited for a successful connection to the guest API server in 1m59.05s
eventually.go:258: Failed to get **v1.Node: context deadline exceeded
util.go:513: Failed to wait for 3 nodes to become ready in 30m0s: context deadline exceeded
eventually.go:383: observed invalid **v1.Node state after 30m0s
eventually.go:400: - observed **v1.Node collection invalid: expected 3 nodes, got 0
TestCreateClusterCustomConfig
48m51.74s
hypershift_framework.go:396: Successfully created hostedcluster e2e-clusters-2gx7z/custom-config-nt4l5 in 53s
hypershift_framework.go:119: Summarizing unexpected conditions for HostedCluster custom-config-nt4l5
util.go:2721: Failed to wait for HostedCluster e2e-clusters-2gx7z/custom-config-nt4l5 to have valid conditions in 2s: context deadline exceeded
eventually.go:224: observed *v1beta1.HostedCluster e2e-clusters-2gx7z/custom-config-nt4l5 invalid at RV 77650 after 2s:
eventually.go:227: - incorrect condition: wanted AWSEndpointAvailable=True, got AWSEndpointAvailable=False: AWSError(Throttling: Rate exceeded status code: 400, request id: bbdae92e-8010-44c0-a2e9-bc961a54d05a)
eventually.go:227: - incorrect condition: wanted ClusterVersionSucceeding=True, got ClusterVersionSucceeding=False: ClusterOperatorsNotAvailable(Cluster operators dns, kube-storage-version-migrator, monitoring, service-ca, storage are not available)
eventually.go:227: - incorrect condition: wanted ClusterVersionAvailable=True, got ClusterVersionAvailable=False: FromClusterVersion
eventually.go:227: - incorrect condition: wanted Degraded=False, got Degraded=True: UnavailableReplicas(capi-provider deployment has 2 unavailable replicas)
eventually.go:227: - incorrect condition: wanted ClusterVersionProgressing=False, got ClusterVersionProgressing=True: ClusterOperatorsNotAvailable(Unable to apply 4.21.0-0.ci-2025-09-18-164010-test-ci-op-zh88zzkl-latest: some cluster operators are not available)
hypershift_framework.go:239: skipping postTeardown()
hypershift_framework.go:220: skipping teardown, already called
TestCreateClusterCustomConfig/ValidateHostedCluster
34m45.03s
util.go:259: Successfully waited for kubeconfig to be published for HostedCluster e2e-clusters-2gx7z/custom-config-nt4l5 in 2m18s
util.go:276: Successfully waited for kubeconfig secret to have data in 0s
eventually.go:104: Failed to get *v1.SelfSubjectReview: Post "https://api-custom-config-nt4l5.service.ci.hypershift.devcluster.openshift.com:443/apis/authentication.k8s.io/v1/selfsubjectreviews": dial tcp: lookup api-custom-config-nt4l5.service.ci.hypershift.devcluster.openshift.com on 172.30.0.10:53: no such host
eventually.go:104: Failed to get *v1.SelfSubjectReview: Post "https://api-custom-config-nt4l5.service.ci.hypershift.devcluster.openshift.com:443/apis/authentication.k8s.io/v1/selfsubjectreviews": dial tcp 3.210.24.159:443: i/o timeout
eventually.go:104: Failed to get *v1.SelfSubjectReview: Post "https://api-custom-config-nt4l5.service.ci.hypershift.devcluster.openshift.com:443/apis/authentication.k8s.io/v1/selfsubjectreviews": dial tcp 98.89.141.251:443: i/o timeout
eventually.go:104: Failed to get *v1.SelfSubjectReview: Post "https://api-custom-config-nt4l5.service.ci.hypershift.devcluster.openshift.com:443/apis/authentication.k8s.io/v1/selfsubjectreviews": dial tcp 3.210.24.159:443: connect: connection refused
eventually.go:104: Failed to get *v1.SelfSubjectReview: Post "https://api-custom-config-nt4l5.service.ci.hypershift.devcluster.openshift.com:443/apis/authentication.k8s.io/v1/selfsubjectreviews": dial tcp 98.89.141.251:443: connect: connection refused
eventually.go:104: Failed to get *v1.SelfSubjectReview: Post "https://api-custom-config-nt4l5.service.ci.hypershift.devcluster.openshift.com:443/apis/authentication.k8s.io/v1/selfsubjectreviews": dial tcp 3.210.24.159:443: connect: connection refused
eventually.go:104: Failed to get *v1.SelfSubjectReview: Post "https://api-custom-config-nt4l5.service.ci.hypershift.devcluster.openshift.com:443/apis/authentication.k8s.io/v1/selfsubjectreviews": dial tcp 98.89.141.251:443: connect: connection refused
eventually.go:104: Failed to get *v1.SelfSubjectReview: Post "https://api-custom-config-nt4l5.service.ci.hypershift.devcluster.openshift.com:443/apis/authentication.k8s.io/v1/selfsubjectreviews": dial tcp 3.210.24.159:443: connect: connection refused
eventually.go:104: Failed to get *v1.SelfSubjectReview: Post "https://api-custom-config-nt4l5.service.ci.hypershift.devcluster.openshift.com:443/apis/authentication.k8s.io/v1/selfsubjectreviews": dial tcp 98.89.141.251:443: connect: connection refused
eventually.go:104: Failed to get *v1.SelfSubjectReview: Post "https://api-custom-config-nt4l5.service.ci.hypershift.devcluster.openshift.com:443/apis/authentication.k8s.io/v1/selfsubjectreviews": dial tcp 3.210.24.159:443: connect: connection refused
eventually.go:104: Failed to get *v1.SelfSubjectReview: Post "https://api-custom-config-nt4l5.service.ci.hypershift.devcluster.openshift.com:443/apis/authentication.k8s.io/v1/selfsubjectreviews": dial tcp 50.16.145.147:443: connect: connection refused
eventually.go:104: Failed to get *v1.SelfSubjectReview: Post "https://api-custom-config-nt4l5.service.ci.hypershift.devcluster.openshift.com:443/apis/authentication.k8s.io/v1/selfsubjectreviews": dial tcp 3.210.24.159:443: connect: connection refused
util.go:330: Successfully waited for a successful connection to the guest API server in 2m27.025s
util.go:513: Failed to wait for 2 nodes to become ready in 30m0s: context deadline exceeded
eventually.go:383: observed invalid **v1.Node state after 30m0s
eventually.go:400: - observed **v1.Node collection invalid: expected 2 nodes, got 0
TestCreateClusterPrivate
52m1.6s
hypershift_framework.go:396: Successfully created hostedcluster e2e-clusters-lk2gv/private-bd278 in 46s
hypershift_framework.go:119: Summarizing unexpected conditions for HostedCluster private-bd278
util.go:2721: Failed to wait for HostedCluster e2e-clusters-lk2gv/private-bd278 to have valid conditions in 2s: context deadline exceeded
eventually.go:224: observed *v1beta1.HostedCluster e2e-clusters-lk2gv/private-bd278 invalid at RV 75193 after 2s:
eventually.go:227: - incorrect condition: wanted Degraded=False, got Degraded=True: UnavailableReplicas(capi-provider deployment has 2 unavailable replicas)
eventually.go:227: - incorrect condition: wanted ClusterVersionSucceeding=True, got ClusterVersionSucceeding=False: ClusterOperatorsNotAvailable(Cluster operators console, dns, image-registry, ingress, insights, kube-storage-version-migrator, monitoring, node-tuning, openshift-samples, service-ca, storage are not available)
eventually.go:227: - incorrect condition: wanted ClusterVersionAvailable=True, got ClusterVersionAvailable=False: FromClusterVersion
eventually.go:227: - incorrect condition: wanted ClusterVersionProgressing=False, got ClusterVersionProgressing=True: ClusterOperatorsNotAvailable(Unable to apply 4.21.0-0.ci-2025-09-18-164010-test-ci-op-zh88zzkl-latest: some cluster operators are not available)
hypershift_framework.go:239: skipping postTeardown()
hypershift_framework.go:220: skipping teardown, already called
TestCreateClusterPrivate/ValidateHostedCluster
32m39.01s
util.go:259: Successfully waited for kubeconfig to be published for HostedCluster e2e-clusters-lk2gv/private-bd278 in 2m39s
util.go:276: Successfully waited for kubeconfig secret to have data in 0s
util.go:643: Failed to wait for NodePools for HostedCluster e2e-clusters-lk2gv/private-bd278 to have all of their desired nodes in 30m0s: context deadline exceeded
eventually.go:383: observed invalid **v1beta1.NodePool state after 30m0s
eventually.go:400: - observed **v1beta1.NodePool e2e-clusters-lk2gv/private-bd278-us-east-1a invalid: expected 2 replicas, got 0
util.go:643: *v1beta1.NodePool e2e-clusters-lk2gv/private-bd278-us-east-1a conditions:
util.go:643: UpdateManagementEnabled=True: AsExpected
util.go:643: ValidArchPlatform=True: AsExpected
util.go:643: ValidReleaseImage=True: AsExpected(Using release image: registry.build01.ci.openshift.org/ci-op-zh88zzkl/release@sha256:a37301a3118491783f3edf0a2f73a99c9acc49b5eb63b9dd2b8d34406a6efdfd)
util.go:643: AllMachinesReady=False: WaitingForInfrastructure(2 of 2 machines are not ready Machine private-bd278-us-east-1a-9lwq6-2xqqx: WaitingForInfrastructure: Machine private-bd278-us-east-1a-9lwq6-h7cb8: WaitingForInfrastructure: )
util.go:643: AllNodesHealthy=False: WaitingForNodeRef(Machine private-bd278-us-east-1a-9lwq6-2xqqx: WaitingForNodeRef Machine private-bd278-us-east-1a-9lwq6-h7cb8: WaitingForNodeRef )
util.go:643: ReconciliationActive=True: ReconciliationActive(Reconciliation active on resource)
util.go:643: AutoscalingEnabled=False: AsExpected
util.go:643: ValidPlatformConfig=True: AsExpected(All is well)
util.go:643: UpdatingVersion=True: AsExpected(Updating version in progress. Target version: 4.21.0-0.ci-2025-09-18-164010-test-ci-op-zh88zzkl-latest)
util.go:643: ValidMachineConfig=True: AsExpected
util.go:643: ValidGeneratedPayload=True: AsExpected(Payload generated successfully)
util.go:643: ReachedIgnitionEndpoint=False: ignitionNotReached
util.go:643: UpdatingConfig=True: AsExpected(Updating config in progress. Target config: 0d33dd1c)
util.go:643: ValidPlatformImage=True: AsExpected(Bootstrap AMI is "ami-09d23adad19cdb25c")
util.go:643: AWSSecurityGroupAvailable=True: AsExpected(NodePool has a default security group)
util.go:643: ValidTuningConfig=True: AsExpected
util.go:643: UpdatingPlatformMachineTemplate=True: AsExpected(platform machine template update in progress. Target template: private-bd278-us-east-1a-27c4e354)
util.go:643: AutorepairEnabled=False: AsExpected
util.go:643: Ready=False: WaitingForAvailableMachines(Minimum availability requires 2 replicas, current 0 available)
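The 30m0s "context deadline exceeded" messages are the signature of a bounded poll: the check is retried on an interval until it either succeeds or the surrounding context times out. A minimal sketch of that pattern using apimachinery's wait package (an assumed shape, not the actual util.go code; the 3-second timeout stands in for the test's 30 minutes):

```go
package main

import (
	"context"
	"fmt"
	"time"

	"k8s.io/apimachinery/pkg/util/wait"
)

func main() {
	// nodesReady stands in for "do all NodePools have their desired nodes?".
	// Here it never succeeds, mirroring the 0-of-2 replicas reported above.
	nodesReady := func(ctx context.Context) (bool, error) {
		return false, nil
	}

	// The real test uses a 30-minute budget; a few seconds keeps the sketch quick.
	err := wait.PollUntilContextTimeout(context.Background(), time.Second, 3*time.Second, true, nodesReady)
	fmt.Println(err) // a deadline-exceeded error, matching the log above
}
```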
TestCreateClusterPrivateWithRouteKAS
52m46.22s
hypershift_framework.go:396: Successfully created hostedcluster e2e-clusters-kjp5v/private-x52jl in 25s
hypershift_framework.go:119: Summarizing unexpected conditions for HostedCluster private-x52jl
util.go:2721: Failed to wait for HostedCluster e2e-clusters-kjp5v/private-x52jl to have valid conditions in 2s: context deadline exceeded
eventually.go:224: observed *v1beta1.HostedCluster e2e-clusters-kjp5v/private-x52jl invalid at RV 73478 after 2s:
eventually.go:227: - incorrect condition: wanted ClusterVersionProgressing=False, got ClusterVersionProgressing=True: ClusterOperatorsNotAvailable(Unable to apply 4.21.0-0.ci-2025-09-18-164010-test-ci-op-zh88zzkl-latest: some cluster operators are not available)
eventually.go:227: - incorrect condition: wanted ClusterVersionSucceeding=True, got ClusterVersionSucceeding=False: ClusterOperatorsNotAvailable(Cluster operators console, dns, image-registry, ingress, insights, kube-storage-version-migrator, monitoring, node-tuning, openshift-samples, service-ca, storage are not available)
eventually.go:227: - incorrect condition: wanted Degraded=False, got Degraded=True: UnavailableReplicas(capi-provider deployment has 2 unavailable replicas)
eventually.go:227: - incorrect condition: wanted ClusterVersionAvailable=True, got ClusterVersionAvailable=False: FromClusterVersion
hypershift_framework.go:239: skipping postTeardown()
hypershift_framework.go:220: skipping teardown, already called
TestCreateClusterPrivateWithRouteKAS/ValidateHostedCluster
31m21.01s
util.go:259: Successfully waited for kubeconfig to be published for HostedCluster e2e-clusters-kjp5v/private-x52jl in 1m21s
util.go:276: Successfully waited for kubeconfig secret to have data in 0s
eventually.go:258: Failed to get **v1beta1.NodePool: client rate limiter Wait returned an error: context deadline exceeded
util.go:643: Failed to wait for NodePools for HostedCluster e2e-clusters-kjp5v/private-x52jl to have all of their desired nodes in 30m0s: context deadline exceeded
eventually.go:383: observed invalid **v1beta1.NodePool state after 30m0s
eventually.go:400: - observed **v1beta1.NodePool e2e-clusters-kjp5v/private-x52jl-us-east-1c invalid: expected 2 replicas, got 0
util.go:643: *v1beta1.NodePool e2e-clusters-kjp5v/private-x52jl-us-east-1c conditions:
util.go:643: AutoscalingEnabled=False: AsExpected
util.go:643: ValidReleaseImage=True: AsExpected(Using release image: registry.build01.ci.openshift.org/ci-op-zh88zzkl/release@sha256:a37301a3118491783f3edf0a2f73a99c9acc49b5eb63b9dd2b8d34406a6efdfd)
util.go:643: ReconciliationActive=True: ReconciliationActive(Reconciliation active on resource)
util.go:643: UpdateManagementEnabled=True: AsExpected
util.go:643: ValidArchPlatform=True: AsExpected
util.go:643: UpdatingVersion=True: AsExpected(Updating version in progress. Target version: 4.21.0-0.ci-2025-09-18-164010-test-ci-op-zh88zzkl-latest)
util.go:643: AllMachinesReady=False: WaitingForInfrastructure(2 of 2 machines are not ready Machine private-x52jl-us-east-1c-w65jw-c6k5s: WaitingForInfrastructure: Machine private-x52jl-us-east-1c-w65jw-hlkxv: WaitingForInfrastructure: )
util.go:643: AllNodesHealthy=False: WaitingForNodeRef(Machine private-x52jl-us-east-1c-w65jw-c6k5s: WaitingForNodeRef Machine private-x52jl-us-east-1c-w65jw-hlkxv: WaitingForNodeRef )
util.go:643: ValidPlatformConfig=True: AsExpected(All is well)
util.go:643: ValidMachineConfig=True: AsExpected
util.go:643: UpdatingConfig=True: AsExpected(Updating config in progress. Target config: 69b9cb30)
util.go:643: ValidGeneratedPayload=True: AsExpected(Payload generated successfully)
util.go:643: ReachedIgnitionEndpoint=False: ignitionNotReached
util.go:643: ValidPlatformImage=True: AsExpected(Bootstrap AMI is "ami-09d23adad19cdb25c")
util.go:643: AWSSecurityGroupAvailable=True: AsExpected(NodePool has a default security group)
util.go:643: ValidTuningConfig=True: AsExpected
util.go:643: UpdatingPlatformMachineTemplate=True: AsExpected(platform machine template update in progress. Target template: private-x52jl-us-east-1c-2838346e)
util.go:643: AutorepairEnabled=False: AsExpected
util.go:643: Ready=False: WaitingForAvailableMachines(Minimum availability requires 2 replicas, current 0 available)
TestCreateClusterProxy
1h6m1.9s
hypershift_framework.go:396: Successfully created hostedcluster e2e-clusters-6fdcl/proxy-7t86s in 37s
hypershift_framework.go:119: Summarizing unexpected conditions for HostedCluster proxy-7t86s
util.go:2721: Failed to wait for HostedCluster e2e-clusters-6fdcl/proxy-7t86s to have valid conditions in 2s: context deadline exceeded
eventually.go:224: observed *v1beta1.HostedCluster e2e-clusters-6fdcl/proxy-7t86s invalid at RV 76182 after 2s:
eventually.go:227: - incorrect condition: wanted ClusterVersionProgressing=False, got ClusterVersionProgressing=True: ClusterOperatorsNotAvailable(Unable to apply 4.21.0-0.ci-2025-09-18-164010-test-ci-op-zh88zzkl-latest: some cluster operators are not available)
eventually.go:227: - incorrect condition: wanted ClusterVersionSucceeding=True, got ClusterVersionSucceeding=False: ClusterOperatorsNotAvailable(Cluster operators console, dns, image-registry, ingress, insights, kube-storage-version-migrator, monitoring, node-tuning, openshift-samples, service-ca, storage are not available)
eventually.go:227: - incorrect condition: wanted ClusterVersionAvailable=True, got ClusterVersionAvailable=False: FromClusterVersion
eventually.go:227: - incorrect condition: wanted Degraded=False, got Degraded=True: UnavailableReplicas(capi-provider deployment has 2 unavailable replicas)
eventually.go:227: - incorrect condition: wanted AWSEndpointAvailable=True, got AWSEndpointAvailable=False: AWSError(Throttling: Rate exceeded status code: 400, request id: ae683be0-f5b5-44b3-9cc3-d5b120c92dbe)
hypershift_framework.go:239: skipping postTeardown()
hypershift_framework.go:220: skipping teardown, already called
TestCreateClusterProxy/ValidateHostedCluster
33m19.03s
util.go:259: Successfully waited for kubeconfig to be published for HostedCluster e2e-clusters-6fdcl/proxy-7t86s in 1m42s
util.go:276: Successfully waited for kubeconfig secret to have data in 0s
eventually.go:104: Failed to get *v1.SelfSubjectReview: Post "https://api-proxy-7t86s.service.ci.hypershift.devcluster.openshift.com:443/apis/authentication.k8s.io/v1/selfsubjectreviews": dial tcp: lookup api-proxy-7t86s.service.ci.hypershift.devcluster.openshift.com on 172.30.0.10:53: no such host
eventually.go:104: Failed to get *v1.SelfSubjectReview: Post "https://api-proxy-7t86s.service.ci.hypershift.devcluster.openshift.com:443/apis/authentication.k8s.io/v1/selfsubjectreviews": dial tcp 54.83.4.109:443: i/o timeout
eventually.go:104: Failed to get *v1.SelfSubjectReview: Post "https://api-proxy-7t86s.service.ci.hypershift.devcluster.openshift.com:443/apis/authentication.k8s.io/v1/selfsubjectreviews": dial tcp 44.212.164.27:443: i/o timeout
util.go:330: Successfully waited for a successful connection to the guest API server in 1m37.025s
eventually.go:258: Failed to get **v1.Node: context deadline exceeded
util.go:513: Failed to wait for 2 nodes to become ready in 30m0s: context deadline exceeded
eventually.go:383: observed invalid **v1.Node state after 30m0s
eventually.go:400: - observed **v1.Node collection invalid: expected 2 nodes, got 0
TestCreateClusterRequestServingIsolation
1h5m42.68s
requestserving.go:105: Created request serving nodepool clusters/524045dd5d42635723ac-mgmt-reqserving-cqr8s
requestserving.go:105: Created request serving nodepool clusters/524045dd5d42635723ac-mgmt-reqserving-gdn6m
requestserving.go:113: Created non request serving nodepool clusters/524045dd5d42635723ac-mgmt-non-reqserving-lhtnq
requestserving.go:113: Created non request serving nodepool clusters/524045dd5d42635723ac-mgmt-non-reqserving-qqxwr
requestserving.go:113: Created non request serving nodepool clusters/524045dd5d42635723ac-mgmt-non-reqserving-zs56q
util.go:513: Successfully waited for 1 nodes to become ready for NodePool clusters/524045dd5d42635723ac-mgmt-reqserving-cqr8s in 3m24s
util.go:513: Successfully waited for 1 nodes to become ready for NodePool clusters/524045dd5d42635723ac-mgmt-reqserving-gdn6m in 27s
util.go:513: Successfully waited for 1 nodes to become ready for NodePool clusters/524045dd5d42635723ac-mgmt-non-reqserving-lhtnq in 0s
util.go:513: Successfully waited for 1 nodes to become ready for NodePool clusters/524045dd5d42635723ac-mgmt-non-reqserving-qqxwr in 48s
util.go:513: Successfully waited for 1 nodes to become ready for NodePool clusters/524045dd5d42635723ac-mgmt-non-reqserving-zs56q in 100ms
create_cluster_test.go:1975: Sufficient zones available for InfrastructureAvailabilityPolicy HighlyAvailable
hypershift_framework.go:396: Successfully created hostedcluster e2e-clusters-sksbz/request-serving-isolation-bmzk5 in 16s
hypershift_framework.go:119: Summarizing unexpected conditions for HostedCluster request-serving-isolation-bmzk5
util.go:2721: Failed to wait for HostedCluster e2e-clusters-sksbz/request-serving-isolation-bmzk5 to have valid conditions in 2s: context deadline exceeded
eventually.go:224: observed *v1beta1.HostedCluster e2e-clusters-sksbz/request-serving-isolation-bmzk5 invalid at RV 84154 after 2s:
eventually.go:227: - incorrect condition: wanted ClusterVersionSucceeding=True, got ClusterVersionSucceeding=False: ClusterOperatorsNotAvailable(Cluster operators console, dns, image-registry, ingress, insights, kube-storage-version-migrator, monitoring, node-tuning, openshift-samples, service-ca, storage are not available)
eventually.go:227: - incorrect condition: wanted Degraded=False, got Degraded=True: UnavailableReplicas(capi-provider deployment has 2 unavailable replicas)
eventually.go:227: - incorrect condition: wanted ClusterVersionProgressing=False, got ClusterVersionProgressing=True: ClusterOperatorsNotAvailable(Unable to apply 4.21.0-0.ci-2025-09-18-164010-test-ci-op-zh88zzkl-latest: some cluster operators are not available)
eventually.go:227: - incorrect condition: wanted ClusterVersionAvailable=True, got ClusterVersionAvailable=False: FromClusterVersion
eventually.go:227: - incorrect condition: wanted AWSEndpointAvailable=True, got AWSEndpointAvailable=False: AWSError(Throttling: Rate exceeded status code: 400, request id: dc6ffb3a-afba-4652-a2ca-311c1bc027dc)
hypershift_framework.go:239: skipping postTeardown()
requestserving.go:132: Tearing down custom nodepool clusters/524045dd5d42635723ac-mgmt-reqserving-cqr8s
requestserving.go:132: Tearing down custom nodepool clusters/524045dd5d42635723ac-mgmt-reqserving-gdn6m
requestserving.go:132: Tearing down custom nodepool clusters/524045dd5d42635723ac-mgmt-non-reqserving-lhtnq
requestserving.go:132: Tearing down custom nodepool clusters/524045dd5d42635723ac-mgmt-non-reqserving-qqxwr
requestserving.go:132: Tearing down custom nodepool clusters/524045dd5d42635723ac-mgmt-non-reqserving-zs56q
hypershift_framework.go:220: skipping teardown, already called
TestCreateClusterRequestServingIsolation/ValidateHostedCluster
33m33.04s
util.go:259: Successfully waited for kubeconfig to be published for HostedCluster e2e-clusters-sksbz/request-serving-isolation-bmzk5 in 1m57s
util.go:276: Successfully waited for kubeconfig secret to have data in 0s
eventually.go:104: Failed to get *v1.SelfSubjectReview: Post "https://api-request-serving-isolation-bmzk5.service.ci.hypershift.devcluster.openshift.com:443/apis/authentication.k8s.io/v1/selfsubjectreviews": dial tcp: lookup api-request-serving-isolation-bmzk5.service.ci.hypershift.devcluster.openshift.com on 172.30.0.10:53: no such host
eventually.go:104: Failed to get *v1.SelfSubjectReview: Post "https://api-request-serving-isolation-bmzk5.service.ci.hypershift.devcluster.openshift.com:443/apis/authentication.k8s.io/v1/selfsubjectreviews": dial tcp 98.87.78.246:443: i/o timeout
util.go:330: Successfully waited for a successful connection to the guest API server in 1m36.025s
util.go:513: Failed to wait for 3 nodes to become ready in 30m0s: context deadline exceeded
eventually.go:383: observed invalid **v1.Node state after 30m0s
eventually.go:400: - observed **v1.Node collection invalid: expected 3 nodes, got 0
TestNodePool
0s
TestNodePool/HostedCluster0
19m17.48s
hypershift_framework.go:396: Successfully created hostedcluster e2e-clusters-4b6np/node-pool-rd99g in 36s
hypershift_framework.go:119: Summarizing unexpected conditions for HostedCluster node-pool-rd99g
util.go:2721: Failed to wait for HostedCluster e2e-clusters-4b6np/node-pool-rd99g to have valid conditions in 2s: context deadline exceeded
eventually.go:224: observed *v1beta1.HostedCluster e2e-clusters-4b6np/node-pool-rd99g invalid at RV 45226 after 2s: incorrect condition: wanted Degraded=False, got Degraded=True: UnavailableReplicas(capi-provider deployment has 2 unavailable replicas)
hypershift_framework.go:239: skipping postTeardown()
hypershift_framework.go:220: skipping teardown, already called
TestNodePool/HostedCluster0/ValidateHostedCluster
12m59.08s
util.go:259: Successfully waited for kubeconfig to be published for HostedCluster e2e-clusters-4b6np/node-pool-rd99g in 1m42s
util.go:276: Successfully waited for kubeconfig secret to have data in 0s
eventually.go:104: Failed to get *v1.SelfSubjectReview: Post "https://api-node-pool-rd99g.service.ci.hypershift.devcluster.openshift.com:443/apis/authentication.k8s.io/v1/selfsubjectreviews": dial tcp: lookup api-node-pool-rd99g.service.ci.hypershift.devcluster.openshift.com on 172.30.0.10:53: no such host
eventually.go:104: Failed to get *v1.SelfSubjectReview: Post "https://api-node-pool-rd99g.service.ci.hypershift.devcluster.openshift.com:443/apis/authentication.k8s.io/v1/selfsubjectreviews": dial tcp 18.210.147.45:443: i/o timeout
util.go:330: Successfully waited for a successful connection to the guest API server in 1m17.05s
util.go:513: Successfully waited for 0 nodes to become ready in 0s
util.go:2721: Failed to wait for HostedCluster e2e-clusters-4b6np/node-pool-rd99g to have valid conditions in 10m0s: context deadline exceeded
eventually.go:224: observed *v1beta1.HostedCluster e2e-clusters-4b6np/node-pool-rd99g invalid at RV 45226 after 10m0s: incorrect condition: wanted Degraded=False, got Degraded=True: UnavailableReplicas(capi-provider deployment has 2 unavailable replicas)
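The one condition failing on every HostedCluster is Degraded=True with reason UnavailableReplicas on the capi-provider deployment. A hedged sketch of how such a message could be derived from a Deployment's status (illustrative only; the operator's real logic may differ):

```go
package main

import (
	"fmt"

	appsv1 "k8s.io/api/apps/v1"
)

// degradedMessage mirrors the shape of the UnavailableReplicas message above:
// a deployment is flagged while status.unavailableReplicas is non-zero.
func degradedMessage(d appsv1.Deployment) string {
	if d.Status.UnavailableReplicas == 0 {
		return ""
	}
	return fmt.Sprintf("%s deployment has %d unavailable replicas", d.Name, d.Status.UnavailableReplicas)
}

func main() {
	var d appsv1.Deployment
	d.Name = "capi-provider"
	d.Status.UnavailableReplicas = 2
	fmt.Println(degradedMessage(d)) // capi-provider deployment has 2 unavailable replicas
}
```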
TestNodePool/HostedCluster2
49m47.66s
hypershift_framework.go:396: Successfully created hostedcluster e2e-clusters-46wv2/node-pool-gtdnk in 41s
hypershift_framework.go:119: Summarizing unexpected conditions for HostedCluster node-pool-gtdnk
util.go:2721: Failed to wait for HostedCluster e2e-clusters-46wv2/node-pool-gtdnk to have valid conditions in 2s: context deadline exceeded
eventually.go:224: observed *v1beta1.HostedCluster e2e-clusters-46wv2/node-pool-gtdnk invalid at RV 42544 after 2s: incorrect condition: wanted Degraded=False, got Degraded=True: UnavailableReplicas(capi-provider deployment has 2 unavailable replicas)
hypershift_framework.go:239: skipping postTeardown()
hypershift_framework.go:220: skipping teardown, already called
TestNodePool/HostedCluster2/ValidateHostedCluster
13m18.07s
util.go:259: Successfully waited for kubeconfig to be published for HostedCluster e2e-clusters-46wv2/node-pool-gtdnk in 1m45s
util.go:276: Successfully waited for kubeconfig secret to have data in 0s
eventually.go:104: Failed to get *v1.SelfSubjectReview: Post "https://api-node-pool-gtdnk.service.ci.hypershift.devcluster.openshift.com:443/apis/authentication.k8s.io/v1/selfsubjectreviews": dial tcp: lookup api-node-pool-gtdnk.service.ci.hypershift.devcluster.openshift.com on 172.30.0.10:53: no such host
eventually.go:104: Failed to get *v1.SelfSubjectReview: Post "https://api-node-pool-gtdnk.service.ci.hypershift.devcluster.openshift.com:443/apis/authentication.k8s.io/v1/selfsubjectreviews": dial tcp 54.85.149.13:443: i/o timeout
util.go:330: Successfully waited for a successful connection to the guest API server in 1m33.05s
util.go:513: Successfully waited for 0 nodes to become ready in 0s
util.go:2721: Failed to wait for HostedCluster e2e-clusters-46wv2/node-pool-gtdnk to have valid conditions in 10m0s: context deadline exceeded
eventually.go:224: observed *v1beta1.HostedCluster e2e-clusters-46wv2/node-pool-gtdnk invalid at RV 42544 after 10m0s: incorrect condition: wanted Degraded=False, got Degraded=True: UnavailableReplicas(capi-provider deployment has 2 unavailable replicas)
TestUpgradeControlPlane
58m14.02s
control_plane_upgrade_test.go:27: Starting control plane upgrade test. FromImage: registry.build01.ci.openshift.org/ci-op-zh88zzkl/release@sha256:205f5a9c06c60fd599e922ca2f6a4fbfd2eeac8bc0fd774b6e092c4dfbd5c0c7, toImage: registry.build01.ci.openshift.org/ci-op-zh88zzkl/release@sha256:a37301a3118491783f3edf0a2f73a99c9acc49b5eb63b9dd2b8d34406a6efdfd
hypershift_framework.go:396: Successfully created hostedcluster e2e-clusters-kbk64/control-plane-upgrade-w5xws in 23s
hypershift_framework.go:119: Summarizing unexpected conditions for HostedCluster control-plane-upgrade-w5xws
util.go:2721: Failed to wait for HostedCluster e2e-clusters-kbk64/control-plane-upgrade-w5xws to have valid conditions in 2s: context deadline exceeded
eventually.go:224: observed *v1beta1.HostedCluster e2e-clusters-kbk64/control-plane-upgrade-w5xws invalid at RV 77269 after 2s:
eventually.go:227: - incorrect condition: wanted ClusterVersionAvailable=True, got ClusterVersionAvailable=False: FromClusterVersion
eventually.go:227: - incorrect condition: wanted AWSEndpointAvailable=True, got AWSEndpointAvailable=False: AWSError(Throttling: Rate exceeded status code: 400, request id: ce4672f2-b4b1-4c7c-bd6f-89497b432ef0)
eventually.go:227: - incorrect condition: wanted ClusterVersionSucceeding=True, got ClusterVersionSucceeding=False: ClusterOperatorsNotAvailable(Cluster operators console, dns, image-registry, ingress, insights, kube-storage-version-migrator, monitoring, node-tuning, openshift-samples, service-ca, storage are not available)
eventually.go:227: - incorrect condition: wanted ClusterVersionProgressing=False, got ClusterVersionProgressing=True: ClusterOperatorsNotAvailable(Unable to apply 4.21.0-0.ci-2025-09-18-021925: some cluster operators are not available)
eventually.go:227: - incorrect condition: wanted Degraded=False, got Degraded=True: UnavailableReplicas(capi-provider deployment has 2 unavailable replicas)
hypershift_framework.go:239: skipping postTeardown()
hypershift_framework.go:220: skipping teardown, already called
TestUpgradeControlPlane/ValidateHostedCluster
34m45.05s
util.go:259: Successfully waited for kubeconfig to be published for HostedCluster e2e-clusters-kbk64/control-plane-upgrade-w5xws in 2m3s
util.go:276: Successfully waited for kubeconfig secret to have data in 0s
eventually.go:104: Failed to get *v1.SelfSubjectReview: Post "https://api-control-plane-upgrade-w5xws.service.ci.hypershift.devcluster.openshift.com:443/apis/authentication.k8s.io/v1/selfsubjectreviews": dial tcp: lookup api-control-plane-upgrade-w5xws.service.ci.hypershift.devcluster.openshift.com on 172.30.0.10:53: no such host
eventually.go:104: Failed to get *v1.SelfSubjectReview: Post "https://api-control-plane-upgrade-w5xws.service.ci.hypershift.devcluster.openshift.com:443/apis/authentication.k8s.io/v1/selfsubjectreviews": dial tcp 52.206.232.229:443: i/o timeout
eventually.go:104: Failed to get *v1.SelfSubjectReview: Post "https://api-control-plane-upgrade-w5xws.service.ci.hypershift.devcluster.openshift.com:443/apis/authentication.k8s.io/v1/selfsubjectreviews": dial tcp 98.89.97.190:443: i/o timeout
eventually.go:104: Failed to get *v1.SelfSubjectReview: Post "https://api-control-plane-upgrade-w5xws.service.ci.hypershift.devcluster.openshift.com:443/apis/authentication.k8s.io/v1/selfsubjectreviews": dial tcp 98.89.79.99:443: i/o timeout
eventually.go:104: Failed to get *v1.SelfSubjectReview: Post "https://api-control-plane-upgrade-w5xws.service.ci.hypershift.devcluster.openshift.com:443/apis/authentication.k8s.io/v1/selfsubjectreviews": dial tcp 52.206.232.229:443: connect: connection refused
eventually.go:104: Failed to get *v1.SelfSubjectReview: Post "https://api-control-plane-upgrade-w5xws.service.ci.hypershift.devcluster.openshift.com:443/apis/authentication.k8s.io/v1/selfsubjectreviews": dial tcp 98.89.97.190:443: connect: connection refused
eventually.go:104: Failed to get *v1.SelfSubjectReview: Post "https://api-control-plane-upgrade-w5xws.service.ci.hypershift.devcluster.openshift.com:443/apis/authentication.k8s.io/v1/selfsubjectreviews": dial tcp 52.206.232.229:443: connect: connection refused
eventually.go:104: Failed to get *v1.SelfSubjectReview: Post "https://api-control-plane-upgrade-w5xws.service.ci.hypershift.devcluster.openshift.com:443/apis/authentication.k8s.io/v1/selfsubjectreviews": dial tcp 98.89.97.190:443: connect: connection refused
eventually.go:104: Failed to get *v1.SelfSubjectReview: Post "https://api-control-plane-upgrade-w5xws.service.ci.hypershift.devcluster.openshift.com:443/apis/authentication.k8s.io/v1/selfsubjectreviews": dial tcp 52.206.232.229:443: connect: connection refused
eventually.go:104: Failed to get *v1.SelfSubjectReview: Post "https://api-control-plane-upgrade-w5xws.service.ci.hypershift.devcluster.openshift.com:443/apis/authentication.k8s.io/v1/selfsubjectreviews": dial tcp 98.89.97.190:443: connect: connection refused
eventually.go:104: Failed to get *v1.SelfSubjectReview: Post "https://api-control-plane-upgrade-w5xws.service.ci.hypershift.devcluster.openshift.com:443/apis/authentication.k8s.io/v1/selfsubjectreviews": dial tcp 52.206.232.229:443: connect: connection refused
eventually.go:104: Failed to get *v1.SelfSubjectReview: Post "https://api-control-plane-upgrade-w5xws.service.ci.hypershift.devcluster.openshift.com:443/apis/authentication.k8s.io/v1/selfsubjectreviews": dial tcp 98.89.79.99:443: connect: connection refused
eventually.go:104: Failed to get *v1.SelfSubjectReview: Post "https://api-control-plane-upgrade-w5xws.service.ci.hypershift.devcluster.openshift.com:443/apis/authentication.k8s.io/v1/selfsubjectreviews": dial tcp 98.89.97.190:443: connect: connection refused
eventually.go:104: Failed to get *v1.SelfSubjectReview: Post "https://api-control-plane-upgrade-w5xws.service.ci.hypershift.devcluster.openshift.com:443/apis/authentication.k8s.io/v1/selfsubjectreviews": dial tcp 52.206.232.229:443: connect: connection refused
eventually.go:104: Failed to get *v1.SelfSubjectReview: Post "https://api-control-plane-upgrade-w5xws.service.ci.hypershift.devcluster.openshift.com:443/apis/authentication.k8s.io/v1/selfsubjectreviews": dial tcp 98.89.79.99:443: connect: connection refused
util.go:330: Successfully waited for a successful connection to the guest API server in 2m42.025s
util.go:513: Failed to wait for 2 nodes to become ready in 30m0s: context deadline exceeded
eventually.go:383: observed invalid **v1.Node state after 30m0s
eventually.go:400: - observed **v1.Node collection invalid: expected 2 nodes, got 0