PR #7796 - 02-25 16:43

Job: hypershift
FAILURE

Test Summary

Total Tests: 86
Passed: 46
Failed: 26
Skipped: 14

Failed Tests

TestAutoscaling
53m13.4s
hypershift_framework.go:459: Successfully created hostedcluster e2e-clusters-sr4vv/autoscaling-zj4sf in 13s
hypershift_framework.go:128: Summarizing unexpected conditions for HostedCluster autoscaling-zj4sf
util.go:2974: Failed to wait for HostedCluster e2e-clusters-sr4vv/autoscaling-zj4sf to have valid conditions in 2s: context deadline exceeded
eventually.go:224: observed *v1beta1.HostedCluster e2e-clusters-sr4vv/autoscaling-zj4sf invalid at RV 87822 after 2s:
eventually.go:227: - incorrect condition: wanted ClusterVersionSucceeding=True, got ClusterVersionSucceeding=False: ClusterOperatorsNotAvailable(Cluster operators console, dns, image-registry, ingress, insights, kube-storage-version-migrator, monitoring, node-tuning, openshift-samples, service-ca, storage are not available)
eventually.go:227: - incorrect condition: wanted DataPlaneConnectionAvailable=True, got DataPlaneConnectionAvailable=Unknown: NoWorkerNodesAvailable(No worker nodes available)
eventually.go:227: - incorrect condition: wanted ClusterVersionAvailable=True, got ClusterVersionAvailable=False: FromClusterVersion
eventually.go:227: - incorrect condition: wanted Degraded=False, got Degraded=True: UnavailableReplicas(machine-approver deployment has 1 unavailable replicas)
eventually.go:227: - incorrect condition: wanted ClusterVersionProgressing=False, got ClusterVersionProgressing=True: ClusterOperatorsNotAvailable(Unable to apply 4.22.0-0.ci-2026-02-25-165958-test-ci-op-91p5wpdp-latest: some cluster operators are not available)
hypershift_framework.go:278: skipping postTeardown()
hypershift_framework.go:256: skipping teardown, already called
TestAutoscaling/ValidateHostedCluster
47m51.13s
util.go:291: Successfully waited for kubeconfig to be published for HostedCluster e2e-clusters-sr4vv/autoscaling-zj4sf in 1m6s
util.go:308: Successfully waited for kubeconfig secret to have data in 0s
eventually.go:104: Failed to get *v1.SelfSubjectReview: Post "https://api-autoscaling-zj4sf.service.ci.hypershift.devcluster.openshift.com:443/apis/authentication.k8s.io/v1/selfsubjectreviews": dial tcp: lookup api-autoscaling-zj4sf.service.ci.hypershift.devcluster.openshift.com on 172.30.0.10:53: no such host
eventually.go:104: Failed to get *v1.SelfSubjectReview: Post "https://api-autoscaling-zj4sf.service.ci.hypershift.devcluster.openshift.com:443/apis/authentication.k8s.io/v1/selfsubjectreviews": dial tcp 54.243.39.62:443: i/o timeout
util.go:370: Successfully waited for a successful connection to the guest API server in 1m45.125s
eventually.go:258: Failed to get **v1.Node: context deadline exceeded
util.go:565: Failed to wait for 1 nodes to become ready in 45m0s: context deadline exceeded
eventually.go:383: observed invalid **v1.Node state after 45m0s
eventually.go:400: - observed **v1.Node collection invalid: expected 1 nodes, got 0
TestCreateCluster
54m32.81s
create_cluster_test.go:2431: Sufficient zones available for InfrastructureAvailabilityPolicy HighlyAvailable
hypershift_framework.go:459: Successfully created hostedcluster e2e-clusters-hr6kl/create-cluster-h8klq in 53s
hypershift_framework.go:128: Summarizing unexpected conditions for HostedCluster create-cluster-h8klq
util.go:2974: Failed to wait for HostedCluster e2e-clusters-hr6kl/create-cluster-h8klq to have valid conditions in 2s: context deadline exceeded
eventually.go:224: observed *v1beta1.HostedCluster e2e-clusters-hr6kl/create-cluster-h8klq invalid at RV 99231 after 2s:
eventually.go:227: - incorrect condition: wanted Available=True, got Available=False: ComponentsNotAvailable(Waiting for components to be available: machine-approver)
eventually.go:227: - incorrect condition: wanted DataPlaneConnectionAvailable=True, got DataPlaneConnectionAvailable=Unknown: NoWorkerNodesAvailable(No worker nodes available)
eventually.go:227: - incorrect condition: wanted ClusterVersionAvailable=True, got ClusterVersionAvailable=False: FromClusterVersion
eventually.go:227: - incorrect condition: wanted Degraded=False, got Degraded=True: UnavailableReplicas(machine-approver deployment has 1 unavailable replicas)
eventually.go:227: - incorrect condition: wanted ClusterVersionSucceeding=True, got ClusterVersionSucceeding=False: ClusterOperatorsNotAvailable(Cluster operators console, dns, image-registry, ingress, insights, kube-storage-version-migrator, monitoring, node-tuning, openshift-samples, service-ca, storage are not available)
eventually.go:227: - incorrect condition: wanted ClusterVersionProgressing=False, got ClusterVersionProgressing=True: ClusterOperatorsNotAvailable(Unable to apply 4.22.0-0.ci-2026-02-25-165958-test-ci-op-91p5wpdp-latest: some cluster operators are not available)
hypershift_framework.go:278: skipping postTeardown()
hypershift_framework.go:256: skipping teardown, already called
TestCreateCluster/ValidateHostedCluster
48m3.05s
util.go:291: Successfully waited for kubeconfig to be published for HostedCluster e2e-clusters-hr6kl/create-cluster-h8klq in 1m9s
util.go:308: Successfully waited for kubeconfig secret to have data in 0s
eventually.go:104: Failed to get *v1.SelfSubjectReview: Post "https://api-create-cluster-h8klq.service.ci.hypershift.devcluster.openshift.com:443/apis/authentication.k8s.io/v1/selfsubjectreviews": dial tcp: lookup api-create-cluster-h8klq.service.ci.hypershift.devcluster.openshift.com on 172.30.0.10:53: no such host
eventually.go:104: Failed to get *v1.SelfSubjectReview: Post "https://api-create-cluster-h8klq.service.ci.hypershift.devcluster.openshift.com:443/apis/authentication.k8s.io/v1/selfsubjectreviews": dial tcp 52.23.74.125:443: i/o timeout
util.go:370: Successfully waited for a successful connection to the guest API server in 1m54.025s
eventually.go:258: Failed to get **v1.Node: context deadline exceeded
util.go:565: Failed to wait for 3 nodes to become ready in 45m0s: context deadline exceeded
eventually.go:383: observed invalid **v1.Node state after 45m0s
eventually.go:400: - observed **v1.Node collection invalid: expected 3 nodes, got 0
TestCreateClusterCustomConfig
54m0.16s
hypershift_framework.go:459: Successfully created hostedcluster e2e-clusters-mqgdj/custom-config-h2tzd in 19s
hypershift_framework.go:128: Summarizing unexpected conditions for HostedCluster custom-config-h2tzd
util.go:2974: Failed to wait for HostedCluster e2e-clusters-mqgdj/custom-config-h2tzd to have valid conditions in 2s: context deadline exceeded
eventually.go:224: observed *v1beta1.HostedCluster e2e-clusters-mqgdj/custom-config-h2tzd invalid at RV 99006 after 2s:
eventually.go:227: - incorrect condition: wanted ClusterVersionSucceeding=True, got ClusterVersionSucceeding=False: ClusterOperatorsNotAvailable(Cluster operators dns, kube-storage-version-migrator, monitoring, service-ca, storage are not available)
eventually.go:227: - incorrect condition: wanted Degraded=False, got Degraded=True: UnavailableReplicas(machine-approver deployment has 1 unavailable replicas)
eventually.go:227: - incorrect condition: wanted Available=True, got Available=False: ComponentsNotAvailable(Waiting for components to be available: machine-approver)
eventually.go:227: - incorrect condition: wanted ClusterVersionAvailable=True, got ClusterVersionAvailable=False: FromClusterVersion
eventually.go:227: - incorrect condition: wanted DataPlaneConnectionAvailable=True, got DataPlaneConnectionAvailable=Unknown: NoWorkerNodesAvailable(No worker nodes available)
eventually.go:227: - incorrect condition: wanted ClusterVersionProgressing=False, got ClusterVersionProgressing=True: ClusterOperatorsNotAvailable(Unable to apply 4.22.0-0.ci-2026-02-25-165958-test-ci-op-91p5wpdp-latest: some cluster operators are not available)
hypershift_framework.go:278: skipping postTeardown()
hypershift_framework.go:256: skipping teardown, already called
TestCreateClusterCustomConfig/ValidateHostedCluster
48m30.27s
util.go:291: Successfully waited for kubeconfig to be published for HostedCluster e2e-clusters-mqgdj/custom-config-h2tzd in 57s
util.go:308: Successfully waited for kubeconfig secret to have data in 25ms
eventually.go:104: Failed to get *v1.SelfSubjectReview: Post "https://api-custom-config-h2tzd.service.ci.hypershift.devcluster.openshift.com:443/apis/authentication.k8s.io/v1/selfsubjectreviews": dial tcp: lookup api-custom-config-h2tzd.service.ci.hypershift.devcluster.openshift.com on 172.30.0.10:53: no such host
eventually.go:104: Failed to get *v1.SelfSubjectReview: Post "https://api-custom-config-h2tzd.service.ci.hypershift.devcluster.openshift.com:443/apis/authentication.k8s.io/v1/selfsubjectreviews": dial tcp 100.50.0.69:443: i/o timeout
eventually.go:104: Failed to get *v1.SelfSubjectReview: Post "https://api-custom-config-h2tzd.service.ci.hypershift.devcluster.openshift.com:443/apis/authentication.k8s.io/v1/selfsubjectreviews": dial tcp 100.50.0.69:443: connect: connection refused
util.go:370: Successfully waited for a successful connection to the guest API server in 2m33.25s
eventually.go:258: Failed to get **v1.Node: context deadline exceeded
util.go:565: Failed to wait for 2 nodes to become ready in 45m0s: context deadline exceeded
eventually.go:383: observed invalid **v1.Node state after 45m0s
eventually.go:400: - observed **v1.Node collection invalid: expected 2 nodes, got 0
TestCreateClusterPrivate
38m3.38s
hypershift_framework.go:459: Successfully created hostedcluster e2e-clusters-hbxx8/private-mch4r in 45s
hypershift_framework.go:128: Summarizing unexpected conditions for HostedCluster private-mch4r
util.go:2974: Failed to wait for HostedCluster e2e-clusters-hbxx8/private-mch4r to have valid conditions in 2s: context deadline exceeded
eventually.go:224: observed *v1beta1.HostedCluster e2e-clusters-hbxx8/private-mch4r invalid at RV 87668 after 2s:
eventually.go:227: - incorrect condition: wanted Available=True, got Available=False: ComponentsNotAvailable(Waiting for components to be available: machine-approver)
eventually.go:227: - incorrect condition: wanted ClusterVersionAvailable=True, got ClusterVersionAvailable=False: FromClusterVersion
eventually.go:227: - incorrect condition: wanted DataPlaneConnectionAvailable=True, got DataPlaneConnectionAvailable=Unknown: NoWorkerNodesAvailable(No worker nodes available)
eventually.go:227: - incorrect condition: wanted Degraded=False, got Degraded=True: UnavailableReplicas(machine-approver deployment has 1 unavailable replicas)
eventually.go:227: - incorrect condition: wanted ClusterVersionProgressing=False, got ClusterVersionProgressing=True: ClusterOperatorsNotAvailable(Unable to apply 4.22.0-0.ci-2026-02-25-165958-test-ci-op-91p5wpdp-latest: some cluster operators are not available)
eventually.go:227: - incorrect condition: wanted ClusterVersionSucceeding=True, got ClusterVersionSucceeding=False: ClusterOperatorsNotAvailable(Cluster operators console, dns, image-registry, ingress, insights, kube-storage-version-migrator, monitoring, node-tuning, openshift-samples, service-ca, storage are not available)
hypershift_framework.go:278: skipping postTeardown()
hypershift_framework.go:256: skipping teardown, already called
TestCreateClusterPrivate/ValidateHostedCluster
31m6.02s
util.go:291: Successfully waited for kubeconfig to be published for HostedCluster e2e-clusters-hbxx8/private-mch4r in 1m6s
util.go:308: Successfully waited for kubeconfig secret to have data in 0s
util.go:695: Failed to wait for NodePools for HostedCluster e2e-clusters-hbxx8/private-mch4r to have all of their desired nodes in 30m0s: context deadline exceeded
eventually.go:383: observed invalid **v1beta1.NodePool state after 30m0s
eventually.go:400: - observed **v1beta1.NodePool e2e-clusters-hbxx8/private-mch4r-us-east-1c invalid: expected 2 replicas, got 0
util.go:695: *v1beta1.NodePool e2e-clusters-hbxx8/private-mch4r-us-east-1c conditions:
util.go:695: AutoscalingEnabled=False: AsExpected
util.go:695: UpdateManagementEnabled=True: AsExpected
util.go:695: ValidReleaseImage=True: AsExpected(Using release image: registry.build01.ci.openshift.org/ci-op-91p5wpdp/release@sha256:1e90f07e49d965f2828e04d30e37cb6ddfaddd0ff8da41cfa58d610dfaa24899)
util.go:695: ValidArchPlatform=True: AsExpected
util.go:695: ReconciliationActive=True: ReconciliationActive(Reconciliation active on resource)
util.go:695: SupportedVersionSkew=True: AsExpected(Release image version is valid)
util.go:695: ValidMachineConfig=True: AsExpected
util.go:695: UpdatingConfig=True: AsExpected(Updating config in progress. Target config: 994e09a1)
util.go:695: UpdatingVersion=True: AsExpected(Updating version in progress. Target version: 4.22.0-0.ci-2026-02-25-165958-test-ci-op-91p5wpdp-latest)
util.go:695: ValidGeneratedPayload=True: AsExpected(Payload generated successfully)
util.go:695: ReachedIgnitionEndpoint=True: AsExpected
util.go:695: AllMachinesReady=True: AsExpected(All is well)
util.go:695: AllNodesHealthy=False: NodeProvisioning(Machine private-mch4r-us-east-1c-nt2lw-kngvf: NodeProvisioning Machine private-mch4r-us-east-1c-nt2lw-rmvx8: NodeProvisioning )
util.go:695: ValidPlatformConfig=True: AsExpected(All is well)
util.go:695: ValidPlatformImage=True: AsExpected(Bootstrap AMI is "ami-01095d1967818437c")
util.go:695: AWSSecurityGroupAvailable=True: AsExpected(NodePool has a default security group)
util.go:695: ValidTuningConfig=True: AsExpected
util.go:695: UpdatingPlatformMachineTemplate=True: AsExpected(platform machine template update in progress. Target template: private-mch4r-us-east-1c-be8454eb)
util.go:695: AutorepairEnabled=False: AsExpected
util.go:695: Ready=False: WaitingForAvailableMachines(Minimum availability requires 2 replicas, current 0 available)
TestCreateClusterPrivateWithRouteKAS
37m28.87s
hypershift_framework.go:459: Successfully created hostedcluster e2e-clusters-kfb5x/private-dw2rt in 21s
hypershift_framework.go:128: Summarizing unexpected conditions for HostedCluster private-dw2rt
util.go:2974: Failed to wait for HostedCluster e2e-clusters-kfb5x/private-dw2rt to have valid conditions in 2s: context deadline exceeded
eventually.go:224: observed *v1beta1.HostedCluster e2e-clusters-kfb5x/private-dw2rt invalid at RV 53361 after 2s:
eventually.go:227: - incorrect condition: wanted Degraded=False, got Degraded=True: UnavailableReplicas(machine-approver deployment has 1 unavailable replicas)
eventually.go:227: - incorrect condition: wanted ClusterVersionProgressing=False, got ClusterVersionProgressing=True: ClusterOperatorsNotAvailable(Unable to apply 4.22.0-0.ci-2026-02-25-165958-test-ci-op-91p5wpdp-latest: some cluster operators are not available)
eventually.go:227: - incorrect condition: wanted ClusterVersionAvailable=True, got ClusterVersionAvailable=False: FromClusterVersion
eventually.go:227: - incorrect condition: wanted DataPlaneConnectionAvailable=True, got DataPlaneConnectionAvailable=Unknown: NoWorkerNodesAvailable(No worker nodes available)
eventually.go:227: - incorrect condition: wanted Available=True, got Available=False: ComponentsNotAvailable(Waiting for components to be available: machine-approver)
eventually.go:227: - incorrect condition: wanted ClusterVersionSucceeding=True, got ClusterVersionSucceeding=False: ClusterOperatorsNotAvailable(Cluster operators console, dns, image-registry, ingress, insights, kube-storage-version-migrator, monitoring, node-tuning, openshift-samples, service-ca, storage are not available)
hypershift_framework.go:278: skipping postTeardown()
hypershift_framework.go:256: skipping teardown, already called
TestCreateClusterPrivateWithRouteKAS/ValidateHostedCluster
30m57.01s
util.go:291: Successfully waited for kubeconfig to be published for HostedCluster e2e-clusters-kfb5x/private-dw2rt in 57s
util.go:308: Successfully waited for kubeconfig secret to have data in 0s
util.go:695: Failed to wait for NodePools for HostedCluster e2e-clusters-kfb5x/private-dw2rt to have all of their desired nodes in 30m0s: context deadline exceeded
eventually.go:383: observed invalid **v1beta1.NodePool state after 30m0s
eventually.go:400: - observed **v1beta1.NodePool e2e-clusters-kfb5x/private-dw2rt-us-east-1b invalid: expected 2 replicas, got 0
util.go:695: *v1beta1.NodePool e2e-clusters-kfb5x/private-dw2rt-us-east-1b conditions:
util.go:695: AutoscalingEnabled=False: AsExpected
util.go:695: UpdateManagementEnabled=True: AsExpected
util.go:695: ValidReleaseImage=True: AsExpected(Using release image: registry.build01.ci.openshift.org/ci-op-91p5wpdp/release@sha256:1e90f07e49d965f2828e04d30e37cb6ddfaddd0ff8da41cfa58d610dfaa24899)
util.go:695: ValidArchPlatform=True: AsExpected
util.go:695: ReconciliationActive=True: ReconciliationActive(Reconciliation active on resource)
util.go:695: SupportedVersionSkew=True: AsExpected(Release image version is valid)
util.go:695: ValidMachineConfig=True: AsExpected
util.go:695: UpdatingConfig=True: AsExpected(Updating config in progress. Target config: 296a4417)
util.go:695: UpdatingVersion=True: AsExpected(Updating version in progress. Target version: 4.22.0-0.ci-2026-02-25-165958-test-ci-op-91p5wpdp-latest)
util.go:695: ValidGeneratedPayload=True: AsExpected(Payload generated successfully)
util.go:695: ReachedIgnitionEndpoint=True: AsExpected
util.go:695: AllMachinesReady=True: AsExpected(All is well)
util.go:695: AllNodesHealthy=False: NodeProvisioning(Machine private-dw2rt-us-east-1b-49m7d-bn6d5: NodeProvisioning Machine private-dw2rt-us-east-1b-49m7d-zssff: NodeProvisioning )
util.go:695: ValidPlatformConfig=True: AsExpected(All is well)
util.go:695: ValidPlatformImage=True: AsExpected(Bootstrap AMI is "ami-01095d1967818437c")
util.go:695: AWSSecurityGroupAvailable=True: AsExpected(NodePool has a default security group)
util.go:695: ValidTuningConfig=True: AsExpected
util.go:695: UpdatingPlatformMachineTemplate=True: AsExpected(platform machine template update in progress. Target template: private-dw2rt-us-east-1b-d23c0f4f)
util.go:695: AutorepairEnabled=False: AsExpected
util.go:695: Ready=False: WaitingForAvailableMachines(Minimum availability requires 2 replicas, current 0 available)
TestCreateClusterProxy
55m36.14s
hypershift_framework.go:459: Successfully created hostedcluster e2e-clusters-zkw26/proxy-sph47 in 23s
hypershift_framework.go:128: Summarizing unexpected conditions for HostedCluster proxy-sph47
util.go:2974: Failed to wait for HostedCluster e2e-clusters-zkw26/proxy-sph47 to have valid conditions in 2s: context deadline exceeded
eventually.go:224: observed *v1beta1.HostedCluster e2e-clusters-zkw26/proxy-sph47 invalid at RV 87751 after 2s:
eventually.go:227: - incorrect condition: wanted ClusterVersionAvailable=True, got ClusterVersionAvailable=False: FromClusterVersion
eventually.go:227: - incorrect condition: wanted ClusterVersionSucceeding=True, got ClusterVersionSucceeding=False: ClusterOperatorsNotAvailable(Cluster operators console, dns, image-registry, ingress, insights, kube-storage-version-migrator, monitoring, node-tuning, openshift-samples, service-ca, storage are not available)
eventually.go:227: - incorrect condition: wanted DataPlaneConnectionAvailable=True, got DataPlaneConnectionAvailable=Unknown: NoWorkerNodesAvailable(No worker nodes available)
eventually.go:227: - incorrect condition: wanted ClusterVersionProgressing=False, got ClusterVersionProgressing=True: ClusterOperatorsNotAvailable(Unable to apply 4.22.0-0.ci-2026-02-25-165958-test-ci-op-91p5wpdp-latest: some cluster operators are not available)
eventually.go:227: - incorrect condition: wanted Available=True, got Available=False: ComponentsNotAvailable(Waiting for components to be available: machine-approver)
eventually.go:227: - incorrect condition: wanted Degraded=False, got Degraded=True: UnavailableReplicas(machine-approver deployment has 1 unavailable replicas)
hypershift_framework.go:278: skipping postTeardown()
hypershift_framework.go:256: skipping teardown, already called
TestCreateClusterProxy/ValidateHostedCluster
48m14.03s
util.go:291: Successfully waited for kubeconfig to be published for HostedCluster e2e-clusters-zkw26/proxy-sph47 in 57s
util.go:308: Successfully waited for kubeconfig secret to have data in 0s
eventually.go:104: Failed to get *v1.SelfSubjectReview: Post "https://api-proxy-sph47.service.ci.hypershift.devcluster.openshift.com:443/apis/authentication.k8s.io/v1/selfsubjectreviews": dial tcp: lookup api-proxy-sph47.service.ci.hypershift.devcluster.openshift.com on 172.30.0.10:53: no such host
eventually.go:104: Failed to get *v1.SelfSubjectReview: Post "https://api-proxy-sph47.service.ci.hypershift.devcluster.openshift.com:443/apis/authentication.k8s.io/v1/selfsubjectreviews": dial tcp 100.51.8.99:443: i/o timeout
util.go:370: Successfully waited for a successful connection to the guest API server in 2m17.025s
util.go:565: Failed to wait for 2 nodes to become ready in 45m0s: context deadline exceeded
eventually.go:383: observed invalid **v1.Node state after 45m0s
eventually.go:400: - observed **v1.Node collection invalid: expected 2 nodes, got 0
TestCreateClusterRequestServingIsolation
1h0m6.64s
requestserving.go:105: Created request serving nodepool clusters/e4d62e6a177681528a37-mgmt-reqserving-62ztl
requestserving.go:105: Created request serving nodepool clusters/e4d62e6a177681528a37-mgmt-reqserving-gsvkh
requestserving.go:113: Created non request serving nodepool clusters/e4d62e6a177681528a37-mgmt-non-reqserving-lrgr8
requestserving.go:113: Created non request serving nodepool clusters/e4d62e6a177681528a37-mgmt-non-reqserving-pdjjp
requestserving.go:113: Created non request serving nodepool clusters/e4d62e6a177681528a37-mgmt-non-reqserving-zs248
util.go:565: Successfully waited for 1 nodes to become ready for NodePool clusters/e4d62e6a177681528a37-mgmt-reqserving-62ztl in 3m48s
util.go:565: Successfully waited for 1 nodes to become ready for NodePool clusters/e4d62e6a177681528a37-mgmt-reqserving-gsvkh in 30s
util.go:565: Successfully waited for 1 nodes to become ready for NodePool clusters/e4d62e6a177681528a37-mgmt-non-reqserving-lrgr8 in 0s
util.go:565: Successfully waited for 1 nodes to become ready for NodePool clusters/e4d62e6a177681528a37-mgmt-non-reqserving-pdjjp in 45s
util.go:565: Successfully waited for 1 nodes to become ready for NodePool clusters/e4d62e6a177681528a37-mgmt-non-reqserving-zs248 in 3s
create_cluster_test.go:2610: Sufficient zones available for InfrastructureAvailabilityPolicy HighlyAvailable
hypershift_framework.go:459: Successfully created hostedcluster e2e-clusters-pw7z6/request-serving-isolation-bwndr in 12s
hypershift_framework.go:128: Summarizing unexpected conditions for HostedCluster request-serving-isolation-bwndr
util.go:2974: Failed to wait for HostedCluster e2e-clusters-pw7z6/request-serving-isolation-bwndr to have valid conditions in 2s: context deadline exceeded
eventually.go:224: observed *v1beta1.HostedCluster e2e-clusters-pw7z6/request-serving-isolation-bwndr invalid at RV 87747 after 2s:
eventually.go:227: - incorrect condition: wanted Degraded=False, got Degraded=True: UnavailableReplicas(machine-approver deployment has 1 unavailable replicas)
eventually.go:227: - incorrect condition: wanted ClusterVersionProgressing=False, got ClusterVersionProgressing=True: ClusterOperatorsNotAvailable(Unable to apply 4.22.0-0.ci-2026-02-25-165958-test-ci-op-91p5wpdp-latest: some cluster operators are not available)
eventually.go:227: - incorrect condition: wanted Available=True, got Available=False: ComponentsNotAvailable(Waiting for components to be available: machine-approver)
eventually.go:227: - incorrect condition: wanted ClusterVersionSucceeding=True, got ClusterVersionSucceeding=False: ClusterOperatorsNotAvailable(Cluster operators console, dns, image-registry, ingress, insights, kube-storage-version-migrator, monitoring, node-tuning, openshift-samples, service-ca, storage are not available)
eventually.go:227: - incorrect condition: wanted DataPlaneConnectionAvailable=True, got DataPlaneConnectionAvailable=Unknown: NoWorkerNodesAvailable(No worker nodes available)
eventually.go:227: - incorrect condition: wanted ClusterVersionAvailable=True, got ClusterVersionAvailable=False: FromClusterVersion
hypershift_framework.go:278: skipping postTeardown()
requestserving.go:132: Tearing down custom nodepool clusters/e4d62e6a177681528a37-mgmt-reqserving-62ztl
requestserving.go:132: Tearing down custom nodepool clusters/e4d62e6a177681528a37-mgmt-reqserving-gsvkh
requestserving.go:132: Tearing down custom nodepool clusters/e4d62e6a177681528a37-mgmt-non-reqserving-lrgr8
requestserving.go:132: Tearing down custom nodepool clusters/e4d62e6a177681528a37-mgmt-non-reqserving-pdjjp
requestserving.go:132: Tearing down custom nodepool clusters/e4d62e6a177681528a37-mgmt-non-reqserving-zs248
hypershift_framework.go:256: skipping teardown, already called
TestCreateClusterRequestServingIsolation/ValidateHostedCluster
48m19.03s
util.go:291: Successfully waited for kubeconfig to be published for HostedCluster e2e-clusters-pw7z6/request-serving-isolation-bwndr in 57s
util.go:308: Successfully waited for kubeconfig secret to have data in 0s
eventually.go:104: Failed to get *v1.SelfSubjectReview: Post "https://api-request-serving-isolation-bwndr.service.ci.hypershift.devcluster.openshift.com:443/apis/authentication.k8s.io/v1/selfsubjectreviews": dial tcp: lookup api-request-serving-isolation-bwndr.service.ci.hypershift.devcluster.openshift.com on 172.30.0.10:53: no such host
eventually.go:104: Failed to get *v1.SelfSubjectReview: Post "https://api-request-serving-isolation-bwndr.service.ci.hypershift.devcluster.openshift.com:443/apis/authentication.k8s.io/v1/selfsubjectreviews": dial tcp 3.223.17.248:443: i/o timeout
eventually.go:104: Failed to get *v1.SelfSubjectReview: Post "https://api-request-serving-isolation-bwndr.service.ci.hypershift.devcluster.openshift.com:443/apis/authentication.k8s.io/v1/selfsubjectreviews": dial tcp 52.54.130.71:443: i/o timeout
util.go:370: Successfully waited for a successful connection to the guest API server in 2m22.025s
eventually.go:258: Failed to get **v1.Node: context deadline exceeded
util.go:565: Failed to wait for 3 nodes to become ready in 45m0s: context deadline exceeded
eventually.go:383: observed invalid **v1.Node state after 45m0s
eventually.go:400: - observed **v1.Node collection invalid: expected 3 nodes, got 0
TestNodePool
0s
TestNodePool/HostedCluster0
28m20.23s
hypershift_framework.go:459: Successfully created hostedcluster e2e-clusters-jqxhm/node-pool-rr5gs in 26s
hypershift_framework.go:128: Summarizing unexpected conditions for HostedCluster node-pool-rr5gs
util.go:2974: Failed to wait for HostedCluster e2e-clusters-jqxhm/node-pool-rr5gs to have valid conditions in 2s: context deadline exceeded
eventually.go:224: observed *v1beta1.HostedCluster e2e-clusters-jqxhm/node-pool-rr5gs invalid at RV 76128 after 2s: incorrect condition: wanted Degraded=False, got Degraded=True: UnavailableReplicas(machine-approver deployment has 1 unavailable replicas)
hypershift_framework.go:278: skipping postTeardown()
hypershift_framework.go:256: skipping teardown, already called
TestNodePool/HostedCluster0/ValidateHostedCluster
23m44.16s
util.go:291: Successfully waited for kubeconfig to be published for HostedCluster e2e-clusters-jqxhm/node-pool-rr5gs in 57s
util.go:308: Successfully waited for kubeconfig secret to have data in 0s
eventually.go:104: Failed to get *v1.SelfSubjectReview: Post "https://api-node-pool-rr5gs.service.ci.hypershift.devcluster.openshift.com:443/apis/authentication.k8s.io/v1/selfsubjectreviews": dial tcp: lookup api-node-pool-rr5gs.service.ci.hypershift.devcluster.openshift.com on 172.30.0.10:53: no such host
eventually.go:104: Failed to get *v1.SelfSubjectReview: Post "https://api-node-pool-rr5gs.service.ci.hypershift.devcluster.openshift.com:443/apis/authentication.k8s.io/v1/selfsubjectreviews": dial tcp 44.209.77.161:443: i/o timeout
eventually.go:104: Failed to get *v1.SelfSubjectReview: Post "https://api-node-pool-rr5gs.service.ci.hypershift.devcluster.openshift.com:443/apis/authentication.k8s.io/v1/selfsubjectreviews": dial tcp 34.192.69.87:443: i/o timeout
util.go:370: Successfully waited for a successful connection to the guest API server in 2m47.125s
util.go:565: Successfully waited for 0 nodes to become ready in 25ms
eventually.go:104: Failed to get *v1beta1.HostedCluster: client rate limiter Wait returned an error: rate: Wait(n=1) would exceed context deadline
util.go:2974: Failed to wait for HostedCluster e2e-clusters-jqxhm/node-pool-rr5gs to have valid conditions in 20m0s: context deadline exceeded
eventually.go:224: observed *v1beta1.HostedCluster e2e-clusters-jqxhm/node-pool-rr5gs invalid at RV 76128 after 20m0s: incorrect condition: wanted Degraded=False, got Degraded=True: UnavailableReplicas(machine-approver deployment has 1 unavailable replicas)
TestNodePool/HostedCluster2
43m15.9s
hypershift_framework.go:459: Successfully created hostedcluster e2e-clusters-pxgzz/node-pool-7c65m in 24s
hypershift_framework.go:128: Summarizing unexpected conditions for HostedCluster node-pool-7c65m
util.go:2974: Failed to wait for HostedCluster e2e-clusters-pxgzz/node-pool-7c65m to have valid conditions in 2s: context deadline exceeded
eventually.go:224: observed *v1beta1.HostedCluster e2e-clusters-pxgzz/node-pool-7c65m invalid at RV 54015 after 2s:
eventually.go:227: - incorrect condition: wanted Available=True, got Available=False: ComponentsNotAvailable(Waiting for components to be available: machine-approver)
eventually.go:227: - incorrect condition: wanted Degraded=False, got Degraded=True: UnavailableReplicas(machine-approver deployment has 1 unavailable replicas)
hypershift_framework.go:278: skipping postTeardown()
hypershift_framework.go:256: skipping teardown, already called
TestNodePool/HostedCluster2/Teardown
19m7.69s
journals.go:208: No machines associated with infra id node-pool-7c65m were found. Skipping journal dump.
fixture.go:321: Failed to wait for infra resources in guest cluster to be deleted: context deadline exceeded
fixture.go:330: Failed to clean up 1 remaining resources for guest cluster
fixture.go:337: Resource: arn:aws:s3:::node-pool-7c65m-image-registry-us-east-1-vswjgvosnrtuwygmkrnkx, tags: red-hat-clustertype=rosa,kubernetes.io/cluster/node-pool-7c65m=owned,red-hat-managed=true,Name=node-pool-7c65m-image-registry,expirationDate=2026-02-25T21:16+00:00, service: s3
hypershift_framework.go:520: Destroyed cluster. Namespace: e2e-clusters-pxgzz, name: node-pool-7c65m
hypershift_framework.go:475: archiving /logs/artifacts/TestNodePool_HostedCluster2/hostedcluster-node-pool-7c65m to /logs/artifacts/TestNodePool_HostedCluster2/hostedcluster.tar.gz
TestNodePool/HostedCluster2/ValidateHostedCluster
23m42.05s
util.go:291: Successfully waited for kubeconfig to be published for HostedCluster e2e-clusters-pxgzz/node-pool-7c65m in 1m15s
util.go:308: Successfully waited for kubeconfig secret to have data in 0s
eventually.go:104: Failed to get *v1.SelfSubjectReview: Post "https://api-node-pool-7c65m.service.ci.hypershift.devcluster.openshift.com:443/apis/authentication.k8s.io/v1/selfsubjectreviews": dial tcp: lookup api-node-pool-7c65m.service.ci.hypershift.devcluster.openshift.com on 172.30.0.10:53: no such host
eventually.go:104: Failed to get *v1.SelfSubjectReview: Post "https://api-node-pool-7c65m.service.ci.hypershift.devcluster.openshift.com:443/apis/authentication.k8s.io/v1/selfsubjectreviews": dial tcp 100.50.58.120:443: i/o timeout
util.go:370: Successfully waited for a successful connection to the guest API server in 2m27.025s
util.go:565: Successfully waited for 0 nodes to become ready in 0s
eventually.go:104: Failed to get *v1beta1.HostedCluster: client rate limiter Wait returned an error: rate: Wait(n=1) would exceed context deadline
util.go:2974: Failed to wait for HostedCluster e2e-clusters-pxgzz/node-pool-7c65m to have valid conditions in 20m0s: context deadline exceeded
eventually.go:224: observed *v1beta1.HostedCluster e2e-clusters-pxgzz/node-pool-7c65m invalid at RV 54015 after 20m0s:
eventually.go:227: - incorrect condition: wanted Degraded=False, got Degraded=True: UnavailableReplicas(machine-approver deployment has 1 unavailable replicas)
eventually.go:227: - incorrect condition: wanted Available=True, got Available=False: ComponentsNotAvailable(Waiting for components to be available: machine-approver)
TestNodePoolAutoscalingScaleFromZero
54m31.4s
hypershift_framework.go:459: Successfully created hostedcluster e2e-clusters-2882c/scale-from-zero-7qzwp in 11s
hypershift_framework.go:128: Summarizing unexpected conditions for HostedCluster scale-from-zero-7qzwp
util.go:2974: Failed to wait for HostedCluster e2e-clusters-2882c/scale-from-zero-7qzwp to have valid conditions in 2s: context deadline exceeded
eventually.go:224: observed *v1beta1.HostedCluster e2e-clusters-2882c/scale-from-zero-7qzwp invalid at RV 91242 after 2s:
eventually.go:227: - incorrect condition: wanted ClusterVersionAvailable=True, got ClusterVersionAvailable=False: FromClusterVersion
eventually.go:227: - incorrect condition: wanted ClusterVersionProgressing=False, got ClusterVersionProgressing=True: ClusterOperatorsNotAvailable(Unable to apply 4.22.0-0.ci-2026-02-25-165958-test-ci-op-91p5wpdp-latest: some cluster operators are not available)
eventually.go:227: - incorrect condition: wanted DataPlaneConnectionAvailable=True, got DataPlaneConnectionAvailable=Unknown: NoWorkerNodesAvailable(No worker nodes available)
eventually.go:227: - incorrect condition: wanted Degraded=False, got Degraded=True: UnavailableReplicas(machine-approver deployment has 1 unavailable replicas)
eventually.go:227: - incorrect condition: wanted Available=True, got Available=False: ComponentsNotAvailable(Waiting for components to be available: machine-approver)
eventually.go:227: - incorrect condition: wanted ClusterVersionSucceeding=True, got ClusterVersionSucceeding=False: ClusterOperatorsNotAvailable(Cluster operators console, dns, image-registry, ingress, insights, kube-storage-version-migrator, monitoring, node-tuning, openshift-samples, service-ca, storage are not available)
hypershift_framework.go:278: skipping postTeardown()
hypershift_framework.go:256: skipping teardown, already called
TestNodePoolAutoscalingScaleFromZero/ValidateHostedCluster
48m6.05s
util.go:291: Successfully waited for kubeconfig to be published for HostedCluster e2e-clusters-2882c/scale-from-zero-7qzwp in 1m0s
util.go:308: Successfully waited for kubeconfig secret to have data in 0s
eventually.go:104: Failed to get *v1.SelfSubjectReview: Post "https://api-scale-from-zero-7qzwp.service.ci.hypershift.devcluster.openshift.com:443/apis/authentication.k8s.io/v1/selfsubjectreviews": dial tcp: lookup api-scale-from-zero-7qzwp.service.ci.hypershift.devcluster.openshift.com on 172.30.0.10:53: no such host
eventually.go:104: Failed to get *v1.SelfSubjectReview: Post "https://api-scale-from-zero-7qzwp.service.ci.hypershift.devcluster.openshift.com:443/apis/authentication.k8s.io/v1/selfsubjectreviews": dial tcp 52.6.233.29:443: i/o timeout
eventually.go:104: Failed to get *v1.SelfSubjectReview: Post "https://api-scale-from-zero-7qzwp.service.ci.hypershift.devcluster.openshift.com:443/apis/authentication.k8s.io/v1/selfsubjectreviews": dial tcp 3.95.118.137:443: i/o timeout
util.go:370: Successfully waited for a successful connection to the guest API server in 2m6.025s
util.go:565: Failed to wait for 1 nodes to become ready in 45m0s: context deadline exceeded
eventually.go:383: observed invalid **v1.Node state after 45m0s
eventually.go:400: - observed **v1.Node collection invalid: expected 1 nodes, got 0
TestUpgradeControlPlane
56m20.59s
control_plane_upgrade_test.go:25: Starting control plane upgrade test. FromImage: registry.build01.ci.openshift.org/ci-op-91p5wpdp/release@sha256:8bacab7e1e3dac992c35519ca1f92c971b13b7d0477c895b0b95628cd818b043, toImage: registry.build01.ci.openshift.org/ci-op-91p5wpdp/release@sha256:1e90f07e49d965f2828e04d30e37cb6ddfaddd0ff8da41cfa58d610dfaa24899
hypershift_framework.go:459: Successfully created hostedcluster e2e-clusters-9vq4t/control-plane-upgrade-nrkz2 in 21s
hypershift_framework.go:128: Summarizing unexpected conditions for HostedCluster control-plane-upgrade-nrkz2
util.go:2974: Failed to wait for HostedCluster e2e-clusters-9vq4t/control-plane-upgrade-nrkz2 to have valid conditions in 2s: context deadline exceeded
eventually.go:224: observed *v1beta1.HostedCluster e2e-clusters-9vq4t/control-plane-upgrade-nrkz2 invalid at RV 109580 after 2s: incorrect condition: wanted Degraded=False, got Degraded=True: UnavailableReplicas(machine-approver deployment has 1 unavailable replicas)
hypershift_framework.go:278: skipping postTeardown()
hypershift_framework.go:256: skipping teardown, already called
TestUpgradeControlPlane/Main
30m1.02s
util.go:291: Successfully waited for kubeconfig to be published for HostedCluster e2e-clusters-9vq4t/control-plane-upgrade-nrkz2 in 0s
util.go:308: Successfully waited for kubeconfig secret to have data in 0s
util.go:370: Successfully waited for a successful connection to the guest API server in 0s
control_plane_upgrade_test.go:52: Updating cluster image. Image: registry.build01.ci.openshift.org/ci-op-91p5wpdp/release@sha256:1e90f07e49d965f2828e04d30e37cb6ddfaddd0ff8da41cfa58d610dfaa24899
util.go:598: Successfully waited for HostedCluster e2e-clusters-9vq4t/control-plane-upgrade-nrkz2 to rollout in 0s
TestUpgradeControlPlane/Main/EnsureNoCrashingPods
40ms
util.go:780: Container machine-approver in pod machine-approver-64f56d74f-2xkm5 has a restartCount > 0 (9)
TestUpgradeControlPlane/Main/Wait_for_control_plane_components_to_complete_rollout
30m0s
eventually.go:258: Failed to get **v1beta1.ControlPlaneComponent: client rate limiter Wait returned an error: context deadline exceeded
util.go:638: Failed to wait for control plane components to complete rollout in 30m0s: context deadline exceeded
eventually.go:383: observed invalid **v1beta1.ControlPlaneComponent state after 30m0s
eventually.go:400: - observed **v1beta1.ControlPlaneComponent e2e-clusters-9vq4t-control-plane-upgrade-nrkz2/machine-approver invalid:
eventually.go:403: - incorrect condition: wanted RolloutComplete=True, got RolloutComplete=False: WaitingForRolloutComplete(Waiting for deployment machine-approver rollout to finish: 1 out of 1 new replicas have been updated)
eventually.go:403: - component machine-approver is still on version 4.22.0-0.ci-2026-02-21-140811
util.go:638: *v1beta1.ControlPlaneComponent e2e-clusters-9vq4t-control-plane-upgrade-nrkz2/machine-approver conditions:
util.go:638: Available=True: AsExpected(Deployment machine-approver is available)
util.go:638: RolloutComplete=False: WaitingForRolloutComplete(Waiting for deployment machine-approver rollout to finish: 1 out of 1 new replicas have been updated)
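Every failure above reduces to the same `incorrect condition: wanted X=..., got X=...: Reason(...)` pattern emitted by the framework's eventually.go. A minimal, hypothetical Python helper (not part of the hypershift repo; written here only to illustrate the log format) could pull those mismatches out of a raw job log for quick triage:

```python
import re

# Matches the condition-mismatch lines emitted by eventually.go, e.g.
#   incorrect condition: wanted Degraded=False, got Degraded=True: UnavailableReplicas(...)
COND_RE = re.compile(
    r"incorrect condition: wanted (?P<cond>\w+)=(?P<wanted>\w+), "
    r"got \w+=(?P<got>\w+): (?P<reason>\w+)"
)

def parse_conditions(log: str):
    """Return a (condition, wanted, got, reason) tuple per mismatch in the log."""
    return [
        (m["cond"], m["wanted"], m["got"], m["reason"])
        for m in COND_RE.finditer(log)
    ]

# Sample line taken verbatim from the TestUpgradeControlPlane output above.
sample = (
    "eventually.go:227: - incorrect condition: wanted Degraded=False, "
    "got Degraded=True: UnavailableReplicas(machine-approver deployment "
    "has 1 unavailable replicas)"
)
print(parse_conditions(sample))
# → [('Degraded', 'False', 'True', 'UnavailableReplicas')]
```

Running this over all the failed tests above surfaces the common thread immediately: every cluster reports `Degraded=True: UnavailableReplicas` for the machine-approver deployment.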