PR #7590 - 02-24 10:52

Job: hypershift
FAILURE

Test Summary

Total Tests: 157
Passed:       96
Failed:       41
Skipped:      20

Failed Tests

TestAutoscaling
45m43.47s
hypershift_framework.go:459: Successfully created hostedcluster e2e-clusters-sfb2f/autoscaling-75psc in 29s
hypershift_framework.go:128: Summarizing unexpected conditions for HostedCluster autoscaling-75psc
util.go:2974: Successfully waited for HostedCluster e2e-clusters-sfb2f/autoscaling-75psc to have valid conditions in 0s
hypershift_framework.go:278: skipping postTeardown()
hypershift_framework.go:256: skipping teardown, already called
TestAutoscaling/ValidateHostedCluster
29m51.36s
util.go:291: Successfully waited for kubeconfig to be published for HostedCluster e2e-clusters-sfb2f/autoscaling-75psc in 1m6s
util.go:308: Successfully waited for kubeconfig secret to have data in 0s
eventually.go:104: Failed to get *v1.SelfSubjectReview: Post "https://api-autoscaling-75psc.service.ci.hypershift.devcluster.openshift.com:443/apis/authentication.k8s.io/v1/selfsubjectreviews": dial tcp: lookup api-autoscaling-75psc.service.ci.hypershift.devcluster.openshift.com on 172.30.0.10:53: no such host
eventually.go:104: Failed to get *v1.SelfSubjectReview: Post "https://api-autoscaling-75psc.service.ci.hypershift.devcluster.openshift.com:443/apis/authentication.k8s.io/v1/selfsubjectreviews": dial tcp 34.231.12.77:443: i/o timeout
eventually.go:104: Failed to get *v1.SelfSubjectReview: Post "https://api-autoscaling-75psc.service.ci.hypershift.devcluster.openshift.com:443/apis/authentication.k8s.io/v1/selfsubjectreviews": dial tcp 44.194.99.161:443: i/o timeout
util.go:370: Successfully waited for a successful connection to the guest API server in 2m24.025s
util.go:565: Successfully waited for 1 nodes to become ready in 10m15s
eventually.go:104: Failed to get *v1beta1.HostedCluster: Get "https://a80dbb95897334d959afdb8a435afe53-cf3797a37e99bd7b.elb.us-east-1.amazonaws.com:6443/apis/hypershift.openshift.io/v1beta1/namespaces/e2e-clusters-sfb2f/hostedclusters/autoscaling-75psc?timeout=5m0s": dial tcp 52.2.147.43:6443: i/o timeout - error from a previous attempt: http2: client connection lost
eventually.go:104: Failed to get *v1beta1.HostedCluster: Get "https://a80dbb95897334d959afdb8a435afe53-cf3797a37e99bd7b.elb.us-east-1.amazonaws.com:6443/apis/hypershift.openshift.io/v1beta1/namespaces/e2e-clusters-sfb2f/hostedclusters/autoscaling-75psc?timeout=5m0s": dial tcp 52.2.147.43:6443: i/o timeout
eventually.go:104: Failed to get *v1beta1.HostedCluster: Get "https://a80dbb95897334d959afdb8a435afe53-cf3797a37e99bd7b.elb.us-east-1.amazonaws.com:6443/apis/hypershift.openshift.io/v1beta1/namespaces/e2e-clusters-sfb2f/hostedclusters/autoscaling-75psc?timeout=5m0s": dial tcp 52.2.147.43:6443: connect: connection refused
eventually.go:104: Failed to get *v1beta1.HostedCluster: the server was unable to return a response in the time allotted, but may still be processing the request (get hostedclusters.hypershift.openshift.io autoscaling-75psc)
util.go:598: Successfully waited for HostedCluster e2e-clusters-sfb2f/autoscaling-75psc to rollout in 15m6s
util.go:2974: Successfully waited for HostedCluster e2e-clusters-sfb2f/autoscaling-75psc to have valid conditions in 0s
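Note: the SelfSubjectReview errors above walk through a typical DNS publish lag: first "no such host" against the in-cluster resolver (172.30.0.10:53), then connect timeouts once the record exists but the endpoint is not yet serving. A minimal Go sketch of the same lookup; only the hostname is taken from the log, the program itself is illustrative:

package main

import (
	"fmt"
	"net"
)

func main() {
	// Hostname copied from the failure above; a "no such host" error here
	// reproduces the first phase, before the DNS record was published.
	host := "api-autoscaling-75psc.service.ci.hypershift.devcluster.openshift.com"
	addrs, err := net.LookupHost(host)
	if err != nil {
		fmt.Println("lookup failed:", err)
		return
	}
	fmt.Println("resolved:", addrs)
}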
TestAutoscaling/ValidateHostedCluster/EnsureGuestWebhooksValidated
1m0.01s
util.go:1870: failed to ensure guest webhooks validated, violating webhook test-webhook was not deleted: context deadline exceeded
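The 1m0.01s duration matches the test's context deadline: the violating ValidatingWebhookConfiguration was still present when time ran out. A hedged client-go sketch of the underlying existence check; the webhook name comes from the log, while the kubeconfig path is a placeholder:

package main

import (
	"context"
	"fmt"

	apierrors "k8s.io/apimachinery/pkg/api/errors"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// "guest.kubeconfig" is a placeholder for the hosted cluster's kubeconfig.
	cfg, err := clientcmd.BuildConfigFromFlags("", "guest.kubeconfig")
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	// The test passes once this returns NotFound; the failure above means the
	// object was still there at the deadline.
	_, err = client.AdmissionregistrationV1().ValidatingWebhookConfigurations().
		Get(context.TODO(), "test-webhook", metav1.GetOptions{})
	switch {
	case apierrors.IsNotFound(err):
		fmt.Println("test-webhook is gone, as expected")
	case err != nil:
		fmt.Println("lookup error:", err)
	default:
		fmt.Println("test-webhook still present")
	}
}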
TestAutoscaling/ValidateHostedCluster/EnsureNoCrashingPods
80ms
util.go:780: Container aws-ebs-csi-driver-operator in pod aws-ebs-csi-driver-operator-954547465-jrmb5 has a restartCount > 0 (5)
util.go:780: Container manager in pod capi-provider-6566c78646-97486 has a restartCount > 0 (8)
util.go:780: Container manager in pod cluster-api-8485fd97c6-rz99w has a restartCount > 0 (6)
util.go:780: Container cluster-storage-operator in pod cluster-storage-operator-f74fcd98b-vkmbk has a restartCount > 0 (5)
util.go:780: Container control-plane-operator in pod control-plane-operator-55c69f68d8-7578w has a restartCount > 0 (10)
util.go:780: Container control-plane-pki-operator in pod control-plane-pki-operator-69469c76b8-89rhk has a restartCount > 0 (4)
util.go:780: Container csi-snapshot-controller-operator in pod csi-snapshot-controller-operator-d55d97fc-pth7g has a restartCount > 0 (5)
util.go:780: Container hosted-cluster-config-operator in pod hosted-cluster-config-operator-745c66cc56-vm4kg has a restartCount > 0 (6)
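The same restartCount pattern recurs for nearly every control-plane pod in the later entries below, which points at a shared disruption (the management API server outages visible in the dial errors) rather than isolated crash loops. A hedged client-go sketch of the same enumeration run out-of-band; the namespace is a placeholder for the hosted control-plane namespace:

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Placeholder: the hosted control-plane namespace is conventionally
	// <hc-namespace>-<hc-name>, e.g. e2e-clusters-sfb2f-autoscaling-75psc.
	const namespace = "e2e-clusters-sfb2f-autoscaling-75psc"

	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	pods, err := client.CoreV1().Pods(namespace).List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	// Flag any container that has restarted, mirroring the util.go:780 check.
	for _, pod := range pods.Items {
		for _, cs := range pod.Status.ContainerStatuses {
			if cs.RestartCount > 0 {
				fmt.Printf("%s/%s restarted %d times\n", pod.Name, cs.Name, cs.RestartCount)
			}
		}
	}
}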
TestCreateCluster
48m27.3s
create_cluster_test.go:2624: Sufficient zones available for InfrastructureAvailabilityPolicy HighlyAvailable
hypershift_framework.go:459: Successfully created hostedcluster e2e-clusters-52thc/create-cluster-kp7xr in 32s
hypershift_framework.go:128: Summarizing unexpected conditions for HostedCluster create-cluster-kp7xr
util.go:2974: Failed to wait for HostedCluster e2e-clusters-52thc/create-cluster-kp7xr to have valid conditions in 2s: context deadline exceeded
eventually.go:224: observed *v1beta1.HostedCluster e2e-clusters-52thc/create-cluster-kp7xr invalid at RV 126162 after 2s: incorrect condition: wanted ClusterVersionSucceeding=True, got ClusterVersionSucceeding=False: ClusterOperatorDegraded(Cluster operator storage is degraded)
hypershift_framework.go:278: skipping postTeardown()
hypershift_framework.go:256: skipping teardown, already called
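This summary and the ValidateHostedCluster subtest below fail the same way: the util.go:2974 conditions poll rides out its context (2s here, the full 10m0s below) while the storage ClusterOperator stays degraded, so ClusterVersionSucceeding never turns True. A hedged sketch of that polling shape using apimachinery's wait helpers; checkConditions is a hypothetical stand-in for the test's real predicate:

package main

import (
	"context"
	"fmt"
	"time"

	"k8s.io/apimachinery/pkg/util/wait"
)

// checkConditions is hypothetical: it would fetch the HostedCluster and report
// whether ClusterVersionSucceeding=True, returning (false, nil) to keep polling.
func checkConditions(ctx context.Context) (bool, error) {
	return false, nil
}

func main() {
	// Poll every 10s for up to 10m, mirroring the "in 10m0s: context deadline
	// exceeded" budget in the log below.
	err := wait.PollUntilContextTimeout(context.Background(), 10*time.Second, 10*time.Minute, true, checkConditions)
	if err != nil {
		// A predicate that never succeeds surfaces exactly this deadline error.
		fmt.Println("wait failed:", err)
	}
}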
TestCreateCluster/ValidateHostedCluster
40m3.06s
util.go:291: Successfully waited for kubeconfig to be published for HostedCluster e2e-clusters-52thc/create-cluster-kp7xr in 1m6s
util.go:308: Successfully waited for kubeconfig secret to have data in 0s
eventually.go:104: Failed to get *v1.SelfSubjectReview: Post "https://api-create-cluster-kp7xr.service.ci.hypershift.devcluster.openshift.com:443/apis/authentication.k8s.io/v1/selfsubjectreviews": dial tcp: lookup api-create-cluster-kp7xr.service.ci.hypershift.devcluster.openshift.com on 172.30.0.10:53: no such host
eventually.go:104: Failed to get *v1.SelfSubjectReview: Post "https://api-create-cluster-kp7xr.service.ci.hypershift.devcluster.openshift.com:443/apis/authentication.k8s.io/v1/selfsubjectreviews": dial tcp 3.228.249.117:443: i/o timeout
eventually.go:104: Failed to get *v1.SelfSubjectReview: Post "https://api-create-cluster-kp7xr.service.ci.hypershift.devcluster.openshift.com:443/apis/authentication.k8s.io/v1/selfsubjectreviews": dial tcp 100.49.249.94:443: i/o timeout
util.go:370: Successfully waited for a successful connection to the guest API server in 2m30.025s
util.go:565: Successfully waited for 3 nodes to become ready in 11m6s
eventually.go:104: Failed to get *v1beta1.HostedCluster: Get "https://a80dbb95897334d959afdb8a435afe53-cf3797a37e99bd7b.elb.us-east-1.amazonaws.com:6443/apis/hypershift.openshift.io/v1beta1/namespaces/e2e-clusters-52thc/hostedclusters/create-cluster-kp7xr?timeout=5m0s": dial tcp 52.2.147.43:6443: i/o timeout - error from a previous attempt: http2: client connection lost
eventually.go:104: Failed to get *v1beta1.HostedCluster: Get "https://a80dbb95897334d959afdb8a435afe53-cf3797a37e99bd7b.elb.us-east-1.amazonaws.com:6443/apis/hypershift.openshift.io/v1beta1/namespaces/e2e-clusters-52thc/hostedclusters/create-cluster-kp7xr?timeout=5m0s": net/http: TLS handshake timeout
eventually.go:104: Failed to get *v1beta1.HostedCluster: Get "https://a80dbb95897334d959afdb8a435afe53-cf3797a37e99bd7b.elb.us-east-1.amazonaws.com:6443/apis/hypershift.openshift.io/v1beta1/namespaces/e2e-clusters-52thc/hostedclusters/create-cluster-kp7xr?timeout=5m0s": dial tcp 52.2.147.43:6443: connect: connection refused
eventually.go:104: Failed to get *v1beta1.HostedCluster: Get "https://a80dbb95897334d959afdb8a435afe53-cf3797a37e99bd7b.elb.us-east-1.amazonaws.com:6443/apis/hypershift.openshift.io/v1beta1/namespaces/e2e-clusters-52thc/hostedclusters/create-cluster-kp7xr?timeout=5m0s": dial tcp 52.2.147.43:6443: i/o timeout
eventually.go:104: Failed to get *v1beta1.HostedCluster: Get "https://a80dbb95897334d959afdb8a435afe53-cf3797a37e99bd7b.elb.us-east-1.amazonaws.com:6443/apis/hypershift.openshift.io/v1beta1/namespaces/e2e-clusters-52thc/hostedclusters/create-cluster-kp7xr?timeout=5m0s": dial tcp 52.2.147.43:6443: connect: connection refused
eventually.go:104: Failed to get *v1beta1.HostedCluster: Get "https://a80dbb95897334d959afdb8a435afe53-cf3797a37e99bd7b.elb.us-east-1.amazonaws.com:6443/apis/hypershift.openshift.io/v1beta1/namespaces/e2e-clusters-52thc/hostedclusters/create-cluster-kp7xr?timeout=5m0s": dial tcp 52.2.147.43:6443: i/o timeout
eventually.go:104: Failed to get *v1beta1.HostedCluster: Get "https://a80dbb95897334d959afdb8a435afe53-cf3797a37e99bd7b.elb.us-east-1.amazonaws.com:6443/apis/hypershift.openshift.io/v1beta1/namespaces/e2e-clusters-52thc/hostedclusters/create-cluster-kp7xr?timeout=5m0s": dial tcp 52.2.147.43:6443: connect: connection refused
eventually.go:104: Failed to get *v1beta1.HostedCluster: Get "https://a80dbb95897334d959afdb8a435afe53-cf3797a37e99bd7b.elb.us-east-1.amazonaws.com:6443/apis/hypershift.openshift.io/v1beta1/namespaces/e2e-clusters-52thc/hostedclusters/create-cluster-kp7xr?timeout=5m0s": dial tcp 52.2.147.43:6443: i/o timeout
eventually.go:104: Failed to get *v1beta1.HostedCluster: the server was unable to return a response in the time allotted, but may still be processing the request (get hostedclusters.hypershift.openshift.io create-cluster-kp7xr)
util.go:598: Successfully waited for HostedCluster e2e-clusters-52thc/create-cluster-kp7xr to rollout in 15m21s
eventually.go:104: Failed to get *v1beta1.HostedCluster: Get "https://a80dbb95897334d959afdb8a435afe53-cf3797a37e99bd7b.elb.us-east-1.amazonaws.com:6443/apis/hypershift.openshift.io/v1beta1/namespaces/e2e-clusters-52thc/hostedclusters/create-cluster-kp7xr?timeout=5m0s": dial tcp 52.2.147.43:6443: connect: connection refused - error from a previous attempt: http2: client connection lost
eventually.go:104: Failed to get *v1beta1.HostedCluster: Get "https://a80dbb95897334d959afdb8a435afe53-cf3797a37e99bd7b.elb.us-east-1.amazonaws.com:6443/apis/hypershift.openshift.io/v1beta1/namespaces/e2e-clusters-52thc/hostedclusters/create-cluster-kp7xr?timeout=5m0s": dial tcp 52.2.147.43:6443: connect: connection refused
util.go:2974: Failed to wait for HostedCluster e2e-clusters-52thc/create-cluster-kp7xr to have valid conditions in 10m0s: context deadline exceeded
eventually.go:224: observed *v1beta1.HostedCluster e2e-clusters-52thc/create-cluster-kp7xr invalid at RV 126162 after 10m0s: incorrect condition: wanted ClusterVersionSucceeding=True, got ClusterVersionSucceeding=False: ClusterOperatorDegraded(Cluster operator storage is degraded)
TestCreateClusterCustomConfig
44m27.14s
hypershift_framework.go:459: Successfully created hostedcluster e2e-clusters-rgghl/custom-config-kb5xn in 12s
hypershift_framework.go:128: Summarizing unexpected conditions for HostedCluster custom-config-kb5xn
util.go:2974: Successfully waited for HostedCluster e2e-clusters-rgghl/custom-config-kb5xn to have valid conditions in 25ms
hypershift_framework.go:278: skipping postTeardown()
hypershift_framework.go:256: skipping teardown, already called
TestCreateClusterCustomConfig/ValidateHostedCluster
32m41.58s
util.go:291: Successfully waited for kubeconfig to be published for HostedCluster e2e-clusters-rgghl/custom-config-kb5xn in 1m3s
util.go:308: Successfully waited for kubeconfig secret to have data in 0s
eventually.go:104: Failed to get *v1.SelfSubjectReview: Post "https://api-custom-config-kb5xn.service.ci.hypershift.devcluster.openshift.com:443/apis/authentication.k8s.io/v1/selfsubjectreviews": dial tcp: lookup api-custom-config-kb5xn.service.ci.hypershift.devcluster.openshift.com on 172.30.0.10:53: no such host
eventually.go:104: Failed to get *v1.SelfSubjectReview: Post "https://api-custom-config-kb5xn.service.ci.hypershift.devcluster.openshift.com:443/apis/authentication.k8s.io/v1/selfsubjectreviews": dial tcp 34.228.201.9:443: i/o timeout
eventually.go:104: Failed to get *v1.SelfSubjectReview: Post "https://api-custom-config-kb5xn.service.ci.hypershift.devcluster.openshift.com:443/apis/authentication.k8s.io/v1/selfsubjectreviews": dial tcp 98.94.248.13:443: i/o timeout
eventually.go:104: Failed to get *v1.SelfSubjectReview: Post "https://api-custom-config-kb5xn.service.ci.hypershift.devcluster.openshift.com:443/apis/authentication.k8s.io/v1/selfsubjectreviews": dial tcp 34.228.201.9:443: connect: connection refused
eventually.go:104: Failed to get *v1.SelfSubjectReview: Post "https://api-custom-config-kb5xn.service.ci.hypershift.devcluster.openshift.com:443/apis/authentication.k8s.io/v1/selfsubjectreviews": dial tcp 98.94.248.13:443: i/o timeout
eventually.go:104: Failed to get *v1.SelfSubjectReview: Post "https://api-custom-config-kb5xn.service.ci.hypershift.devcluster.openshift.com:443/apis/authentication.k8s.io/v1/selfsubjectreviews": dial tcp 34.228.201.9:443: connect: connection refused
util.go:370: Successfully waited for a successful connection to the guest API server in 3m3.025s
util.go:565: Successfully waited for 2 nodes to become ready in 10m15s
eventually.go:104: Failed to get *v1beta1.HostedCluster: Get "https://a80dbb95897334d959afdb8a435afe53-cf3797a37e99bd7b.elb.us-east-1.amazonaws.com:6443/apis/hypershift.openshift.io/v1beta1/namespaces/e2e-clusters-rgghl/hostedclusters/custom-config-kb5xn?timeout=5m0s": dial tcp 52.2.147.43:6443: connect: connection refused - error from a previous attempt: http2: client connection lost
eventually.go:104: Failed to get *v1beta1.HostedCluster: Get "https://a80dbb95897334d959afdb8a435afe53-cf3797a37e99bd7b.elb.us-east-1.amazonaws.com:6443/apis/hypershift.openshift.io/v1beta1/namespaces/e2e-clusters-rgghl/hostedclusters/custom-config-kb5xn?timeout=5m0s": net/http: TLS handshake timeout
eventually.go:104: Failed to get *v1beta1.HostedCluster: Get "https://a80dbb95897334d959afdb8a435afe53-cf3797a37e99bd7b.elb.us-east-1.amazonaws.com:6443/apis/hypershift.openshift.io/v1beta1/namespaces/e2e-clusters-rgghl/hostedclusters/custom-config-kb5xn?timeout=5m0s": dial tcp 52.2.147.43:6443: i/o timeout
eventually.go:104: Failed to get *v1beta1.HostedCluster: Get "https://a80dbb95897334d959afdb8a435afe53-cf3797a37e99bd7b.elb.us-east-1.amazonaws.com:6443/apis/hypershift.openshift.io/v1beta1/namespaces/e2e-clusters-rgghl/hostedclusters/custom-config-kb5xn?timeout=5m0s": net/http: TLS handshake timeout
eventually.go:104: Failed to get *v1beta1.HostedCluster: Get "https://a80dbb95897334d959afdb8a435afe53-cf3797a37e99bd7b.elb.us-east-1.amazonaws.com:6443/apis/hypershift.openshift.io/v1beta1/namespaces/e2e-clusters-rgghl/hostedclusters/custom-config-kb5xn?timeout=5m0s": dial tcp 52.2.147.43:6443: i/o timeout
eventually.go:104: Failed to get *v1beta1.HostedCluster: Get "https://a80dbb95897334d959afdb8a435afe53-cf3797a37e99bd7b.elb.us-east-1.amazonaws.com:6443/apis/hypershift.openshift.io/v1beta1/namespaces/e2e-clusters-rgghl/hostedclusters/custom-config-kb5xn?timeout=5m0s": dial tcp 52.2.147.43:6443: connect: connection refused
eventually.go:104: Failed to get *v1beta1.HostedCluster: Get "https://a80dbb95897334d959afdb8a435afe53-cf3797a37e99bd7b.elb.us-east-1.amazonaws.com:6443/apis/hypershift.openshift.io/v1beta1/namespaces/e2e-clusters-rgghl/hostedclusters/custom-config-kb5xn?timeout=5m0s": dial tcp 52.2.147.43:6443: i/o timeout
eventually.go:104: Failed to get *v1beta1.HostedCluster: Get "https://a80dbb95897334d959afdb8a435afe53-cf3797a37e99bd7b.elb.us-east-1.amazonaws.com:6443/apis/hypershift.openshift.io/v1beta1/namespaces/e2e-clusters-rgghl/hostedclusters/custom-config-kb5xn?timeout=5m0s": dial tcp 52.2.147.43:6443: connect: connection refused
eventually.go:104: Failed to get *v1beta1.HostedCluster: Get "https://a80dbb95897334d959afdb8a435afe53-cf3797a37e99bd7b.elb.us-east-1.amazonaws.com:6443/apis/hypershift.openshift.io/v1beta1/namespaces/e2e-clusters-rgghl/hostedclusters/custom-config-kb5xn?timeout=5m0s": dial tcp 52.2.147.43:6443: i/o timeout
eventually.go:104: Failed to get *v1beta1.HostedCluster: Get "https://a80dbb95897334d959afdb8a435afe53-cf3797a37e99bd7b.elb.us-east-1.amazonaws.com:6443/apis/hypershift.openshift.io/v1beta1/namespaces/e2e-clusters-rgghl/hostedclusters/custom-config-kb5xn?timeout=5m0s": dial tcp 52.2.147.43:6443: connect: connection refused
eventually.go:104: Failed to get *v1beta1.HostedCluster: Get "https://a80dbb95897334d959afdb8a435afe53-cf3797a37e99bd7b.elb.us-east-1.amazonaws.com:6443/apis/hypershift.openshift.io/v1beta1/namespaces/e2e-clusters-rgghl/hostedclusters/custom-config-kb5xn?timeout=5m0s": dial tcp 52.2.147.43:6443: i/o timeout
eventually.go:104: Failed to get *v1beta1.HostedCluster: the server was unable to return a response in the time allotted, but may still be processing the request (get hostedclusters.hypershift.openshift.io custom-config-kb5xn)
util.go:598: Successfully waited for HostedCluster e2e-clusters-rgghl/custom-config-kb5xn to rollout in 15m51s
util.go:2974: Successfully waited for HostedCluster e2e-clusters-rgghl/custom-config-kb5xn to have valid conditions in 2m24.025s
TestCreateClusterCustomConfig/ValidateHostedCluster/EnsureNoCrashingPods
250ms
util.go:780: Container aws-ebs-csi-driver-operator in pod aws-ebs-csi-driver-operator-66bf48d8-vsh2f has a restartCount > 0 (6)
util.go:777: Leader election failure detected in container manager in pod capi-provider-6d55b4fb4c-hgjs9
util.go:780: Container manager in pod cluster-api-74cdc8869b-8rn6s has a restartCount > 0 (6)
util.go:780: Container cluster-storage-operator in pod cluster-storage-operator-58fd7ddb6d-qpknl has a restartCount > 0 (6)
util.go:780: Container control-plane-operator in pod control-plane-operator-69dcb79b88-8lswq has a restartCount > 0 (11)
util.go:780: Container control-plane-pki-operator in pod control-plane-pki-operator-7d854c545d-xbr7p has a restartCount > 0 (4)
util.go:780: Container csi-snapshot-controller-operator in pod csi-snapshot-controller-operator-58486f79b5-hdbnv has a restartCount > 0 (5)
util.go:780: Container hosted-cluster-config-operator in pod hosted-cluster-config-operator-59c48866c4-j275d has a restartCount > 0 (7)
TestCreateClusterPrivate
45m30.84s
hypershift_framework.go:459: Successfully created hostedcluster e2e-clusters-8gcd6/private-vzfx8 in 19s
hypershift_framework.go:128: Summarizing unexpected conditions for HostedCluster private-vzfx8
util.go:2974: Successfully waited for HostedCluster e2e-clusters-8gcd6/private-vzfx8 to have valid conditions in 0s
hypershift_framework.go:278: skipping postTeardown()
hypershift_framework.go:256: skipping teardown, already called
TestCreateClusterPrivate/ValidateHostedCluster
33m18.25s
util.go:291: Successfully waited for kubeconfig to be published for HostedCluster e2e-clusters-8gcd6/private-vzfx8 in 1m3s
util.go:308: Successfully waited for kubeconfig secret to have data in 0s
eventually.go:258: Failed to get **v1beta1.NodePool: rpc error: code = Unavailable desc = keepalive ping failed to receive ACK within timeout
eventually.go:258: Failed to get **v1beta1.NodePool: the server was unable to return a response in the time allotted, but may still be processing the request (get nodepools.hypershift.openshift.io)
eventually.go:258: Failed to get **v1beta1.NodePool: Get "https://a80dbb95897334d959afdb8a435afe53-cf3797a37e99bd7b.elb.us-east-1.amazonaws.com:6443/apis/hypershift.openshift.io/v1beta1/namespaces/e2e-clusters-8gcd6/nodepools?timeout=5m0s": dial tcp 52.2.147.43:6443: connect: connection refused - error from a previous attempt: http2: client connection lost
eventually.go:258: Failed to get **v1beta1.NodePool: the server was unable to return a response in the time allotted, but may still be processing the request (get nodepools.hypershift.openshift.io)
eventually.go:258: Failed to get **v1beta1.NodePool: Get "https://a80dbb95897334d959afdb8a435afe53-cf3797a37e99bd7b.elb.us-east-1.amazonaws.com:6443/apis/hypershift.openshift.io/v1beta1/namespaces/e2e-clusters-8gcd6/nodepools?timeout=5m0s": dial tcp 52.2.147.43:6443: connect: connection refused - error from a previous attempt: read tcp 10.128.30.183:52708->52.2.147.43:6443: read: connection reset by peer
util.go:695: Successfully waited for NodePools for HostedCluster e2e-clusters-8gcd6/private-vzfx8 to have all of their desired nodes in 14m21s
eventually.go:104: Failed to get *v1beta1.HostedCluster: Get "https://a80dbb95897334d959afdb8a435afe53-cf3797a37e99bd7b.elb.us-east-1.amazonaws.com:6443/apis/hypershift.openshift.io/v1beta1/namespaces/e2e-clusters-8gcd6/hostedclusters/private-vzfx8?timeout=5m0s": dial tcp 52.2.147.43:6443: i/o timeout - error from a previous attempt: http2: client connection lost
eventually.go:104: Failed to get *v1beta1.HostedCluster: Get "https://a80dbb95897334d959afdb8a435afe53-cf3797a37e99bd7b.elb.us-east-1.amazonaws.com:6443/apis/hypershift.openshift.io/v1beta1/namespaces/e2e-clusters-8gcd6/hostedclusters/private-vzfx8?timeout=5m0s": dial tcp 52.2.147.43:6443: i/o timeout
eventually.go:104: Failed to get *v1beta1.HostedCluster: Get "https://a80dbb95897334d959afdb8a435afe53-cf3797a37e99bd7b.elb.us-east-1.amazonaws.com:6443/apis/hypershift.openshift.io/v1beta1/namespaces/e2e-clusters-8gcd6/hostedclusters/private-vzfx8?timeout=5m0s": dial tcp 52.2.147.43:6443: connect: connection refused
eventually.go:104: Failed to get *v1beta1.HostedCluster: Get "https://a80dbb95897334d959afdb8a435afe53-cf3797a37e99bd7b.elb.us-east-1.amazonaws.com:6443/apis/hypershift.openshift.io/v1beta1/namespaces/e2e-clusters-8gcd6/hostedclusters/private-vzfx8?timeout=5m0s": dial tcp 52.2.147.43:6443: i/o timeout
eventually.go:104: Failed to get *v1beta1.HostedCluster: Get "https://a80dbb95897334d959afdb8a435afe53-cf3797a37e99bd7b.elb.us-east-1.amazonaws.com:6443/apis/hypershift.openshift.io/v1beta1/namespaces/e2e-clusters-8gcd6/hostedclusters/private-vzfx8?timeout=5m0s": dial tcp 52.2.147.43:6443: connect: connection refused
eventually.go:104: Failed to get *v1beta1.HostedCluster: the server was unable to return a response in the time allotted, but may still be processing the request (get hostedclusters.hypershift.openshift.io private-vzfx8)
util.go:598: Successfully waited for HostedCluster e2e-clusters-8gcd6/private-vzfx8 to rollout in 14m45s
util.go:2974: Successfully waited for HostedCluster e2e-clusters-8gcd6/private-vzfx8 to have valid conditions in 3m9s
TestCreateClusterPrivate/ValidateHostedCluster/EnsureNoCrashingPods
80ms
util.go:780: Container aws-ebs-csi-driver-operator in pod aws-ebs-csi-driver-operator-d8c8cf695-mhfkm has a restartCount > 0 (5)
util.go:780: Container manager in pod capi-provider-ff5d8f5c7-57mnn has a restartCount > 0 (9)
util.go:780: Container manager in pod cluster-api-74c88b67f9-86ttj has a restartCount > 0 (6)
util.go:780: Container cluster-storage-operator in pod cluster-storage-operator-5c9cd5c79-chcvb has a restartCount > 0 (5)
util.go:780: Container control-plane-operator in pod control-plane-operator-77cf5c859f-7spkt has a restartCount > 0 (11)
util.go:780: Container control-plane-pki-operator in pod control-plane-pki-operator-675d856b5f-95d97 has a restartCount > 0 (4)
util.go:780: Container csi-snapshot-controller-operator in pod csi-snapshot-controller-operator-7fbd87bbdb-5m245 has a restartCount > 0 (6)
util.go:780: Container hosted-cluster-config-operator in pod hosted-cluster-config-operator-56b98466fd-25mn9 has a restartCount > 0 (6)
TestCreateClusterPrivateWithRouteKAS
45m4.4s
hypershift_framework.go:459: Successfully created hostedcluster e2e-clusters-n44kj/private-znqjf in 33s
hypershift_framework.go:128: Summarizing unexpected conditions for HostedCluster private-znqjf
util.go:2974: Successfully waited for HostedCluster e2e-clusters-n44kj/private-znqjf to have valid conditions in 0s
hypershift_framework.go:278: skipping postTeardown()
hypershift_framework.go:256: skipping teardown, already called
TestCreateClusterPrivateWithRouteKAS/ValidateHostedCluster
30m48.38s
util.go:291: Successfully waited for kubeconfig to be published for HostedCluster e2e-clusters-n44kj/private-znqjf in 1m12s
util.go:308: Successfully waited for kubeconfig secret to have data in 0s
eventually.go:258: Failed to get **v1beta1.NodePool: rpc error: code = Unavailable desc = keepalive ping failed to receive ACK within timeout
eventually.go:258: Failed to get **v1beta1.NodePool: the server was unable to return a response in the time allotted, but may still be processing the request (get nodepools.hypershift.openshift.io)
eventually.go:258: Failed to get **v1beta1.NodePool: Get "https://a80dbb95897334d959afdb8a435afe53-cf3797a37e99bd7b.elb.us-east-1.amazonaws.com:6443/apis/hypershift.openshift.io/v1beta1/namespaces/e2e-clusters-n44kj/nodepools?timeout=5m0s": stream error: stream ID 7; INTERNAL_ERROR; received from peer - error from a previous attempt: http2: client connection lost
eventually.go:258: Failed to get **v1beta1.NodePool: Get "https://a80dbb95897334d959afdb8a435afe53-cf3797a37e99bd7b.elb.us-east-1.amazonaws.com:6443/apis/hypershift.openshift.io/v1beta1/namespaces/e2e-clusters-n44kj/nodepools?timeout=5m0s": dial tcp 52.2.147.43:6443: connect: connection refused - error from a previous attempt: read tcp 10.128.30.183:52708->52.2.147.43:6443: read: connection reset by peer
util.go:695: Successfully waited for NodePools for HostedCluster e2e-clusters-n44kj/private-znqjf to have all of their desired nodes in 13m42s
eventually.go:104: Failed to get *v1beta1.HostedCluster: Get "https://a80dbb95897334d959afdb8a435afe53-cf3797a37e99bd7b.elb.us-east-1.amazonaws.com:6443/apis/hypershift.openshift.io/v1beta1/namespaces/e2e-clusters-n44kj/hostedclusters/private-znqjf?timeout=5m0s": net/http: TLS handshake timeout - error from a previous attempt: http2: client connection lost
eventually.go:104: Failed to get *v1beta1.HostedCluster: Get "https://a80dbb95897334d959afdb8a435afe53-cf3797a37e99bd7b.elb.us-east-1.amazonaws.com:6443/apis/hypershift.openshift.io/v1beta1/namespaces/e2e-clusters-n44kj/hostedclusters/private-znqjf?timeout=5m0s": dial tcp 52.2.147.43:6443: connect: connection refused
eventually.go:104: Failed to get *v1beta1.HostedCluster: Get "https://a80dbb95897334d959afdb8a435afe53-cf3797a37e99bd7b.elb.us-east-1.amazonaws.com:6443/apis/hypershift.openshift.io/v1beta1/namespaces/e2e-clusters-n44kj/hostedclusters/private-znqjf?timeout=5m0s": net/http: TLS handshake timeout
eventually.go:104: Failed to get *v1beta1.HostedCluster: Get "https://a80dbb95897334d959afdb8a435afe53-cf3797a37e99bd7b.elb.us-east-1.amazonaws.com:6443/apis/hypershift.openshift.io/v1beta1/namespaces/e2e-clusters-n44kj/hostedclusters/private-znqjf?timeout=5m0s": dial tcp 52.2.147.43:6443: i/o timeout
eventually.go:104: Failed to get *v1beta1.HostedCluster: Get "https://a80dbb95897334d959afdb8a435afe53-cf3797a37e99bd7b.elb.us-east-1.amazonaws.com:6443/apis/hypershift.openshift.io/v1beta1/namespaces/e2e-clusters-n44kj/hostedclusters/private-znqjf?timeout=5m0s": dial tcp 52.2.147.43:6443: connect: connection refused
eventually.go:104: Failed to get *v1beta1.HostedCluster: Get "https://a80dbb95897334d959afdb8a435afe53-cf3797a37e99bd7b.elb.us-east-1.amazonaws.com:6443/apis/hypershift.openshift.io/v1beta1/namespaces/e2e-clusters-n44kj/hostedclusters/private-znqjf?timeout=5m0s": dial tcp 52.2.147.43:6443: i/o timeout
eventually.go:104: Failed to get *v1beta1.HostedCluster: Get "https://a80dbb95897334d959afdb8a435afe53-cf3797a37e99bd7b.elb.us-east-1.amazonaws.com:6443/apis/hypershift.openshift.io/v1beta1/namespaces/e2e-clusters-n44kj/hostedclusters/private-znqjf?timeout=5m0s": dial tcp 52.2.147.43:6443: connect: connection refused
eventually.go:104: Failed to get *v1beta1.HostedCluster: Get "https://a80dbb95897334d959afdb8a435afe53-cf3797a37e99bd7b.elb.us-east-1.amazonaws.com:6443/apis/hypershift.openshift.io/v1beta1/namespaces/e2e-clusters-n44kj/hostedclusters/private-znqjf?timeout=5m0s": dial tcp 52.2.147.43:6443: i/o timeout
eventually.go:104: Failed to get *v1beta1.HostedCluster: the server was unable to return a response in the time allotted, but may still be processing the request (get hostedclusters.hypershift.openshift.io private-znqjf)
util.go:598: Successfully waited for HostedCluster e2e-clusters-n44kj/private-znqjf to rollout in 15m0s
util.go:2974: Successfully waited for HostedCluster e2e-clusters-n44kj/private-znqjf to have valid conditions in 54.025s
TestCreateClusterPrivateWithRouteKAS/ValidateHostedCluster/EnsureNoCrashingPods
120ms
util.go:780: Container aws-ebs-csi-driver-operator in pod aws-ebs-csi-driver-operator-59d9769cc-kf876 has a restartCount > 0 (5)
util.go:780: Container manager in pod capi-provider-5cbb58588b-f2snf has a restartCount > 0 (9)
util.go:780: Container manager in pod cluster-api-6984566d48-8zxmk has a restartCount > 0 (6)
util.go:780: Container cluster-storage-operator in pod cluster-storage-operator-788d4f5df5-wd44z has a restartCount > 0 (5)
util.go:780: Container control-plane-operator in pod control-plane-operator-679fd8b4c9-kwj85 has a restartCount > 0 (11)
util.go:780: Container control-plane-pki-operator in pod control-plane-pki-operator-5b7f56df98-qrjp8 has a restartCount > 0 (4)
util.go:780: Container csi-snapshot-controller-operator in pod csi-snapshot-controller-operator-9bd8c7474-kvb2t has a restartCount > 0 (5)
util.go:780: Container hosted-cluster-config-operator in pod hosted-cluster-config-operator-566465dc65-f6tln has a restartCount > 0 (6)
TestCreateClusterProxy
44m5.58s
hypershift_framework.go:459: Successfully created hostedcluster e2e-clusters-bmvzz/proxy-8frwl in 26s
hypershift_framework.go:128: Summarizing unexpected conditions for HostedCluster proxy-8frwl
util.go:2974: Successfully waited for HostedCluster e2e-clusters-bmvzz/proxy-8frwl to have valid conditions in 0s
hypershift_framework.go:278: skipping postTeardown()
hypershift_framework.go:256: skipping teardown, already called
TestCreateClusterProxy/ValidateHostedCluster
31m3.51s
util.go:291: Successfully waited for kubeconfig to be published for HostedCluster e2e-clusters-bmvzz/proxy-8frwl in 57s
util.go:308: Successfully waited for kubeconfig secret to have data in 0s
eventually.go:104: Failed to get *v1.SelfSubjectReview: Post "https://api-proxy-8frwl.service.ci.hypershift.devcluster.openshift.com:443/apis/authentication.k8s.io/v1/selfsubjectreviews": dial tcp: lookup api-proxy-8frwl.service.ci.hypershift.devcluster.openshift.com on 172.30.0.10:53: no such host
eventually.go:104: Failed to get *v1.SelfSubjectReview: Post "https://api-proxy-8frwl.service.ci.hypershift.devcluster.openshift.com:443/apis/authentication.k8s.io/v1/selfsubjectreviews": dial tcp 35.168.204.82:443: i/o timeout
util.go:370: Successfully waited for a successful connection to the guest API server in 2m7.1s
util.go:565: Successfully waited for 2 nodes to become ready in 11m21s
eventually.go:104: Failed to get *v1beta1.HostedCluster: Get "https://a80dbb95897334d959afdb8a435afe53-cf3797a37e99bd7b.elb.us-east-1.amazonaws.com:6443/apis/hypershift.openshift.io/v1beta1/namespaces/e2e-clusters-bmvzz/hostedclusters/proxy-8frwl?timeout=5m0s": dial tcp 52.2.147.43:6443: connect: connection refused - error from a previous attempt: http2: client connection lost
eventually.go:104: Failed to get *v1beta1.HostedCluster: Get "https://a80dbb95897334d959afdb8a435afe53-cf3797a37e99bd7b.elb.us-east-1.amazonaws.com:6443/apis/hypershift.openshift.io/v1beta1/namespaces/e2e-clusters-bmvzz/hostedclusters/proxy-8frwl?timeout=5m0s": dial tcp 52.2.147.43:6443: connect: connection refused
eventually.go:104: Failed to get *v1beta1.HostedCluster: Get "https://a80dbb95897334d959afdb8a435afe53-cf3797a37e99bd7b.elb.us-east-1.amazonaws.com:6443/apis/hypershift.openshift.io/v1beta1/namespaces/e2e-clusters-bmvzz/hostedclusters/proxy-8frwl?timeout=5m0s": dial tcp 52.2.147.43:6443: i/o timeout
eventually.go:104: Failed to get *v1beta1.HostedCluster: Get "https://a80dbb95897334d959afdb8a435afe53-cf3797a37e99bd7b.elb.us-east-1.amazonaws.com:6443/apis/hypershift.openshift.io/v1beta1/namespaces/e2e-clusters-bmvzz/hostedclusters/proxy-8frwl?timeout=5m0s": dial tcp 52.2.147.43:6443: connect: connection refused
eventually.go:104: Failed to get *v1beta1.HostedCluster: Get "https://a80dbb95897334d959afdb8a435afe53-cf3797a37e99bd7b.elb.us-east-1.amazonaws.com:6443/apis/hypershift.openshift.io/v1beta1/namespaces/e2e-clusters-bmvzz/hostedclusters/proxy-8frwl?timeout=5m0s": dial tcp 52.2.147.43:6443: i/o timeout
eventually.go:104: Failed to get *v1beta1.HostedCluster: Get "https://a80dbb95897334d959afdb8a435afe53-cf3797a37e99bd7b.elb.us-east-1.amazonaws.com:6443/apis/hypershift.openshift.io/v1beta1/namespaces/e2e-clusters-bmvzz/hostedclusters/proxy-8frwl?timeout=5m0s": dial tcp 52.2.147.43:6443: connect: connection refused
eventually.go:104: Failed to get *v1beta1.HostedCluster: Get "https://a80dbb95897334d959afdb8a435afe53-cf3797a37e99bd7b.elb.us-east-1.amazonaws.com:6443/apis/hypershift.openshift.io/v1beta1/namespaces/e2e-clusters-bmvzz/hostedclusters/proxy-8frwl?timeout=5m0s": dial tcp 52.2.147.43:6443: i/o timeout
eventually.go:104: Failed to get *v1beta1.HostedCluster: Get "https://a80dbb95897334d959afdb8a435afe53-cf3797a37e99bd7b.elb.us-east-1.amazonaws.com:6443/apis/hypershift.openshift.io/v1beta1/namespaces/e2e-clusters-bmvzz/hostedclusters/proxy-8frwl?timeout=5m0s": dial tcp 52.2.147.43:6443: connect: connection refused
eventually.go:104: Failed to get *v1beta1.HostedCluster: Get "https://a80dbb95897334d959afdb8a435afe53-cf3797a37e99bd7b.elb.us-east-1.amazonaws.com:6443/apis/hypershift.openshift.io/v1beta1/namespaces/e2e-clusters-bmvzz/hostedclusters/proxy-8frwl?timeout=5m0s": dial tcp 52.2.147.43:6443: i/o timeout
eventually.go:104: Failed to get *v1beta1.HostedCluster: Get "https://a80dbb95897334d959afdb8a435afe53-cf3797a37e99bd7b.elb.us-east-1.amazonaws.com:6443/apis/hypershift.openshift.io/v1beta1/namespaces/e2e-clusters-bmvzz/hostedclusters/proxy-8frwl?timeout=5m0s": dial tcp 52.2.147.43:6443: connect: connection refused
eventually.go:104: Failed to get *v1beta1.HostedCluster: Get "https://a80dbb95897334d959afdb8a435afe53-cf3797a37e99bd7b.elb.us-east-1.amazonaws.com:6443/apis/hypershift.openshift.io/v1beta1/namespaces/e2e-clusters-bmvzz/hostedclusters/proxy-8frwl?timeout=5m0s": dial tcp 52.2.147.43:6443: i/o timeout
eventually.go:104: Failed to get *v1beta1.HostedCluster: Get "https://a80dbb95897334d959afdb8a435afe53-cf3797a37e99bd7b.elb.us-east-1.amazonaws.com:6443/apis/hypershift.openshift.io/v1beta1/namespaces/e2e-clusters-bmvzz/hostedclusters/proxy-8frwl?timeout=5m0s": dial tcp 52.2.147.43:6443: connect: connection refused
eventually.go:104: Failed to get *v1beta1.HostedCluster: the server was unable to return a response in the time allotted, but may still be processing the request (get hostedclusters.hypershift.openshift.io proxy-8frwl)
util.go:598: Successfully waited for HostedCluster e2e-clusters-bmvzz/proxy-8frwl to rollout in 15m24s
util.go:2974: Successfully waited for HostedCluster e2e-clusters-bmvzz/proxy-8frwl to have valid conditions in 1m9.025s
TestCreateClusterProxy/ValidateHostedCluster/EnsureNoCrashingPods
110ms
util.go:780: Container aws-ebs-csi-driver-operator in pod aws-ebs-csi-driver-operator-66fc454db4-bkkr4 has a restartCount > 0 (6)
util.go:780: Container manager in pod capi-provider-66cd67dc5c-srm8z has a restartCount > 0 (8)
util.go:780: Container manager in pod cluster-api-55d9754765-8fnzw has a restartCount > 0 (6)
util.go:780: Container cluster-storage-operator in pod cluster-storage-operator-7cc75b4984-w2sfz has a restartCount > 0 (5)
util.go:780: Container control-plane-operator in pod control-plane-operator-77c5d87869-p89m7 has a restartCount > 0 (7)
util.go:780: Container control-plane-pki-operator in pod control-plane-pki-operator-55d5f6455f-mprtp has a restartCount > 0 (4)
util.go:780: Container csi-snapshot-controller-operator in pod csi-snapshot-controller-operator-8d4d96d65-swwj8 has a restartCount > 0 (6)
util.go:780: Container hosted-cluster-config-operator in pod hosted-cluster-config-operator-c5749577c-6k762 has a restartCount > 0 (6)
TestCreateClusterRequestServingIsolation
15m33.12s
requestserving.go:105: Created request serving nodepool clusters/2c6b4ffc5e1801e6dcbf-mgmt-reqserving-4krdl
requestserving.go:105: Created request serving nodepool clusters/2c6b4ffc5e1801e6dcbf-mgmt-reqserving-q6ltm
requestserving.go:113: Created non request serving nodepool clusters/2c6b4ffc5e1801e6dcbf-mgmt-non-reqserving-2vf9d
requestserving.go:113: Created non request serving nodepool clusters/2c6b4ffc5e1801e6dcbf-mgmt-non-reqserving-t4gvz
requestserving.go:113: Created non request serving nodepool clusters/2c6b4ffc5e1801e6dcbf-mgmt-non-reqserving-f8hq6
util.go:565: Successfully waited for 1 nodes to become ready for NodePool clusters/2c6b4ffc5e1801e6dcbf-mgmt-reqserving-4krdl in 3m48s
util.go:565: Successfully waited for 1 nodes to become ready for NodePool clusters/2c6b4ffc5e1801e6dcbf-mgmt-reqserving-q6ltm in 48.025s
util.go:565: Successfully waited for 1 nodes to become ready for NodePool clusters/2c6b4ffc5e1801e6dcbf-mgmt-non-reqserving-2vf9d in 100ms
util.go:565: Successfully waited for 1 nodes to become ready for NodePool clusters/2c6b4ffc5e1801e6dcbf-mgmt-non-reqserving-t4gvz in 3.025s
util.go:565: Successfully waited for 1 nodes to become ready for NodePool clusters/2c6b4ffc5e1801e6dcbf-mgmt-non-reqserving-f8hq6 in 100ms
create_cluster_test.go:2803: Sufficient zones available for InfrastructureAvailabilityPolicy HighlyAvailable
hypershift_framework.go:447: failed to create cluster, tearing down: failed to create IAM: failed to discover OIDC bucket configuration: failed to get the kube-public/oidc-storage-provider-s3-config configmap: the server was unable to return a response in the time allotted, but may still be processing the request (get configmaps oidc-storage-provider-s3-config)
requestserving.go:132: Tearing down custom nodepool clusters/2c6b4ffc5e1801e6dcbf-mgmt-reqserving-4krdl
requestserving.go:132: Tearing down custom nodepool clusters/2c6b4ffc5e1801e6dcbf-mgmt-reqserving-q6ltm
requestserving.go:132: Tearing down custom nodepool clusters/2c6b4ffc5e1801e6dcbf-mgmt-non-reqserving-2vf9d
requestserving.go:132: Tearing down custom nodepool clusters/2c6b4ffc5e1801e6dcbf-mgmt-non-reqserving-t4gvz
requestserving.go:132: Tearing down custom nodepool clusters/2c6b4ffc5e1801e6dcbf-mgmt-non-reqserving-f8hq6
TestCreateClusterRequestServingIsolation/Teardown
2m43.51s
fixture.go:395: Failed saving machine console logs; this is nonfatal: failed to get machine console logs: failed to get hostedcluster: hostedclusters.hypershift.openshift.io "request-serving-isolation-msc94" not found
fixture.go:403: Failed to dump machine journals; this is nonfatal: no SSH secret specified for cluster, cannot dump journals
fixture.go:341: SUCCESS: found no remaining guest resources
hypershift_framework.go:520: Destroyed cluster. Namespace: e2e-clusters-vbs8b, name: request-serving-isolation-msc94
hypershift_framework.go:534: Failed to delete test namespace: namespace still exists after deletion timeout: context deadline exceeded
fixture.go:395: Failed saving machine console logs; this is nonfatal: failed to get machine console logs: failed to get hostedcluster: hostedclusters.hypershift.openshift.io "request-serving-isolation-msc94" not found
fixture.go:403: Failed to dump machine journals; this is nonfatal: no SSH secret specified for cluster, cannot dump journals
TestNodePool
0s
TestNodePool/HostedCluster0
1h23m52.55s
hypershift_framework.go:459: Successfully created hostedcluster e2e-clusters-csxkr/node-pool-krqmk in 26s
hypershift_framework.go:128: Summarizing unexpected conditions for HostedCluster node-pool-krqmk
util.go:2974: Failed to wait for HostedCluster e2e-clusters-csxkr/node-pool-krqmk to have valid conditions in 2s: context deadline exceeded
eventually.go:224: observed *v1beta1.HostedCluster e2e-clusters-csxkr/node-pool-krqmk invalid at RV 174005 after 2s:
eventually.go:227: - incorrect condition: wanted ClusterVersionSucceeding=False, got ClusterVersionSucceeding=True: FromClusterVersion
eventually.go:227: - incorrect condition: wanted DataPlaneConnectionAvailable=Unknown, got DataPlaneConnectionAvailable=True: AsExpected(All is well)
eventually.go:227: - incorrect condition: wanted ClusterVersionAvailable=False, got ClusterVersionAvailable=True: FromClusterVersion(Done applying 4.22.0-0.ci-2026-02-24-112320-test-ci-op-sywg36rm-latest)
eventually.go:227: - incorrect condition: wanted ClusterVersionProgressing=True, got ClusterVersionProgressing=False: FromClusterVersion(Cluster version is 4.22.0-0.ci-2026-02-24-112320-test-ci-op-sywg36rm-latest)
hypershift_framework.go:278: skipping postTeardown()
hypershift_framework.go:256: skipping teardown, already called
TestNodePool/HostedCluster0/Main
100ms
util.go:291: Successfully waited for kubeconfig to be published for HostedCluster e2e-clusters-csxkr/node-pool-krqmk in 50ms
util.go:308: Successfully waited for kubeconfig secret to have data in 25ms
util.go:370: Successfully waited for a successful connection to the guest API server in 0s
TestNodePool/HostedCluster0/Main/TestKMSRootVolumeEncryption
34m45.06s
nodepool_kms_root_volume_test.go:42: Starting test KMSRootVolumeTest
nodepool_kms_root_volume_test.go:54: retrieved KMS ARN: arn:aws:kms:us-east-1:820196288204:key/d3cdd9e0-3fd1-47a4-a559-72ae3672c5a6
util.go:565: Successfully waited for 1 nodes to become ready for NodePool e2e-clusters-csxkr/node-pool-krqmk-test-kms-root-volume in 24m45s
eventually.go:104: Failed to get *v1beta1.NodePool: Get "https://a80dbb95897334d959afdb8a435afe53-cf3797a37e99bd7b.elb.us-east-1.amazonaws.com:6443/apis/hypershift.openshift.io/v1beta1/namespaces/e2e-clusters-csxkr/nodepools/node-pool-krqmk-test-kms-root-volume?timeout=5m0s": dial tcp 52.2.147.43:6443: connect: connection refused - error from a previous attempt: http2: client connection lost
eventually.go:104: Failed to get *v1beta1.NodePool: Get "https://a80dbb95897334d959afdb8a435afe53-cf3797a37e99bd7b.elb.us-east-1.amazonaws.com:6443/apis/hypershift.openshift.io/v1beta1/namespaces/e2e-clusters-csxkr/nodepools/node-pool-krqmk-test-kms-root-volume?timeout=5m0s": dial tcp 52.2.147.43:6443: connect: connection refused
nodepool_test.go:404: Failed to wait for NodePool e2e-clusters-csxkr/node-pool-krqmk-test-kms-root-volume to have correct status in 10m0s: context deadline exceeded
eventually.go:224: observed *v1beta1.NodePool e2e-clusters-csxkr/node-pool-krqmk-test-kms-root-volume invalid at RV 116500 after 10m0s: incorrect condition: wanted ValidGeneratedPayload=True, got ValidGeneratedPayload=Unknown(Unable to get status data from token secret)
TestNodePool/HostedCluster0/Main/TestNTOMachineConfigAppliedInPlace
28m21.2s
nodepool_nto_machineconfig_test.go:67: Starting test NTOMachineConfigRolloutTest
util.go:565: Successfully waited for 2 nodes to become ready for NodePool e2e-clusters-csxkr/node-pool-krqmk-test-ntomachineconfig-inplace in 20m33s
nodepool_test.go:404: Successfully waited for NodePool e2e-clusters-csxkr/node-pool-krqmk-test-ntomachineconfig-inplace to have correct status in 2.8s
util.go:481: Successfully waited for NodePool e2e-clusters-csxkr/node-pool-krqmk-test-ntomachineconfig-inplace to start config update in 15.025s
util.go:497: Successfully waited for NodePool e2e-clusters-csxkr/node-pool-krqmk-test-ntomachineconfig-inplace to finish config update in 7m20s
nodepool_machineconfig_test.go:165: Successfully waited for all pods in the DaemonSet kube-system/node-pool-krqmk-test-ntomachineconfig-inplace to be ready in 10s
util.go:565: Successfully waited for 2 nodes to become ready for NodePool e2e-clusters-csxkr/node-pool-krqmk-test-ntomachineconfig-inplace in 0s
nodepool_test.go:404: Successfully waited for NodePool e2e-clusters-csxkr/node-pool-krqmk-test-ntomachineconfig-inplace to have correct status in 0s
TestNodePool/HostedCluster0/Main/TestNTOMachineConfigAppliedInPlace/EnsureNoCrashingPods
110ms
util.go:780: Container aws-ebs-csi-driver-operator in pod aws-ebs-csi-driver-operator-675c76b7b4-kb9wz has a restartCount > 0 (5)
util.go:777: Leader election failure detected in container manager in pod capi-provider-66db448b69-rwlm5
util.go:780: Container manager in pod cluster-api-548b76dcd8-sk95h has a restartCount > 0 (6)
util.go:780: Container cluster-storage-operator in pod cluster-storage-operator-7d54fbdfbf-rxvxq has a restartCount > 0 (5)
util.go:777: Leader election failure detected in container control-plane-operator in pod control-plane-operator-6db68f4557-w9554
util.go:780: Container control-plane-pki-operator in pod control-plane-pki-operator-7959f99dcc-84cpr has a restartCount > 0 (4)
util.go:780: Container csi-snapshot-controller-operator in pod csi-snapshot-controller-operator-59994dbbc4-pdvxv has a restartCount > 0 (5)
util.go:777: Leader election failure detected in container hosted-cluster-config-operator in pod hosted-cluster-config-operator-864574ccb6-8tl2x
TestNodePool/HostedCluster0/Main/TestNTOMachineConfigGetsRolledOut
19m33.88s
nodepool_nto_machineconfig_test.go:67: Starting test NTOMachineConfigRolloutTest
util.go:565: Successfully waited for 2 nodes to become ready for NodePool e2e-clusters-csxkr/node-pool-krqmk-test-ntomachineconfig-replace in 8m15s
nodepool_test.go:404: Successfully waited for NodePool e2e-clusters-csxkr/node-pool-krqmk-test-ntomachineconfig-replace to have correct status in 1.45s
util.go:481: Successfully waited for NodePool e2e-clusters-csxkr/node-pool-krqmk-test-ntomachineconfig-replace to start config update in 15.025s
util.go:497: Successfully waited for NodePool e2e-clusters-csxkr/node-pool-krqmk-test-ntomachineconfig-replace to finish config update in 20s
nodepool_machineconfig_test.go:165: Successfully waited for all pods in the DaemonSet kube-system/node-pool-krqmk-test-ntomachineconfig-replace to be ready in 9m30s
util.go:565: Successfully waited for 2 nodes to become ready for NodePool e2e-clusters-csxkr/node-pool-krqmk-test-ntomachineconfig-replace in 1m9s
nodepool_test.go:404: Successfully waited for NodePool e2e-clusters-csxkr/node-pool-krqmk-test-ntomachineconfig-replace to have correct status in 3s
TestNodePool/HostedCluster0/Main/TestNTOMachineConfigGetsRolledOut/EnsureNoCrashingPods
90ms
util.go:780: Container aws-ebs-csi-driver-operator in pod aws-ebs-csi-driver-operator-675c76b7b4-kb9wz has a restartCount > 0 (5)
util.go:777: Leader election failure detected in container manager in pod capi-provider-66db448b69-rwlm5
util.go:780: Container manager in pod cluster-api-548b76dcd8-sk95h has a restartCount > 0 (6)
util.go:780: Container cluster-storage-operator in pod cluster-storage-operator-7d54fbdfbf-rxvxq has a restartCount > 0 (5)
util.go:777: Leader election failure detected in container control-plane-operator in pod control-plane-operator-6db68f4557-w9554
util.go:780: Container control-plane-pki-operator in pod control-plane-pki-operator-7959f99dcc-84cpr has a restartCount > 0 (4)
util.go:780: Container csi-snapshot-controller-operator in pod csi-snapshot-controller-operator-59994dbbc4-pdvxv has a restartCount > 0 (5)
util.go:777: Leader election failure detected in container hosted-cluster-config-operator in pod hosted-cluster-config-operator-864574ccb6-8tl2x
TestNodePool/HostedCluster0/Main/TestNodepoolMachineconfigGetsRolledout
20m12.79s
nodepool_machineconfig_test.go:54: Starting test NodePoolMachineconfigRolloutTest
util.go:565: Successfully waited for 1 nodes to become ready for NodePool e2e-clusters-csxkr/node-pool-krqmk-test-machineconfig in 9m21s
nodepool_test.go:404: Successfully waited for NodePool e2e-clusters-csxkr/node-pool-krqmk-test-machineconfig to have correct status in 1.45s
util.go:481: Successfully waited for NodePool e2e-clusters-csxkr/node-pool-krqmk-test-machineconfig to start config update in 15.025s
util.go:497: Successfully waited for NodePool e2e-clusters-csxkr/node-pool-krqmk-test-machineconfig to finish config update in 20.025s
nodepool_machineconfig_test.go:165: Successfully waited for all pods in the DaemonSet kube-system/machineconfig-update-checker-replace to be ready in 10m15s
util.go:565: Successfully waited for 1 nodes to become ready for NodePool e2e-clusters-csxkr/node-pool-krqmk-test-machineconfig in 0s
nodepool_test.go:404: Successfully waited for NodePool e2e-clusters-csxkr/node-pool-krqmk-test-machineconfig to have correct status in 0s
TestNodePool/HostedCluster0/Main/TestNodepoolMachineconfigGetsRolledout/EnsureNoCrashingPods
80ms
util.go:780: Container aws-ebs-csi-driver-operator in pod aws-ebs-csi-driver-operator-675c76b7b4-kb9wz has a restartCount > 0 (5)
util.go:777: Leader election failure detected in container manager in pod capi-provider-66db448b69-rwlm5
util.go:780: Container manager in pod cluster-api-548b76dcd8-sk95h has a restartCount > 0 (6)
util.go:780: Container cluster-storage-operator in pod cluster-storage-operator-7d54fbdfbf-rxvxq has a restartCount > 0 (5)
util.go:777: Leader election failure detected in container control-plane-operator in pod control-plane-operator-6db68f4557-w9554
util.go:780: Container control-plane-pki-operator in pod control-plane-pki-operator-7959f99dcc-84cpr has a restartCount > 0 (4)
util.go:780: Container csi-snapshot-controller-operator in pod csi-snapshot-controller-operator-59994dbbc4-pdvxv has a restartCount > 0 (5)
util.go:777: Leader election failure detected in container hosted-cluster-config-operator in pod hosted-cluster-config-operator-864574ccb6-8tl2x
TestNodePool/HostedCluster0/Main/TestRollingUpgrade
23m42.02s
util.go:565: Successfully waited for 2 nodes to become ready for NodePool e2e-clusters-csxkr/node-pool-krqmk-test-rolling-upgrade in 13m42s
eventually.go:104: Failed to get *v1beta1.NodePool: Get "https://a80dbb95897334d959afdb8a435afe53-cf3797a37e99bd7b.elb.us-east-1.amazonaws.com:6443/apis/hypershift.openshift.io/v1beta1/namespaces/e2e-clusters-csxkr/nodepools/node-pool-krqmk-test-rolling-upgrade?timeout=5m0s": net/http: TLS handshake timeout - error from a previous attempt: http2: client connection lost
eventually.go:104: Failed to get *v1beta1.NodePool: Get "https://a80dbb95897334d959afdb8a435afe53-cf3797a37e99bd7b.elb.us-east-1.amazonaws.com:6443/apis/hypershift.openshift.io/v1beta1/namespaces/e2e-clusters-csxkr/nodepools/node-pool-krqmk-test-rolling-upgrade?timeout=5m0s": dial tcp 52.2.147.43:6443: connect: connection refused
eventually.go:104: Failed to get *v1beta1.NodePool: Get "https://a80dbb95897334d959afdb8a435afe53-cf3797a37e99bd7b.elb.us-east-1.amazonaws.com:6443/apis/hypershift.openshift.io/v1beta1/namespaces/e2e-clusters-csxkr/nodepools/node-pool-krqmk-test-rolling-upgrade?timeout=5m0s": dial tcp 52.2.147.43:6443: i/o timeout
eventually.go:104: Failed to get *v1beta1.NodePool: Get "https://a80dbb95897334d959afdb8a435afe53-cf3797a37e99bd7b.elb.us-east-1.amazonaws.com:6443/apis/hypershift.openshift.io/v1beta1/namespaces/e2e-clusters-csxkr/nodepools/node-pool-krqmk-test-rolling-upgrade?timeout=5m0s": net/http: TLS handshake timeout
eventually.go:104: Failed to get *v1beta1.NodePool: Get "https://a80dbb95897334d959afdb8a435afe53-cf3797a37e99bd7b.elb.us-east-1.amazonaws.com:6443/apis/hypershift.openshift.io/v1beta1/namespaces/e2e-clusters-csxkr/nodepools/node-pool-krqmk-test-rolling-upgrade?timeout=5m0s": dial tcp 52.2.147.43:6443: connect: connection refused
eventually.go:104: Failed to get *v1beta1.NodePool: Get "https://a80dbb95897334d959afdb8a435afe53-cf3797a37e99bd7b.elb.us-east-1.amazonaws.com:6443/apis/hypershift.openshift.io/v1beta1/namespaces/e2e-clusters-csxkr/nodepools/node-pool-krqmk-test-rolling-upgrade?timeout=5m0s": dial tcp 52.2.147.43:6443: i/o timeout
eventually.go:104: Failed to get *v1beta1.NodePool: Get "https://a80dbb95897334d959afdb8a435afe53-cf3797a37e99bd7b.elb.us-east-1.amazonaws.com:6443/apis/hypershift.openshift.io/v1beta1/namespaces/e2e-clusters-csxkr/nodepools/node-pool-krqmk-test-rolling-upgrade?timeout=5m0s": dial tcp 52.2.147.43:6443: connect: connection refused
eventually.go:104: Failed to get *v1beta1.NodePool: Get "https://a80dbb95897334d959afdb8a435afe53-cf3797a37e99bd7b.elb.us-east-1.amazonaws.com:6443/apis/hypershift.openshift.io/v1beta1/namespaces/e2e-clusters-csxkr/nodepools/node-pool-krqmk-test-rolling-upgrade?timeout=5m0s": dial tcp 52.2.147.43:6443: i/o timeout
eventually.go:104: Failed to get *v1beta1.NodePool: Get "https://a80dbb95897334d959afdb8a435afe53-cf3797a37e99bd7b.elb.us-east-1.amazonaws.com:6443/apis/hypershift.openshift.io/v1beta1/namespaces/e2e-clusters-csxkr/nodepools/node-pool-krqmk-test-rolling-upgrade?timeout=5m0s": dial tcp 52.2.147.43:6443: connect: connection refused
eventually.go:104: Failed to get *v1beta1.NodePool: Get "https://a80dbb95897334d959afdb8a435afe53-cf3797a37e99bd7b.elb.us-east-1.amazonaws.com:6443/apis/hypershift.openshift.io/v1beta1/namespaces/e2e-clusters-csxkr/nodepools/node-pool-krqmk-test-rolling-upgrade?timeout=5m0s": dial tcp 52.2.147.43:6443: i/o timeout
eventually.go:104: Failed to get *v1beta1.NodePool: Get "https://a80dbb95897334d959afdb8a435afe53-cf3797a37e99bd7b.elb.us-east-1.amazonaws.com:6443/apis/hypershift.openshift.io/v1beta1/namespaces/e2e-clusters-csxkr/nodepools/node-pool-krqmk-test-rolling-upgrade?timeout=5m0s": dial tcp 52.2.147.43:6443: connect: connection refused
eventually.go:104: Failed to get *v1beta1.NodePool: the server was unable to return a response in the time allotted, but may still be processing the request (get nodepools.hypershift.openshift.io node-pool-krqmk-test-rolling-upgrade)
eventually.go:104: Failed to get *v1beta1.NodePool: client rate limiter Wait returned an error: rate: Wait(n=1) would exceed context deadline
nodepool_test.go:404: Failed to wait for NodePool e2e-clusters-csxkr/node-pool-krqmk-test-rolling-upgrade to have correct status in 10m0s: context deadline exceeded
eventually.go:224: observed *v1beta1.NodePool e2e-clusters-csxkr/node-pool-krqmk-test-rolling-upgrade invalid at RV 106842 after 10m0s:
eventually.go:227: - incorrect condition: wanted UpdatingPlatformMachineTemplate=False, got UpdatingPlatformMachineTemplate=True: AsExpected(platform machine template update in progress. Target template: node-pool-krqmk-test-rolling-upgrade-9eb9d7e1)
eventually.go:227: - incorrect condition: wanted AllNodesHealthy=True, got AllNodesHealthy=False: NodeConditionsFailed(Machine node-pool-krqmk-test-rolling-upgrade-r92jk-pzz5h: NodeConditionsFailed Machine node-pool-krqmk-test-rolling-upgrade-r92jk-zldv8: NodeConditionsFailed )
eventually.go:227: - incorrect condition: wanted Ready=True, got Ready=False: WaitingForAvailableMachines(Minimum availability requires 2 replicas, current 0 available)
eventually.go:227: - incorrect condition: wanted UpdatingVersion=False, got UpdatingVersion=True: AsExpected(Updating version in progress. Target version: 4.22.0-0.ci-2026-02-24-112320-test-ci-op-sywg36rm-latest)
eventually.go:227: - incorrect condition: wanted UpdatingConfig=False, got UpdatingConfig=True: AsExpected(Updating config in progress. Target config: 6da27122)
TestNodePool/HostedCluster2
49m52.31s
hypershift_framework.go:459: Successfully created hostedcluster e2e-clusters-42tkq/node-pool-576lg in 22s
hypershift_framework.go:128: Summarizing unexpected conditions for HostedCluster node-pool-576lg
util.go:2974: Failed to wait for HostedCluster e2e-clusters-42tkq/node-pool-576lg to have valid conditions in 2s: context deadline exceeded
eventually.go:224: observed *v1beta1.HostedCluster e2e-clusters-42tkq/node-pool-576lg invalid at RV 175124 after 2s:
eventually.go:227: - incorrect condition: wanted DataPlaneConnectionAvailable=Unknown, got DataPlaneConnectionAvailable=True: AsExpected(All is well)
eventually.go:227: - incorrect condition: wanted ClusterVersionAvailable=False, got ClusterVersionAvailable=True: FromClusterVersion(Done applying 4.22.0-0.ci-2026-02-24-112320-test-ci-op-sywg36rm-latest)
eventually.go:227: - incorrect condition: wanted ClusterVersionSucceeding=False, got ClusterVersionSucceeding=True: FromClusterVersion
eventually.go:227: - incorrect condition: wanted ClusterVersionProgressing=True, got ClusterVersionProgressing=False: FromClusterVersion(Cluster version is 4.22.0-0.ci-2026-02-24-112320-test-ci-op-sywg36rm-latest)
hypershift_framework.go:278: skipping postTeardown()
hypershift_framework.go:256: skipping teardown, already called
TestNodePool/HostedCluster2/Main
3.34s
util.go:291: Successfully waited for kubeconfig to be published for HostedCluster e2e-clusters-42tkq/node-pool-576lg in 900ms
util.go:308: Successfully waited for kubeconfig secret to have data in 1.25s
util.go:370: Successfully waited for a successful connection to the guest API server in 0s
TestNodePool/HostedCluster2/Main/TestAdditionalTrustBundlePropagation
36m30.08s
nodepool_additionalTrustBundlePropagation_test.go:40: Starting AdditionalTrustBundlePropagationTest.
util.go:565: Successfully waited for 1 nodes to become ready for NodePool e2e-clusters-42tkq/node-pool-576lg-test-additional-trust-bundle-propagation in 10m43.025s
nodepool_test.go:404: Successfully waited for NodePool e2e-clusters-42tkq/node-pool-576lg-test-additional-trust-bundle-propagation to have correct status in 25ms
eventually.go:104: Failed to get *v1beta1.NodePool: Get "https://a80dbb95897334d959afdb8a435afe53-cf3797a37e99bd7b.elb.us-east-1.amazonaws.com:6443/apis/hypershift.openshift.io/v1beta1/namespaces/e2e-clusters-42tkq/nodepools/node-pool-576lg-test-additional-trust-bundle-propagation?timeout=5m0s": dial tcp 52.2.147.43:6443: connect: connection refused
nodepool_test.go:404: Successfully waited for NodePool e2e-clusters-42tkq/node-pool-576lg-test-additional-trust-bundle-propagation to have correct status in 5m36s
TestNodePool/HostedCluster2/Main/TestAdditionalTrustBundlePropagation/AdditionalTrustBundlePropagationTest
20m10.09s
nodepool_additionalTrustBundlePropagation_test.go:74: Updating hosted cluster with additional trust bundle. Bundle: additional-trust-bundle
nodepool_additionalTrustBundlePropagation_test.go:82: Successfully waited for Waiting for NodePool e2e-clusters-42tkq/node-pool-576lg-test-additional-trust-bundle-propagation to begin updating in 10.025s
eventually.go:104: Failed to get *v1beta1.NodePool: Get "https://a80dbb95897334d959afdb8a435afe53-cf3797a37e99bd7b.elb.us-east-1.amazonaws.com:6443/apis/hypershift.openshift.io/v1beta1/namespaces/e2e-clusters-42tkq/nodepools/node-pool-576lg-test-additional-trust-bundle-propagation?timeout=5m0s": dial tcp 52.2.147.43:6443: connect: connection refused - error from a previous attempt: http2: client connection lost
eventually.go:104: Failed to get *v1beta1.NodePool: Get "https://a80dbb95897334d959afdb8a435afe53-cf3797a37e99bd7b.elb.us-east-1.amazonaws.com:6443/apis/hypershift.openshift.io/v1beta1/namespaces/e2e-clusters-42tkq/nodepools/node-pool-576lg-test-additional-trust-bundle-propagation?timeout=5m0s": dial tcp 52.2.147.43:6443: i/o timeout
eventually.go:104: Failed to get *v1beta1.NodePool: Get "https://a80dbb95897334d959afdb8a435afe53-cf3797a37e99bd7b.elb.us-east-1.amazonaws.com:6443/apis/hypershift.openshift.io/v1beta1/namespaces/e2e-clusters-42tkq/nodepools/node-pool-576lg-test-additional-trust-bundle-propagation?timeout=5m0s": net/http: TLS handshake timeout
eventually.go:104: Failed to get *v1beta1.NodePool: Get "https://a80dbb95897334d959afdb8a435afe53-cf3797a37e99bd7b.elb.us-east-1.amazonaws.com:6443/apis/hypershift.openshift.io/v1beta1/namespaces/e2e-clusters-42tkq/nodepools/node-pool-576lg-test-additional-trust-bundle-propagation?timeout=5m0s": dial tcp 52.2.147.43:6443: i/o timeout
eventually.go:104: Failed to get *v1beta1.NodePool: Get "https://a80dbb95897334d959afdb8a435afe53-cf3797a37e99bd7b.elb.us-east-1.amazonaws.com:6443/apis/hypershift.openshift.io/v1beta1/namespaces/e2e-clusters-42tkq/nodepools/node-pool-576lg-test-additional-trust-bundle-propagation?timeout=5m0s": dial tcp 52.2.147.43:6443: connect: connection refused
eventually.go:104: Failed to get *v1beta1.NodePool: the server was unable to return a response in the time allotted, but may still be processing the request (get nodepools.hypershift.openshift.io node-pool-576lg-test-additional-trust-bundle-propagation)
eventually.go:104: Failed to get *v1beta1.NodePool: Get "https://a80dbb95897334d959afdb8a435afe53-cf3797a37e99bd7b.elb.us-east-1.amazonaws.com:6443/apis/hypershift.openshift.io/v1beta1/namespaces/e2e-clusters-42tkq/nodepools/node-pool-576lg-test-additional-trust-bundle-propagation?timeout=5m0s": dial tcp 52.2.147.43:6443: connect: connection refused - error from a previous attempt: http2: client connection lost
eventually.go:104: Failed to get *v1beta1.NodePool: Get "https://a80dbb95897334d959afdb8a435afe53-cf3797a37e99bd7b.elb.us-east-1.amazonaws.com:6443/apis/hypershift.openshift.io/v1beta1/namespaces/e2e-clusters-42tkq/nodepools/node-pool-576lg-test-additional-trust-bundle-propagation?timeout=5m0s": dial tcp 52.2.147.43:6443: connect: connection refused
nodepool_additionalTrustBundlePropagation_test.go:96: Failed to wait for Waiting for NodePool e2e-clusters-42tkq/node-pool-576lg-test-additional-trust-bundle-propagation to stop updating in 20m0s: context deadline exceeded
eventually.go:224: observed *v1beta1.NodePool e2e-clusters-42tkq/node-pool-576lg-test-additional-trust-bundle-propagation invalid at RV 121358 after 20m0s: incorrect condition: wanted AllNodesHealthy=True, got AllNodesHealthy=False: NodeProvisioning(Machine node-pool-576lg-test-additional-trust-bundle-propagation-q58dcr: NodeProvisioning )
nodepool_additionalTrustBundlePropagation_test.go:96: *v1beta1.NodePool e2e-clusters-42tkq/node-pool-576lg-test-additional-trust-bundle-propagation conditions:
nodepool_additionalTrustBundlePropagation_test.go:96: AutoscalingEnabled=False: AsExpected
nodepool_additionalTrustBundlePropagation_test.go:96: UpdateManagementEnabled=True: AsExpected
nodepool_additionalTrustBundlePropagation_test.go:96: ValidReleaseImage=True: AsExpected(Using release image: registry.build01.ci.openshift.org/ci-op-sywg36rm/release@sha256:643a64a0185fa472f38e4c86ec45376ed35cb713b5aab3a6152c3a9130d8fc14)
nodepool_additionalTrustBundlePropagation_test.go:96: ValidArchPlatform=True: AsExpected
nodepool_additionalTrustBundlePropagation_test.go:96: ReconciliationActive=True: ReconciliationActive(Reconciliation active on resource)
nodepool_additionalTrustBundlePropagation_test.go:96: SupportedVersionSkew=True: AsExpected(Release image version is valid)
nodepool_additionalTrustBundlePropagation_test.go:96: ValidMachineConfig=True: AsExpected
nodepool_additionalTrustBundlePropagation_test.go:96: UpdatingConfig=False: AsExpected
nodepool_additionalTrustBundlePropagation_test.go:96: UpdatingVersion=False: AsExpected
nodepool_additionalTrustBundlePropagation_test.go:96: ValidGeneratedPayload=True: AsExpected(Payload generated successfully)
nodepool_additionalTrustBundlePropagation_test.go:96: ReachedIgnitionEndpoint=True: AsExpected
nodepool_additionalTrustBundlePropagation_test.go:96: AllMachinesReady=True: AsExpected(All is well)
nodepool_additionalTrustBundlePropagation_test.go:96: AllNodesHealthy=False: NodeProvisioning(Machine node-pool-576lg-test-additional-trust-bundle-propagation-q58dcr: NodeProvisioning )
nodepool_additionalTrustBundlePropagation_test.go:96: ValidPlatformConfig=True: AsExpected(All is well)
nodepool_additionalTrustBundlePropagation_test.go:96: ValidPlatformImage=True: AsExpected(Bootstrap AMI is "ami-01095d1967818437c")
nodepool_additionalTrustBundlePropagation_test.go:96: AWSSecurityGroupAvailable=True: AsExpected(NodePool has a default security group)
nodepool_additionalTrustBundlePropagation_test.go:96: ValidTuningConfig=True: AsExpected
nodepool_additionalTrustBundlePropagation_test.go:96: UpdatingPlatformMachineTemplate=False: AsExpected
nodepool_additionalTrustBundlePropagation_test.go:96: AutorepairEnabled=False: AsExpected
nodepool_additionalTrustBundlePropagation_test.go:96: Ready=True: AsExpected
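For context on the step at nodepool_additionalTrustBundlePropagation_test.go:74: HyperShift propagates an extra CA bundle by having the HostedCluster reference a ConfigMap, after which the NodePool reports an in-progress update until the bundle reaches the nodes. A minimal sketch of that update, assuming spec.additionalTrustBundle takes a ConfigMap name reference; the ConfigMap name matches the log, while the kubeconfig path is illustrative:

    package main

    import (
        "context"
        "fmt"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/runtime/schema"
        "k8s.io/apimachinery/pkg/types"
        "k8s.io/client-go/dynamic"
        "k8s.io/client-go/tools/clientcmd"
    )

    var hostedClusterGVR = schema.GroupVersionResource{
        Group:    "hypershift.openshift.io",
        Version:  "v1beta1",
        Resource: "hostedclusters",
    }

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/tmp/mgmt-kubeconfig") // illustrative
        if err != nil {
            panic(err)
        }
        client := dynamic.NewForConfigOrDie(cfg)

        // Point the HostedCluster at a ConfigMap holding the extra CA bundle; the
        // ConfigMap name matches the one in the log ("additional-trust-bundle").
        patch := []byte(`{"spec":{"additionalTrustBundle":{"name":"additional-trust-bundle"}}}`)
        _, err = client.Resource(hostedClusterGVR).Namespace("e2e-clusters-42tkq").
            Patch(context.Background(), "node-pool-576lg", types.MergePatchType, patch, metav1.PatchOptions{})
        if err != nil {
            panic(err)
        }
        fmt.Println("patched; the NodePool should report an in-progress config update until the bundle rolls out")
    }

In this run the patch itself went through (the NodePool began updating within 10.025s); the failure is the rollout never converging, with one machine stuck in NodeProvisioning.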
TestNodePoolAutoscalingScaleFromZero
43m38.89s
hypershift_framework.go:459: Successfully created hostedcluster e2e-clusters-92tcr/scale-from-zero-8qdt8 in 19s
hypershift_framework.go:128: Summarizing unexpected conditions for HostedCluster scale-from-zero-8qdt8
util.go:2974: Successfully waited for HostedCluster e2e-clusters-92tcr/scale-from-zero-8qdt8 to have valid conditions in 0s
hypershift_framework.go:278: skipping postTeardown()
hypershift_framework.go:256: skipping teardown, already called
TestNodePoolAutoscalingScaleFromZero/ValidateHostedCluster
30m40.48s
util.go:291: Successfully waited for kubeconfig to be published for HostedCluster e2e-clusters-92tcr/scale-from-zero-8qdt8 in 54s
util.go:308: Successfully waited for kubeconfig secret to have data in 0s
eventually.go:104: Failed to get *v1.SelfSubjectReview: Post "https://api-scale-from-zero-8qdt8.service.ci.hypershift.devcluster.openshift.com:443/apis/authentication.k8s.io/v1/selfsubjectreviews": dial tcp: lookup api-scale-from-zero-8qdt8.service.ci.hypershift.devcluster.openshift.com on 172.30.0.10:53: no such host
eventually.go:104: Failed to get *v1.SelfSubjectReview: Post "https://api-scale-from-zero-8qdt8.service.ci.hypershift.devcluster.openshift.com:443/apis/authentication.k8s.io/v1/selfsubjectreviews": dial tcp 98.90.46.144:443: i/o timeout
util.go:370: Successfully waited for a successful connection to the guest API server in 2m8.125s
util.go:565: Successfully waited for 1 nodes to become ready in 11m39s
eventually.go:104: Failed to get *v1beta1.HostedCluster: Get "https://a80dbb95897334d959afdb8a435afe53-cf3797a37e99bd7b.elb.us-east-1.amazonaws.com:6443/apis/hypershift.openshift.io/v1beta1/namespaces/e2e-clusters-92tcr/hostedclusters/scale-from-zero-8qdt8?timeout=5m0s": dial tcp 52.2.147.43:6443: i/o timeout - error from a previous attempt: http2: client connection lost
eventually.go:104: Failed to get *v1beta1.HostedCluster: Get "https://a80dbb95897334d959afdb8a435afe53-cf3797a37e99bd7b.elb.us-east-1.amazonaws.com:6443/apis/hypershift.openshift.io/v1beta1/namespaces/e2e-clusters-92tcr/hostedclusters/scale-from-zero-8qdt8?timeout=5m0s": dial tcp 52.2.147.43:6443: i/o timeout
eventually.go:104: Failed to get *v1beta1.HostedCluster: Get "https://a80dbb95897334d959afdb8a435afe53-cf3797a37e99bd7b.elb.us-east-1.amazonaws.com:6443/apis/hypershift.openshift.io/v1beta1/namespaces/e2e-clusters-92tcr/hostedclusters/scale-from-zero-8qdt8?timeout=5m0s": dial tcp 52.2.147.43:6443: connect: connection refused
eventually.go:104: Failed to get *v1beta1.HostedCluster: Get "https://a80dbb95897334d959afdb8a435afe53-cf3797a37e99bd7b.elb.us-east-1.amazonaws.com:6443/apis/hypershift.openshift.io/v1beta1/namespaces/e2e-clusters-92tcr/hostedclusters/scale-from-zero-8qdt8?timeout=5m0s": dial tcp 52.2.147.43:6443: i/o timeout
eventually.go:104: Failed to get *v1beta1.HostedCluster: Get "https://a80dbb95897334d959afdb8a435afe53-cf3797a37e99bd7b.elb.us-east-1.amazonaws.com:6443/apis/hypershift.openshift.io/v1beta1/namespaces/e2e-clusters-92tcr/hostedclusters/scale-from-zero-8qdt8?timeout=5m0s": dial tcp 52.2.147.43:6443: connect: connection refused
eventually.go:104: Failed to get *v1beta1.HostedCluster: Get "https://a80dbb95897334d959afdb8a435afe53-cf3797a37e99bd7b.elb.us-east-1.amazonaws.com:6443/apis/hypershift.openshift.io/v1beta1/namespaces/e2e-clusters-92tcr/hostedclusters/scale-from-zero-8qdt8?timeout=5m0s": dial tcp 52.2.147.43:6443: i/o timeout
eventually.go:104: Failed to get *v1beta1.HostedCluster: Get "https://a80dbb95897334d959afdb8a435afe53-cf3797a37e99bd7b.elb.us-east-1.amazonaws.com:6443/apis/hypershift.openshift.io/v1beta1/namespaces/e2e-clusters-92tcr/hostedclusters/scale-from-zero-8qdt8?timeout=5m0s": dial tcp 52.2.147.43:6443: connect: connection refused
eventually.go:104: Failed to get *v1beta1.HostedCluster: Get "https://a80dbb95897334d959afdb8a435afe53-cf3797a37e99bd7b.elb.us-east-1.amazonaws.com:6443/apis/hypershift.openshift.io/v1beta1/namespaces/e2e-clusters-92tcr/hostedclusters/scale-from-zero-8qdt8?timeout=5m0s": dial tcp 52.2.147.43:6443: i/o timeout
eventually.go:104: Failed to get *v1beta1.HostedCluster: Get "https://a80dbb95897334d959afdb8a435afe53-cf3797a37e99bd7b.elb.us-east-1.amazonaws.com:6443/apis/hypershift.openshift.io/v1beta1/namespaces/e2e-clusters-92tcr/hostedclusters/scale-from-zero-8qdt8?timeout=5m0s": dial tcp 52.2.147.43:6443: connect: connection refused
eventually.go:104: Failed to get *v1beta1.HostedCluster: the server was unable to return a response in the time allotted, but may still be processing the request (get hostedclusters.hypershift.openshift.io scale-from-zero-8qdt8)
util.go:598: Successfully waited for HostedCluster e2e-clusters-92tcr/scale-from-zero-8qdt8 to rollout in 14m21s
util.go:2974: Successfully waited for HostedCluster e2e-clusters-92tcr/scale-from-zero-8qdt8 to have valid conditions in 1m33s
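For context on what this test exercises: a scale-from-zero NodePool replaces a fixed replica count with autoscaling bounds, so the cluster autoscaler can create machines on demand. A sketch of that configuration via a merge patch; the NodePool name, kubeconfig path, and min/max bounds are illustrative assumptions (the log does not show the values the test uses, nor whether this release accepts a minimum of 0):

    package main

    import (
        "context"
        "fmt"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/runtime/schema"
        "k8s.io/apimachinery/pkg/types"
        "k8s.io/client-go/dynamic"
        "k8s.io/client-go/tools/clientcmd"
    )

    var nodePoolGVR = schema.GroupVersionResource{
        Group:    "hypershift.openshift.io",
        Version:  "v1beta1",
        Resource: "nodepools",
    }

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/tmp/mgmt-kubeconfig") // illustrative
        if err != nil {
            panic(err)
        }
        client := dynamic.NewForConfigOrDie(cfg)

        // Swap the fixed replica count for autoscaling bounds; values are illustrative.
        patch := []byte(`{"spec":{"replicas":null,"autoScaling":{"min":0,"max":2}}}`)
        _, err = client.Resource(nodePoolGVR).Namespace("e2e-clusters-92tcr").
            Patch(context.Background(), "scale-from-zero-8qdt8", types.MergePatchType, patch, metav1.PatchOptions{})
        if err != nil {
            panic(err)
        }
        fmt.Println("autoscaling enabled; pending pods should now trigger a scale-up from zero")
    }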
TestNodePoolAutoscalingScaleFromZero/ValidateHostedCluster/EnsureNoCrashingPods
70ms
util.go:780: Container aws-ebs-csi-driver-operator in pod aws-ebs-csi-driver-operator-b67d9b489-5wmnx has a restartCount > 0 (5)
util.go:780: Container manager in pod capi-provider-7c67b77575-w4p7v has a restartCount > 0 (9)
util.go:780: Container manager in pod cluster-api-7467ccdb5c-xlr2c has a restartCount > 0 (6)
util.go:780: Container cluster-storage-operator in pod cluster-storage-operator-d6f9f4c85-6gdl9 has a restartCount > 0 (5)
util.go:780: Container control-plane-operator in pod control-plane-operator-75b694cf8c-m94cq has a restartCount > 0 (11)
util.go:780: Container control-plane-pki-operator in pod control-plane-pki-operator-68bb55c7cc-g8zcq has a restartCount > 0 (4)
util.go:780: Container csi-snapshot-controller-operator in pod csi-snapshot-controller-operator-678b747676-grkm6 has a restartCount > 0 (5)
util.go:780: Container hosted-cluster-config-operator in pod hosted-cluster-config-operator-76db97f8-5ddhc has a restartCount > 0 (7)
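The check behind these util.go:780 lines is easy to re-create: list every pod in the hosted control-plane namespace and flag any container with a non-zero restart count. A sketch; the namespace below follows HyperShift's usual <HostedCluster namespace>-<name> convention and the kubeconfig path is illustrative:

    package main

    import (
        "context"
        "fmt"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/tmp/mgmt-kubeconfig") // illustrative
        if err != nil {
            panic(err)
        }
        clientset := kubernetes.NewForConfigOrDie(cfg)

        // Assumed control-plane namespace, per the usual naming convention.
        ns := "e2e-clusters-92tcr-scale-from-zero-8qdt8"
        pods, err := clientset.CoreV1().Pods(ns).List(context.Background(), metav1.ListOptions{})
        if err != nil {
            panic(err)
        }
        for _, pod := range pods.Items {
            for _, cs := range pod.Status.ContainerStatuses {
                if cs.RestartCount > 0 {
                    fmt.Printf("Container %s in pod %s has a restartCount > 0 (%d)\n",
                        cs.Name, pod.Name, cs.RestartCount)
                }
            }
        }
    }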
TestUpgradeControlPlane
54m29.28s
control_plane_upgrade_test.go:25: Starting control plane upgrade test. FromImage: registry.build01.ci.openshift.org/ci-op-sywg36rm/release@sha256:8bacab7e1e3dac992c35519ca1f92c971b13b7d0477c895b0b95628cd818b043, toImage: registry.build01.ci.openshift.org/ci-op-sywg36rm/release@sha256:643a64a0185fa472f38e4c86ec45376ed35cb713b5aab3a6152c3a9130d8fc14
hypershift_framework.go:459: Successfully created hostedcluster e2e-clusters-f8jbb/control-plane-upgrade-5zf72 in 45s
hypershift_framework.go:128: Summarizing unexpected conditions for HostedCluster control-plane-upgrade-5zf72
util.go:2974: Successfully waited for HostedCluster e2e-clusters-f8jbb/control-plane-upgrade-5zf72 to have valid conditions in 0s
hypershift_framework.go:278: skipping postTeardown()
hypershift_framework.go:256: skipping teardown, already called
TestUpgradeControlPlane/ValidateHostedCluster
46m5.51s
util.go:291: Successfully waited for kubeconfig to be published for HostedCluster e2e-clusters-f8jbb/control-plane-upgrade-5zf72 in 1m21.025s
util.go:308: Successfully waited for kubeconfig secret to have data in 25ms
eventually.go:104: Failed to get *v1.SelfSubjectReview: Post "https://api-control-plane-upgrade-5zf72.service.ci.hypershift.devcluster.openshift.com:443/apis/authentication.k8s.io/v1/selfsubjectreviews": dial tcp: lookup api-control-plane-upgrade-5zf72.service.ci.hypershift.devcluster.openshift.com on 172.30.0.10:53: no such host
eventually.go:104: Failed to get *v1.SelfSubjectReview: Post "https://api-control-plane-upgrade-5zf72.service.ci.hypershift.devcluster.openshift.com:443/apis/authentication.k8s.io/v1/selfsubjectreviews": dial tcp 54.144.187.66:443: i/o timeout
eventually.go:104: Failed to get *v1.SelfSubjectReview: Post "https://api-control-plane-upgrade-5zf72.service.ci.hypershift.devcluster.openshift.com:443/apis/authentication.k8s.io/v1/selfsubjectreviews": dial tcp 54.144.187.66:443: connect: connection refused
eventually.go:104: Failed to get *v1.SelfSubjectReview: Post "https://api-control-plane-upgrade-5zf72.service.ci.hypershift.devcluster.openshift.com:443/apis/authentication.k8s.io/v1/selfsubjectreviews": dial tcp 100.50.86.121:443: i/o timeout
eventually.go:104: Failed to get *v1.SelfSubjectReview: Post "https://api-control-plane-upgrade-5zf72.service.ci.hypershift.devcluster.openshift.com:443/apis/authentication.k8s.io/v1/selfsubjectreviews": dial tcp 3.218.191.38:443: i/o timeout
eventually.go:104: Failed to get *v1.SelfSubjectReview: Post "https://api-control-plane-upgrade-5zf72.service.ci.hypershift.devcluster.openshift.com:443/apis/authentication.k8s.io/v1/selfsubjectreviews": dial tcp 54.144.187.66:443: connect: connection refused
eventually.go:104: Failed to get *v1.SelfSubjectReview: Post "https://api-control-plane-upgrade-5zf72.service.ci.hypershift.devcluster.openshift.com:443/apis/authentication.k8s.io/v1/selfsubjectreviews": dial tcp 3.218.191.38:443: connect: connection refused
eventually.go:104: Failed to get *v1.SelfSubjectReview: Post "https://api-control-plane-upgrade-5zf72.service.ci.hypershift.devcluster.openshift.com:443/apis/authentication.k8s.io/v1/selfsubjectreviews": dial tcp 54.144.187.66:443: connect: connection refused
eventually.go:104: Failed to get *v1.SelfSubjectReview: Post "https://api-control-plane-upgrade-5zf72.service.ci.hypershift.devcluster.openshift.com:443/apis/authentication.k8s.io/v1/selfsubjectreviews": dial tcp 3.218.191.38:443: connect: connection refused
util.go:370: Successfully waited for a successful connection to the guest API server in 2m51.025s
util.go:565: Successfully waited for 2 nodes to become ready in 27m48s
eventually.go:104: Failed to get *v1beta1.HostedCluster: Get "https://a80dbb95897334d959afdb8a435afe53-cf3797a37e99bd7b.elb.us-east-1.amazonaws.com:6443/apis/hypershift.openshift.io/v1beta1/namespaces/e2e-clusters-f8jbb/hostedclusters/control-plane-upgrade-5zf72?timeout=5m0s": dial tcp 52.2.147.43:6443: connect: connection refused - error from a previous attempt: http2: client connection lost
eventually.go:104: Failed to get *v1beta1.HostedCluster: Get "https://a80dbb95897334d959afdb8a435afe53-cf3797a37e99bd7b.elb.us-east-1.amazonaws.com:6443/apis/hypershift.openshift.io/v1beta1/namespaces/e2e-clusters-f8jbb/hostedclusters/control-plane-upgrade-5zf72?timeout=5m0s": dial tcp 52.2.147.43:6443: connect: connection refused
util.go:598: Successfully waited for HostedCluster e2e-clusters-f8jbb/control-plane-upgrade-5zf72 to rollout in 11m45s
util.go:2974: Successfully waited for HostedCluster e2e-clusters-f8jbb/control-plane-upgrade-5zf72 to have valid conditions in 2m15s
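The upgrade itself is driven by pointing the HostedCluster at the new release payload: in HyperShift, updating spec.release.image to the toImage makes the control-plane components roll, which the test then waits out at util.go:598. A sketch of that trigger; the image digest is copied from the log line above, and the kubeconfig path is illustrative:

    package main

    import (
        "context"
        "fmt"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/runtime/schema"
        "k8s.io/apimachinery/pkg/types"
        "k8s.io/client-go/dynamic"
        "k8s.io/client-go/tools/clientcmd"
    )

    var hostedClusterGVR = schema.GroupVersionResource{
        Group:    "hypershift.openshift.io",
        Version:  "v1beta1",
        Resource: "hostedclusters",
    }

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/tmp/mgmt-kubeconfig") // illustrative
        if err != nil {
            panic(err)
        }
        client := dynamic.NewForConfigOrDie(cfg)

        // toImage, copied from the test log.
        const toImage = "registry.build01.ci.openshift.org/ci-op-sywg36rm/release@sha256:643a64a0185fa472f38e4c86ec45376ed35cb713b5aab3a6152c3a9130d8fc14"
        patch := []byte(fmt.Sprintf(`{"spec":{"release":{"image":%q}}}`, toImage))
        _, err = client.Resource(hostedClusterGVR).Namespace("e2e-clusters-f8jbb").
            Patch(context.Background(), "control-plane-upgrade-5zf72", types.MergePatchType, patch, metav1.PatchOptions{})
        if err != nil {
            panic(err)
        }
        fmt.Println("release image updated; the control plane should now roll to the new payload")
    }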
TestUpgradeControlPlane/ValidateHostedCluster/EnsureNoCrashingPods
130ms
util.go:777: Leader election failure detected in container aws-ebs-csi-driver-operator in pod aws-ebs-csi-driver-operator-8d9d5d56f-fzxc8
util.go:777: Leader election failure detected in container manager in pod capi-provider-7f9c85c899-qcnl7
util.go:780: Container manager in pod cluster-api-88d5b56d-9x64q has a restartCount > 0 (6)
util.go:780: Container cluster-storage-operator in pod cluster-storage-operator-6587d959dd-hm8nt has a restartCount > 0 (5)
util.go:777: Leader election failure detected in container control-plane-operator in pod control-plane-operator-6db7ff84f4-96x76
util.go:777: Leader election failure detected in container control-plane-pki-operator in pod control-plane-pki-operator-996b598bd-782lt
util.go:780: Container csi-snapshot-controller-operator in pod csi-snapshot-controller-operator-6d655478b5-kbr2h has a restartCount > 0 (5)
util.go:777: Leader election failure detected in container etcd-defrag in pod etcd-0
util.go:777: Leader election failure detected in container hosted-cluster-config-operator in pod hosted-cluster-config-operator-7876ffbc55-rmqwv
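The "Leader election failure detected" messages at util.go:777 suggest the check scans each container's logs for lease-renewal failures, which is consistent with the API-server outage seen throughout this run. A sketch of such a scan; the control-plane namespace follows the usual naming convention and the marker strings are assumptions (they match what client-go's leaderelection package logs when a leader loses its lease), not the test's exact matcher:

    package main

    import (
        "context"
        "fmt"
        "strings"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/tmp/mgmt-kubeconfig") // illustrative
        if err != nil {
            panic(err)
        }
        clientset := kubernetes.NewForConfigOrDie(cfg)
        ctx := context.Background()

        // Assumed control-plane namespace, per the usual <namespace>-<name> convention.
        ns := "e2e-clusters-f8jbb-control-plane-upgrade-5zf72"
        pods, err := clientset.CoreV1().Pods(ns).List(ctx, metav1.ListOptions{})
        if err != nil {
            panic(err)
        }
        // Assumed marker phrases for a lost lease.
        markers := []string{"leaderelection lost", "failed to renew lease"}
        for _, pod := range pods.Items {
            for _, c := range pod.Spec.Containers {
                raw, err := clientset.CoreV1().Pods(ns).
                    GetLogs(pod.Name, &corev1.PodLogOptions{Container: c.Name}).DoRaw(ctx)
                if err != nil {
                    continue // container may not have produced logs yet
                }
                for _, m := range markers {
                    if strings.Contains(string(raw), m) {
                        fmt.Printf("Leader election failure detected in container %s in pod %s\n",
                            c.Name, pod.Name)
                        break
                    }
                }
            }
        }
    }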