PR #7765 - 02-24 11:00

Job: hypershift
FAILURE

Test Summary

Total: 172 | Passed: 119 | Failed: 33 | Skipped: 20

Failed Tests

TestAutoscaling
45m6.07s
hypershift_framework.go:459: Successfully created hostedcluster e2e-clusters-t99c5/autoscaling-h8rb8 in 1m7s
hypershift_framework.go:128: Summarizing unexpected conditions for HostedCluster autoscaling-h8rb8
util.go:2974: Successfully waited for HostedCluster e2e-clusters-t99c5/autoscaling-h8rb8 to have valid conditions in 0s
hypershift_framework.go:278: skipping postTeardown()
hypershift_framework.go:256: skipping teardown, already called
TestAutoscaling/ValidateHostedCluster
30m29.45s
util.go:291: Successfully waited for kubeconfig to be published for HostedCluster e2e-clusters-t99c5/autoscaling-h8rb8 in 48.025s
util.go:308: Successfully waited for kubeconfig secret to have data in 0s
eventually.go:104: Failed to get *v1.SelfSubjectReview: Post "https://api-autoscaling-h8rb8.service.ci.hypershift.devcluster.openshift.com:443/apis/authentication.k8s.io/v1/selfsubjectreviews": dial tcp: lookup api-autoscaling-h8rb8.service.ci.hypershift.devcluster.openshift.com on 172.30.0.10:53: no such host
eventually.go:104: Failed to get *v1.SelfSubjectReview: Post "https://api-autoscaling-h8rb8.service.ci.hypershift.devcluster.openshift.com:443/apis/authentication.k8s.io/v1/selfsubjectreviews": dial tcp 3.228.55.35:443: i/o timeout
eventually.go:104: Failed to get *v1.SelfSubjectReview: Post "https://api-autoscaling-h8rb8.service.ci.hypershift.devcluster.openshift.com:443/apis/authentication.k8s.io/v1/selfsubjectreviews": dial tcp 100.50.120.76:443: i/o timeout
util.go:370: Successfully waited for a successful connection to the guest API server in 2m15.025s
util.go:565: Successfully waited for 1 nodes to become ready in 8m30s
eventually.go:104: Failed to get *v1beta1.HostedCluster: Get "https://a0f65a5e3168c45d3bccafa01f167595-6b085146cbab10e6.elb.us-east-1.amazonaws.com:6443/apis/hypershift.openshift.io/v1beta1/namespaces/e2e-clusters-t99c5/hostedclusters/autoscaling-h8rb8?timeout=5m0s": net/http: TLS handshake timeout - error from a previous attempt: http2: client connection lost
eventually.go:104: Failed to get *v1beta1.HostedCluster: Get "https://a0f65a5e3168c45d3bccafa01f167595-6b085146cbab10e6.elb.us-east-1.amazonaws.com:6443/apis/hypershift.openshift.io/v1beta1/namespaces/e2e-clusters-t99c5/hostedclusters/autoscaling-h8rb8?timeout=5m0s": dial tcp 18.210.239.191:6443: i/o timeout
eventually.go:104: Failed to get *v1beta1.HostedCluster: Get "https://a0f65a5e3168c45d3bccafa01f167595-6b085146cbab10e6.elb.us-east-1.amazonaws.com:6443/apis/hypershift.openshift.io/v1beta1/namespaces/e2e-clusters-t99c5/hostedclusters/autoscaling-h8rb8?timeout=5m0s": dial tcp 18.210.239.191:6443: connect: connection refused
eventually.go:104: Failed to get *v1beta1.HostedCluster: Get "https://a0f65a5e3168c45d3bccafa01f167595-6b085146cbab10e6.elb.us-east-1.amazonaws.com:6443/apis/hypershift.openshift.io/v1beta1/namespaces/e2e-clusters-t99c5/hostedclusters/autoscaling-h8rb8?timeout=5m0s": dial tcp 18.210.239.191:6443: i/o timeout
eventually.go:104: Failed to get *v1beta1.HostedCluster: Get "https://a0f65a5e3168c45d3bccafa01f167595-6b085146cbab10e6.elb.us-east-1.amazonaws.com:6443/apis/hypershift.openshift.io/v1beta1/namespaces/e2e-clusters-t99c5/hostedclusters/autoscaling-h8rb8?timeout=5m0s": dial tcp 18.210.239.191:6443: connect: connection refused
eventually.go:104: Failed to get *v1beta1.HostedCluster: Get "https://a0f65a5e3168c45d3bccafa01f167595-6b085146cbab10e6.elb.us-east-1.amazonaws.com:6443/apis/hypershift.openshift.io/v1beta1/namespaces/e2e-clusters-t99c5/hostedclusters/autoscaling-h8rb8?timeout=5m0s": dial tcp 18.210.239.191:6443: i/o timeout
eventually.go:104: Failed to get *v1beta1.HostedCluster: Get "https://a0f65a5e3168c45d3bccafa01f167595-6b085146cbab10e6.elb.us-east-1.amazonaws.com:6443/apis/hypershift.openshift.io/v1beta1/namespaces/e2e-clusters-t99c5/hostedclusters/autoscaling-h8rb8?timeout=5m0s": dial tcp 18.210.239.191:6443: connect: connection refused
eventually.go:104: Failed to get *v1beta1.HostedCluster: Get "https://a0f65a5e3168c45d3bccafa01f167595-6b085146cbab10e6.elb.us-east-1.amazonaws.com:6443/apis/hypershift.openshift.io/v1beta1/namespaces/e2e-clusters-t99c5/hostedclusters/autoscaling-h8rb8?timeout=5m0s": dial tcp 18.210.239.191:6443: i/o timeout
eventually.go:104: Failed to get *v1beta1.HostedCluster: the server was unable to return a response in the time allotted, but may still be processing the request (get hostedclusters.hypershift.openshift.io autoscaling-h8rb8)
util.go:598: Successfully waited for HostedCluster e2e-clusters-t99c5/autoscaling-h8rb8 to rollout in 18m51s
util.go:2974: Successfully waited for HostedCluster e2e-clusters-t99c5/autoscaling-h8rb8 to have valid conditions in 0s
TestAutoscaling/ValidateHostedCluster/EnsureNoCrashingPods
80ms
util.go:780: Container aws-ebs-csi-driver-operator in pod aws-ebs-csi-driver-operator-86d56b688c-z2tnk has a restartCount > 0 (6)
util.go:780: Container manager in pod capi-provider-5cb9b9cf9b-294gc has a restartCount > 0 (8)
util.go:780: Container manager in pod cluster-api-5498c856bc-h9p27 has a restartCount > 0 (8)
util.go:780: Container cluster-storage-operator in pod cluster-storage-operator-55459bc799-94fh5 has a restartCount > 0 (6)
util.go:780: Container control-plane-operator in pod control-plane-operator-6fb5dfc756-pwf6t has a restartCount > 0 (7)
util.go:780: Container control-plane-pki-operator in pod control-plane-pki-operator-6f876cf7d6-gxvsl has a restartCount > 0 (4)
util.go:780: Container csi-snapshot-controller-operator in pod csi-snapshot-controller-operator-656d99745d-qx228 has a restartCount > 0 (6)
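Note: EnsureNoCrashingPods evidently lists the pods in the hosted control plane namespace and flags any container that has restarted. A sketch of that check with client-go (the real util.go also streams previous container logs and special-cases leader-election failures, per the entries elsewhere in this report):

```go
// Sketch: flag any container in the given namespace with restartCount > 0,
// mirroring the EnsureNoCrashingPods assertions above.
package e2eutil

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

func crashingPodViolations(ctx context.Context, mgmt kubernetes.Interface, namespace string) ([]string, error) {
	pods, err := mgmt.CoreV1().Pods(namespace).List(ctx, metav1.ListOptions{})
	if err != nil {
		return nil, err
	}
	var violations []string
	for _, pod := range pods.Items {
		for _, cs := range pod.Status.ContainerStatuses {
			if cs.RestartCount > 0 {
				violations = append(violations, fmt.Sprintf(
					"Container %s in pod %s has a restartCount > 0 (%d)",
					cs.Name, pod.Name, cs.RestartCount))
			}
		}
	}
	return violations, nil
}
```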
TestCreateCluster
51m56.45s
create_cluster_test.go:2624: Sufficient zones available for InfrastructureAvailabilityPolicy HighlyAvailable
hypershift_framework.go:459: Successfully created hostedcluster e2e-clusters-4qhhf/create-cluster-c9wk8 in 4m50s
hypershift_framework.go:128: Summarizing unexpected conditions for HostedCluster create-cluster-c9wk8
util.go:2974: Successfully waited for HostedCluster e2e-clusters-4qhhf/create-cluster-c9wk8 to have valid conditions in 0s
hypershift_framework.go:278: skipping postTeardown()
hypershift_framework.go:256: skipping teardown, already called
TestCreateCluster/ValidateHostedCluster
34m51.37s
util.go:291: Successfully waited for kubeconfig to be published for HostedCluster e2e-clusters-4qhhf/create-cluster-c9wk8 in 1m6s
util.go:308: Successfully waited for kubeconfig secret to have data in 0s
eventually.go:104: Failed to get *v1.SelfSubjectReview: Post "https://api-create-cluster-c9wk8.service.ci.hypershift.devcluster.openshift.com:443/apis/authentication.k8s.io/v1/selfsubjectreviews": dial tcp: lookup api-create-cluster-c9wk8.service.ci.hypershift.devcluster.openshift.com on 172.30.0.10:53: no such host
eventually.go:104: Failed to get *v1.SelfSubjectReview: Post "https://api-create-cluster-c9wk8.service.ci.hypershift.devcluster.openshift.com:443/apis/authentication.k8s.io/v1/selfsubjectreviews": dial tcp 52.71.82.64:443: i/o timeout
eventually.go:104: Failed to get *v1.SelfSubjectReview: Post "https://api-create-cluster-c9wk8.service.ci.hypershift.devcluster.openshift.com:443/apis/authentication.k8s.io/v1/selfsubjectreviews": dial tcp 100.49.54.181:443: i/o timeout
util.go:370: Successfully waited for a successful connection to the guest API server in 2m22.025s
util.go:565: Successfully waited for 3 nodes to become ready in 22m24s
util.go:598: Successfully waited for HostedCluster e2e-clusters-4qhhf/create-cluster-c9wk8 to rollout in 8m54.025s
util.go:2974: Successfully waited for HostedCluster e2e-clusters-4qhhf/create-cluster-c9wk8 to have valid conditions in 0s
TestCreateCluster/ValidateHostedCluster/EnsureNoCrashingPods
70ms
util.go:780: Container manager in pod capi-provider-f4cb86d6d-7hmnv has a restartCount > 0 (4)
util.go:777: Leader election failure detected in container control-plane-operator in pod control-plane-operator-79d4c84969-qx49w
util.go:777: Leader election failure detected in container hosted-cluster-config-operator in pod hosted-cluster-config-operator-6cd484cfd8-qk5cz
TestCreateClusterCustomConfig
45m6.02s
hypershift_framework.go:459: Successfully created hostedcluster e2e-clusters-2668d/custom-config-nsfqj in 41s
hypershift_framework.go:128: Summarizing unexpected conditions for HostedCluster custom-config-nsfqj
util.go:2974: Successfully waited for HostedCluster e2e-clusters-2668d/custom-config-nsfqj to have valid conditions in 0s
hypershift_framework.go:278: skipping postTeardown()
hypershift_framework.go:256: skipping teardown, already called
TestCreateClusterCustomConfig/ValidateHostedCluster
36m39.29s
util.go:291: Successfully waited for kubeconfig to be published for HostedCluster e2e-clusters-2668d/custom-config-nsfqj in 1m3s
util.go:308: Successfully waited for kubeconfig secret to have data in 0s
eventually.go:104: Failed to get *v1.SelfSubjectReview: Post "https://api-custom-config-nsfqj.service.ci.hypershift.devcluster.openshift.com:443/apis/authentication.k8s.io/v1/selfsubjectreviews": dial tcp: lookup api-custom-config-nsfqj.service.ci.hypershift.devcluster.openshift.com on 172.30.0.10:53: no such host
eventually.go:104: Failed to get *v1.SelfSubjectReview: Post "https://api-custom-config-nsfqj.service.ci.hypershift.devcluster.openshift.com:443/apis/authentication.k8s.io/v1/selfsubjectreviews": dial tcp 54.173.123.11:443: i/o timeout
eventually.go:104: Failed to get *v1.SelfSubjectReview: Post "https://api-custom-config-nsfqj.service.ci.hypershift.devcluster.openshift.com:443/apis/authentication.k8s.io/v1/selfsubjectreviews": dial tcp 52.203.253.204:443: i/o timeout
util.go:370: Successfully waited for a successful connection to the guest API server in 2m48.125s
util.go:565: Successfully waited for 2 nodes to become ready in 8m48s
eventually.go:104: Failed to get *v1beta1.HostedCluster: Get "https://a0f65a5e3168c45d3bccafa01f167595-6b085146cbab10e6.elb.us-east-1.amazonaws.com:6443/apis/hypershift.openshift.io/v1beta1/namespaces/e2e-clusters-2668d/hostedclusters/custom-config-nsfqj?timeout=5m0s": dial tcp 18.210.239.191:6443: i/o timeout - error from a previous attempt: http2: client connection lost
eventually.go:104: Failed to get *v1beta1.HostedCluster: Get "https://a0f65a5e3168c45d3bccafa01f167595-6b085146cbab10e6.elb.us-east-1.amazonaws.com:6443/apis/hypershift.openshift.io/v1beta1/namespaces/e2e-clusters-2668d/hostedclusters/custom-config-nsfqj?timeout=5m0s": dial tcp 18.210.239.191:6443: i/o timeout
eventually.go:104: Failed to get *v1beta1.HostedCluster: Get "https://a0f65a5e3168c45d3bccafa01f167595-6b085146cbab10e6.elb.us-east-1.amazonaws.com:6443/apis/hypershift.openshift.io/v1beta1/namespaces/e2e-clusters-2668d/hostedclusters/custom-config-nsfqj?timeout=5m0s": dial tcp 18.210.239.191:6443: connect: connection refused
eventually.go:104: Failed to get *v1beta1.HostedCluster: Get "https://a0f65a5e3168c45d3bccafa01f167595-6b085146cbab10e6.elb.us-east-1.amazonaws.com:6443/apis/hypershift.openshift.io/v1beta1/namespaces/e2e-clusters-2668d/hostedclusters/custom-config-nsfqj?timeout=5m0s": dial tcp 18.210.239.191:6443: i/o timeout
eventually.go:104: Failed to get *v1beta1.HostedCluster: Get "https://a0f65a5e3168c45d3bccafa01f167595-6b085146cbab10e6.elb.us-east-1.amazonaws.com:6443/apis/hypershift.openshift.io/v1beta1/namespaces/e2e-clusters-2668d/hostedclusters/custom-config-nsfqj?timeout=5m0s": dial tcp 18.210.239.191:6443: connect: connection refused
eventually.go:104: Failed to get *v1beta1.HostedCluster: Get "https://a0f65a5e3168c45d3bccafa01f167595-6b085146cbab10e6.elb.us-east-1.amazonaws.com:6443/apis/hypershift.openshift.io/v1beta1/namespaces/e2e-clusters-2668d/hostedclusters/custom-config-nsfqj?timeout=5m0s": dial tcp 18.210.239.191:6443: i/o timeout
eventually.go:104: Failed to get *v1beta1.HostedCluster: Get "https://a0f65a5e3168c45d3bccafa01f167595-6b085146cbab10e6.elb.us-east-1.amazonaws.com:6443/apis/hypershift.openshift.io/v1beta1/namespaces/e2e-clusters-2668d/hostedclusters/custom-config-nsfqj?timeout=5m0s": dial tcp 18.210.239.191:6443: connect: connection refused
eventually.go:104: Failed to get *v1beta1.HostedCluster: the server was unable to return a response in the time allotted, but may still be processing the request (get hostedclusters.hypershift.openshift.io custom-config-nsfqj)
util.go:598: Successfully waited for HostedCluster e2e-clusters-2668d/custom-config-nsfqj to rollout in 19m10.5s
util.go:2974: Successfully waited for HostedCluster e2e-clusters-2668d/custom-config-nsfqj to have valid conditions in 125ms
TestCreateClusterCustomConfig/ValidateHostedCluster/EnsureGuestWebhooksValidated
1m0.03s
util.go:1870: failed to ensure guest webhooks validated, violating webhook test-webhook was not deleted: context deadline exceeded
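Note: this subtest creates a deliberately violating webhook named test-webhook in the guest cluster and expects it to be removed before the deadline; here the one-minute context expired first. A sketch of the deletion wait (the webhook name and timeout come from the log; the helper itself is an assumption):

```go
// Sketch: wait until the violating ValidatingWebhookConfiguration is gone,
// e.g. waitForWebhookDeleted(ctx, guestClient, "test-webhook").
package e2eutil

import (
	"context"
	"time"

	apierrors "k8s.io/apimachinery/pkg/api/errors"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

func waitForWebhookDeleted(ctx context.Context, guest kubernetes.Interface, name string) error {
	return wait.PollUntilContextTimeout(ctx, 5*time.Second, time.Minute, true,
		func(ctx context.Context) (bool, error) {
			_, err := guest.AdmissionregistrationV1().ValidatingWebhookConfigurations().
				Get(ctx, name, metav1.GetOptions{})
			if apierrors.IsNotFound(err) {
				return true, nil // cleaned up as expected
			}
			return false, nil // still present (or transient error): retry
		})
}
```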
TestCreateClusterCustomConfig/ValidateHostedCluster/EnsureNoCrashingPods
1m49.16s
util.go:796: couldn't stream pod log; pod namespace: e2e-clusters-2668d-custom-config-nsfqj, pod name: aws-ebs-csi-driver-operator-76644df779-ptr8j, error: Get "https://10.0.33.111:10250/containerLogs/e2e-clusters-2668d-custom-config-nsfqj/aws-ebs-csi-driver-operator-76644df779-ptr8j/aws-ebs-csi-driver-operator?previous=true&tailLines=10": http2: client connection lost
util.go:780: Container aws-ebs-csi-driver-operator in pod aws-ebs-csi-driver-operator-76644df779-ptr8j has a restartCount > 0 (6)
util.go:777: Leader election failure detected in container manager in pod capi-provider-6ff6ddcd55-dtxcx
util.go:780: Container manager in pod cluster-api-6c5ddbf5-rtmfv has a restartCount > 0 (6)
util.go:780: Container cluster-storage-operator in pod cluster-storage-operator-84b7c85c69-ms99m has a restartCount > 0 (5)
util.go:777: Leader election failure detected in container control-plane-operator in pod control-plane-operator-5ddd45f95c-jtft8
util.go:780: Container control-plane-pki-operator in pod control-plane-pki-operator-67497df95d-mhzw4 has a restartCount > 0 (4)
util.go:780: Container csi-snapshot-controller-operator in pod csi-snapshot-controller-operator-854bc946cb-qgxfg has a restartCount > 0 (5)
util.go:777: Leader election failure detected in container hosted-cluster-config-operator in pod hosted-cluster-config-operator-79bf649589-6k8ss
TestCreateClusterPrivate
44m8.13s
hypershift_framework.go:459: Successfully created hostedcluster e2e-clusters-dgdds/private-26k9x in 30s
hypershift_framework.go:128: Summarizing unexpected conditions for HostedCluster private-26k9x
util.go:2974: Failed to wait for HostedCluster e2e-clusters-dgdds/private-26k9x to have valid conditions in 2s: context deadline exceeded
eventually.go:224: observed *v1beta1.HostedCluster e2e-clusters-dgdds/private-26k9x invalid at RV 60514 after 2s: incorrect condition: wanted AWSEndpointAvailable=True, got AWSEndpointAvailable=False: AWSError(Throttling; Throttling)
hypershift_framework.go:278: skipping postTeardown()
hypershift_framework.go:256: skipping teardown, already called
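Note: the failing check here compares status conditions on the HostedCluster; AWSEndpointAvailable stayed False because AWS API calls were being throttled. Assuming the v1beta1 API exposes standard metav1.Condition entries, the comparison can be sketched with the apimachinery helper (condition type string taken from the log):

```go
// Sketch: read the AWSEndpointAvailable condition off a HostedCluster's
// status.conditions (assumed to be []metav1.Condition in the v1beta1 API).
package e2eutil

import (
	"fmt"

	"k8s.io/apimachinery/pkg/api/meta"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func awsEndpointAvailable(conditions []metav1.Condition) (bool, string) {
	cond := meta.FindStatusCondition(conditions, "AWSEndpointAvailable")
	if cond == nil {
		return false, "condition not reported yet"
	}
	// On this run the result would read: AWSError(Throttling; Throttling)
	return cond.Status == metav1.ConditionTrue, fmt.Sprintf("%s(%s)", cond.Reason, cond.Message)
}
```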
TestCreateClusterPrivate/ValidateHostedCluster
27m21.12s
util.go:291: Successfully waited for kubeconfig to be published for HostedCluster e2e-clusters-dgdds/private-26k9x in 1m3s
util.go:308: Successfully waited for kubeconfig secret to have data in 0s
eventually.go:258: Failed to get **v1beta1.NodePool: Get "https://a0f65a5e3168c45d3bccafa01f167595-6b085146cbab10e6.elb.us-east-1.amazonaws.com:6443/apis/hypershift.openshift.io/v1beta1/namespaces/e2e-clusters-dgdds/nodepools?timeout=5m0s": stream error: stream ID 3; INTERNAL_ERROR; received from peer - error from a previous attempt: http2: client connection lost
eventually.go:258: Failed to get **v1beta1.NodePool: Get "https://a0f65a5e3168c45d3bccafa01f167595-6b085146cbab10e6.elb.us-east-1.amazonaws.com:6443/apis/hypershift.openshift.io/v1beta1/namespaces/e2e-clusters-dgdds/nodepools?timeout=5m0s": dial tcp 18.210.239.191:6443: connect: connection refused - error from a previous attempt: read tcp 10.130.233.51:58234->18.210.239.191:6443: read: connection reset by peer
eventually.go:258: Failed to get **v1beta1.NodePool: Get "https://a0f65a5e3168c45d3bccafa01f167595-6b085146cbab10e6.elb.us-east-1.amazonaws.com:6443/apis/hypershift.openshift.io/v1beta1/namespaces/e2e-clusters-dgdds/nodepools?timeout=5m0s": dial tcp 18.210.239.191:6443: connect: connection refused
util.go:695: Successfully waited for NodePools for HostedCluster e2e-clusters-dgdds/private-26k9x to have all of their desired nodes in 12m51s
util.go:598: Successfully waited for HostedCluster e2e-clusters-dgdds/private-26k9x to rollout in 3m27s
eventually.go:104: Failed to get *v1beta1.HostedCluster: Get "https://a0f65a5e3168c45d3bccafa01f167595-6b085146cbab10e6.elb.us-east-1.amazonaws.com:6443/apis/hypershift.openshift.io/v1beta1/namespaces/e2e-clusters-dgdds/hostedclusters/private-26k9x?timeout=5m0s": net/http: TLS handshake timeout - error from a previous attempt: http2: client connection lost
eventually.go:104: Failed to get *v1beta1.HostedCluster: Get "https://a0f65a5e3168c45d3bccafa01f167595-6b085146cbab10e6.elb.us-east-1.amazonaws.com:6443/apis/hypershift.openshift.io/v1beta1/namespaces/e2e-clusters-dgdds/hostedclusters/private-26k9x?timeout=5m0s": dial tcp 18.210.239.191:6443: i/o timeout
eventually.go:104: Failed to get *v1beta1.HostedCluster: Get "https://a0f65a5e3168c45d3bccafa01f167595-6b085146cbab10e6.elb.us-east-1.amazonaws.com:6443/apis/hypershift.openshift.io/v1beta1/namespaces/e2e-clusters-dgdds/hostedclusters/private-26k9x?timeout=5m0s": dial tcp 18.210.239.191:6443: connect: connection refused
eventually.go:104: Failed to get *v1beta1.HostedCluster: Get "https://a0f65a5e3168c45d3bccafa01f167595-6b085146cbab10e6.elb.us-east-1.amazonaws.com:6443/apis/hypershift.openshift.io/v1beta1/namespaces/e2e-clusters-dgdds/hostedclusters/private-26k9x?timeout=5m0s": dial tcp 18.210.239.191:6443: i/o timeout
eventually.go:104: Failed to get *v1beta1.HostedCluster: Get "https://a0f65a5e3168c45d3bccafa01f167595-6b085146cbab10e6.elb.us-east-1.amazonaws.com:6443/apis/hypershift.openshift.io/v1beta1/namespaces/e2e-clusters-dgdds/hostedclusters/private-26k9x?timeout=5m0s": dial tcp 18.210.239.191:6443: connect: connection refused
eventually.go:104: Failed to get *v1beta1.HostedCluster: Get "https://a0f65a5e3168c45d3bccafa01f167595-6b085146cbab10e6.elb.us-east-1.amazonaws.com:6443/apis/hypershift.openshift.io/v1beta1/namespaces/e2e-clusters-dgdds/hostedclusters/private-26k9x?timeout=5m0s": dial tcp 18.210.239.191:6443: i/o timeout
eventually.go:104: Failed to get *v1beta1.HostedCluster: Get "https://a0f65a5e3168c45d3bccafa01f167595-6b085146cbab10e6.elb.us-east-1.amazonaws.com:6443/apis/hypershift.openshift.io/v1beta1/namespaces/e2e-clusters-dgdds/hostedclusters/private-26k9x?timeout=5m0s": dial tcp 18.210.239.191:6443: connect: connection refused
eventually.go:104: Failed to get *v1beta1.HostedCluster: Get "https://a0f65a5e3168c45d3bccafa01f167595-6b085146cbab10e6.elb.us-east-1.amazonaws.com:6443/apis/hypershift.openshift.io/v1beta1/namespaces/e2e-clusters-dgdds/hostedclusters/private-26k9x?timeout=5m0s": dial tcp 18.210.239.191:6443: i/o timeout
eventually.go:104: Failed to get *v1beta1.HostedCluster: Get "https://a0f65a5e3168c45d3bccafa01f167595-6b085146cbab10e6.elb.us-east-1.amazonaws.com:6443/apis/hypershift.openshift.io/v1beta1/namespaces/e2e-clusters-dgdds/hostedclusters/private-26k9x?timeout=5m0s": dial tcp 18.210.239.191:6443: connect: connection refused
eventually.go:104: Failed to get *v1beta1.HostedCluster: the server was unable to return a response in the time allotted, but may still be processing the request (get hostedclusters.hypershift.openshift.io private-26k9x)
util.go:2974: Failed to wait for HostedCluster e2e-clusters-dgdds/private-26k9x to have valid conditions in 10m0s: context deadline exceeded
eventually.go:224: observed *v1beta1.HostedCluster e2e-clusters-dgdds/private-26k9x invalid at RV 60514 after 10m0s: incorrect condition: wanted AWSEndpointAvailable=True, got AWSEndpointAvailable=False: AWSError(Throttling; Throttling)
TestCreateClusterPrivateWithRouteKAS
28m31.12s
hypershift_framework.go:447: failed to create cluster, tearing down: failed to create IAM: failed to discover OIDC bucket configuration: failed to get the kube-public/oidc-storage-provider-s3-config configmap: the server was unable to return a response in the time allotted, but may still be processing the request (get configmaps oidc-storage-provider-s3-config)
fixture.go:395: Failed saving machine console logs; this is nonfatal: failed to get machine console logs: failed to get hostedcluster: hostedclusters.hypershift.openshift.io "private-r9vbb" not found
fixture.go:403: Failed to dump machine journals; this is nonfatal: no SSH secret specified for cluster, cannot dump journals
fixture.go:341: SUCCESS: found no remaining guest resources
hypershift_framework.go:520: Destroyed cluster. Namespace: e2e-clusters-7pgqt, name: private-r9vbb
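Note: cluster creation never got started here; IAM setup first discovers the OIDC S3 bucket by reading a configmap on the management cluster, and that read timed out. A sketch of the lookup (the configmap namespace and name are from the log; the "name"/"region" data keys are my assumption, not confirmed by the source):

```go
// Sketch: discover the OIDC bucket from the management-cluster configmap
// named in the log above. Data keys are assumed, not confirmed.
package e2eutil

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

func oidcBucketConfig(ctx context.Context, mgmt kubernetes.Interface) (bucket, region string, err error) {
	cm, err := mgmt.CoreV1().ConfigMaps("kube-public").
		Get(ctx, "oidc-storage-provider-s3-config", metav1.GetOptions{})
	if err != nil {
		return "", "", fmt.Errorf("failed to discover OIDC bucket configuration: %w", err)
	}
	return cm.Data["name"], cm.Data["region"], nil
}
```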
TestCreateClusterProxy
1h21m24.96s
hypershift_framework.go:459: Successfully created hostedcluster e2e-clusters-57s9h/proxy-89jn7 in 1m2s
hypershift_framework.go:128: Summarizing unexpected conditions for HostedCluster proxy-89jn7
util.go:2974: Successfully waited for HostedCluster e2e-clusters-57s9h/proxy-89jn7 to have valid conditions in 0s
hypershift_framework.go:278: skipping postTeardown()
hypershift_framework.go:256: skipping teardown, already called
TestCreateClusterProxy/ValidateHostedCluster
29m24.49s
util.go:291: Successfully waited for kubeconfig to be published for HostedCluster e2e-clusters-57s9h/proxy-89jn7 in 51s
util.go:308: Successfully waited for kubeconfig secret to have data in 0s
eventually.go:104: Failed to get *v1.SelfSubjectReview: Post "https://api-proxy-89jn7.service.ci.hypershift.devcluster.openshift.com:443/apis/authentication.k8s.io/v1/selfsubjectreviews": dial tcp: lookup api-proxy-89jn7.service.ci.hypershift.devcluster.openshift.com on 172.30.0.10:53: no such host
eventually.go:104: Failed to get *v1.SelfSubjectReview: Post "https://api-proxy-89jn7.service.ci.hypershift.devcluster.openshift.com:443/apis/authentication.k8s.io/v1/selfsubjectreviews": dial tcp 3.220.26.44:443: i/o timeout
eventually.go:104: Failed to get *v1.SelfSubjectReview: Post "https://api-proxy-89jn7.service.ci.hypershift.devcluster.openshift.com:443/apis/authentication.k8s.io/v1/selfsubjectreviews": dial tcp 3.210.36.61:443: i/o timeout
util.go:370: Successfully waited for a successful connection to the guest API server in 2m28.15s
util.go:565: Successfully waited for 2 nodes to become ready in 9m9s
eventually.go:104: Failed to get *v1beta1.HostedCluster: Get "https://a0f65a5e3168c45d3bccafa01f167595-6b085146cbab10e6.elb.us-east-1.amazonaws.com:6443/apis/hypershift.openshift.io/v1beta1/namespaces/e2e-clusters-57s9h/hostedclusters/proxy-89jn7?timeout=5m0s": net/http: TLS handshake timeout - error from a previous attempt: http2: client connection lost
eventually.go:104: Failed to get *v1beta1.HostedCluster: Get "https://a0f65a5e3168c45d3bccafa01f167595-6b085146cbab10e6.elb.us-east-1.amazonaws.com:6443/apis/hypershift.openshift.io/v1beta1/namespaces/e2e-clusters-57s9h/hostedclusters/proxy-89jn7?timeout=5m0s": dial tcp 18.210.239.191:6443: connect: connection refused
eventually.go:104: Failed to get *v1beta1.HostedCluster: Get "https://a0f65a5e3168c45d3bccafa01f167595-6b085146cbab10e6.elb.us-east-1.amazonaws.com:6443/apis/hypershift.openshift.io/v1beta1/namespaces/e2e-clusters-57s9h/hostedclusters/proxy-89jn7?timeout=5m0s": net/http: TLS handshake timeout
eventually.go:104: Failed to get *v1beta1.HostedCluster: Get "https://a0f65a5e3168c45d3bccafa01f167595-6b085146cbab10e6.elb.us-east-1.amazonaws.com:6443/apis/hypershift.openshift.io/v1beta1/namespaces/e2e-clusters-57s9h/hostedclusters/proxy-89jn7?timeout=5m0s": dial tcp 18.210.239.191:6443: connect: connection refused
eventually.go:104: Failed to get *v1beta1.HostedCluster: Get "https://a0f65a5e3168c45d3bccafa01f167595-6b085146cbab10e6.elb.us-east-1.amazonaws.com:6443/apis/hypershift.openshift.io/v1beta1/namespaces/e2e-clusters-57s9h/hostedclusters/proxy-89jn7?timeout=5m0s": dial tcp 18.210.239.191:6443: i/o timeout
eventually.go:104: Failed to get *v1beta1.HostedCluster: Get "https://a0f65a5e3168c45d3bccafa01f167595-6b085146cbab10e6.elb.us-east-1.amazonaws.com:6443/apis/hypershift.openshift.io/v1beta1/namespaces/e2e-clusters-57s9h/hostedclusters/proxy-89jn7?timeout=5m0s": dial tcp 18.210.239.191:6443: connect: connection refused
eventually.go:104: Failed to get *v1beta1.HostedCluster: Get "https://a0f65a5e3168c45d3bccafa01f167595-6b085146cbab10e6.elb.us-east-1.amazonaws.com:6443/apis/hypershift.openshift.io/v1beta1/namespaces/e2e-clusters-57s9h/hostedclusters/proxy-89jn7?timeout=5m0s": dial tcp 18.210.239.191:6443: i/o timeout
eventually.go:104: Failed to get *v1beta1.HostedCluster: Get "https://a0f65a5e3168c45d3bccafa01f167595-6b085146cbab10e6.elb.us-east-1.amazonaws.com:6443/apis/hypershift.openshift.io/v1beta1/namespaces/e2e-clusters-57s9h/hostedclusters/proxy-89jn7?timeout=5m0s": dial tcp 18.210.239.191:6443: connect: connection refused
eventually.go:104: Failed to get *v1beta1.HostedCluster: Get "https://a0f65a5e3168c45d3bccafa01f167595-6b085146cbab10e6.elb.us-east-1.amazonaws.com:6443/apis/hypershift.openshift.io/v1beta1/namespaces/e2e-clusters-57s9h/hostedclusters/proxy-89jn7?timeout=5m0s": dial tcp 18.210.239.191:6443: i/o timeout
eventually.go:104: Failed to get *v1beta1.HostedCluster: Get "https://a0f65a5e3168c45d3bccafa01f167595-6b085146cbab10e6.elb.us-east-1.amazonaws.com:6443/apis/hypershift.openshift.io/v1beta1/namespaces/e2e-clusters-57s9h/hostedclusters/proxy-89jn7?timeout=5m0s": dial tcp 18.210.239.191:6443: connect: connection refused
eventually.go:104: Failed to get *v1beta1.HostedCluster: the server was unable to return a response in the time allotted, but may still be processing the request (get hostedclusters.hypershift.openshift.io proxy-89jn7)
util.go:598: Successfully waited for HostedCluster e2e-clusters-57s9h/proxy-89jn7 to rollout in 15m6.025s
util.go:2974: Successfully waited for HostedCluster e2e-clusters-57s9h/proxy-89jn7 to have valid conditions in 1m45s
TestCreateClusterProxy/ValidateHostedCluster/EnsureNoCrashingPods
100ms
util.go:780: Container aws-ebs-csi-driver-operator in pod aws-ebs-csi-driver-operator-7bfdcfbcd9-x669r has a restartCount > 0 (6)
util.go:780: Container manager in pod capi-provider-575cb5c6d9-4x5j6 has a restartCount > 0 (7)
util.go:780: Container manager in pod cluster-api-fdbb7b459-gw5dp has a restartCount > 0 (6)
util.go:780: Container cluster-storage-operator in pod cluster-storage-operator-67855f84c-8dhrn has a restartCount > 0 (5)
util.go:780: Container control-plane-operator in pod control-plane-operator-d8c577475-4v46b has a restartCount > 0 (7)
util.go:780: Container control-plane-pki-operator in pod control-plane-pki-operator-f9f5875fd-5j6rg has a restartCount > 0 (4)
util.go:780: Container csi-snapshot-controller-operator in pod csi-snapshot-controller-operator-85cffccd-t85k6 has a restartCount > 0 (6)
TestCreateClusterRequestServingIsolation
28m32.43s
requestserving.go:105: Created request serving nodepool clusters/5cf40f658fd9e880b319-mgmt-reqserving-9hkgs
requestserving.go:105: Created request serving nodepool clusters/5cf40f658fd9e880b319-mgmt-reqserving-jlwjl
requestserving.go:113: Created non request serving nodepool clusters/5cf40f658fd9e880b319-mgmt-non-reqserving-44pfh
requestserving.go:113: Created non request serving nodepool clusters/5cf40f658fd9e880b319-mgmt-non-reqserving-v4m6f
requestserving.go:113: Created non request serving nodepool clusters/5cf40f658fd9e880b319-mgmt-non-reqserving-7kfl4
util.go:565: Successfully waited for 1 nodes to become ready for NodePool clusters/5cf40f658fd9e880b319-mgmt-reqserving-9hkgs in 4m36s
util.go:565: Successfully waited for 1 nodes to become ready for NodePool clusters/5cf40f658fd9e880b319-mgmt-reqserving-jlwjl in 100ms
util.go:565: Successfully waited for 1 nodes to become ready for NodePool clusters/5cf40f658fd9e880b319-mgmt-non-reqserving-44pfh in 3.025s
util.go:565: Successfully waited for 1 nodes to become ready for NodePool clusters/5cf40f658fd9e880b319-mgmt-non-reqserving-v4m6f in 100ms
eventually.go:258: Failed to get **v1.Node: Get "https://a0f65a5e3168c45d3bccafa01f167595-6b085146cbab10e6.elb.us-east-1.amazonaws.com:6443/api/v1/nodes?labelSelector=hypershift.openshift.io%2FnodePool%3D5cf40f658fd9e880b319-mgmt-non-reqserving-7kfl4&timeout=5m0s": dial tcp 18.210.239.191:6443: connect: connection refused - error from a previous attempt: http2: client connection lost
eventually.go:258: Failed to get **v1.Node: Get "https://a0f65a5e3168c45d3bccafa01f167595-6b085146cbab10e6.elb.us-east-1.amazonaws.com:6443/api/v1/nodes?labelSelector=hypershift.openshift.io%2FnodePool%3D5cf40f658fd9e880b319-mgmt-non-reqserving-7kfl4&timeout=5m0s": dial tcp 18.210.239.191:6443: connect: connection refused
eventually.go:258: Failed to get **v1.Node: Get "https://a0f65a5e3168c45d3bccafa01f167595-6b085146cbab10e6.elb.us-east-1.amazonaws.com:6443/api/v1/nodes?labelSelector=hypershift.openshift.io%2FnodePool%3D5cf40f658fd9e880b319-mgmt-non-reqserving-7kfl4&timeout=5m0s": dial tcp 18.210.239.191:6443: connect: connection refused - error from a previous attempt: read tcp 10.130.233.51:58234->18.210.239.191:6443: read: connection reset by peer
eventually.go:258: Failed to get **v1.Node: Get "https://a0f65a5e3168c45d3bccafa01f167595-6b085146cbab10e6.elb.us-east-1.amazonaws.com:6443/api/v1/nodes?labelSelector=hypershift.openshift.io%2FnodePool%3D5cf40f658fd9e880b319-mgmt-non-reqserving-7kfl4&timeout=5m0s": dial tcp 18.210.239.191:6443: connect: connection refused
eventually.go:258: Failed to get **v1.Node: Get "https://a0f65a5e3168c45d3bccafa01f167595-6b085146cbab10e6.elb.us-east-1.amazonaws.com:6443/api/v1/nodes?labelSelector=hypershift.openshift.io%2FnodePool%3D5cf40f658fd9e880b319-mgmt-non-reqserving-7kfl4&timeout=5m0s": dial tcp 18.210.239.191:6443: i/o timeout
eventually.go:258: Failed to get **v1.Node: Get "https://a0f65a5e3168c45d3bccafa01f167595-6b085146cbab10e6.elb.us-east-1.amazonaws.com:6443/api/v1/nodes?labelSelector=hypershift.openshift.io%2FnodePool%3D5cf40f658fd9e880b319-mgmt-non-reqserving-7kfl4&timeout=5m0s": dial tcp 18.210.239.191:6443: connect: connection refused
util.go:565: Successfully waited for 1 nodes to become ready for NodePool clusters/5cf40f658fd9e880b319-mgmt-non-reqserving-7kfl4 in 10m36s
create_cluster_test.go:2803: Sufficient zones available for InfrastructureAvailabilityPolicy HighlyAvailable
hypershift_framework.go:447: failed to create cluster, tearing down: failed to create IAM: failed to discover OIDC bucket configuration: failed to get the kube-public/oidc-storage-provider-s3-config configmap: failed to get API group resources: unable to retrieve the complete list of server APIs: v1: Get "https://a0f65a5e3168c45d3bccafa01f167595-6b085146cbab10e6.elb.us-east-1.amazonaws.com:6443/api/v1": dial tcp 18.210.239.191:6443: connect: connection refused
requestserving.go:132: Tearing down custom nodepool clusters/5cf40f658fd9e880b319-mgmt-reqserving-9hkgs
requestserving.go:132: Tearing down custom nodepool clusters/5cf40f658fd9e880b319-mgmt-reqserving-jlwjl
requestserving.go:132: Tearing down custom nodepool clusters/5cf40f658fd9e880b319-mgmt-non-reqserving-44pfh
requestserving.go:132: Tearing down custom nodepool clusters/5cf40f658fd9e880b319-mgmt-non-reqserving-v4m6f
requestserving.go:132: Tearing down custom nodepool clusters/5cf40f658fd9e880b319-mgmt-non-reqserving-7kfl4
fixture.go:395: Failed saving machine console logs; this is nonfatal: failed to get machine console logs: failed to get hostedcluster: the server was unable to return a response in the time allotted, but may still be processing the request (get hostedclusters.hypershift.openshift.io request-serving-isolation-444zk)
fixture.go:403: Failed to dump machine journals; this is nonfatal: no SSH secret specified for cluster, cannot dump journals
fixture.go:341: SUCCESS: found no remaining guest resources
hypershift_framework.go:520: Destroyed cluster. Namespace: e2e-clusters-hq4fc, name: request-serving-isolation-444zk
TestNodePool
0s
TestNodePool/HostedCluster0
1h42m43.35s
hypershift_framework.go:459: Successfully created hostedcluster e2e-clusters-bhwj5/node-pool-tf87z in 25s
hypershift_framework.go:128: Summarizing unexpected conditions for HostedCluster node-pool-tf87z
util.go:2974: Failed to wait for HostedCluster e2e-clusters-bhwj5/node-pool-tf87z to have valid conditions in 2s: context deadline exceeded
eventually.go:224: observed *v1beta1.HostedCluster e2e-clusters-bhwj5/node-pool-tf87z invalid at RV 97529 after 2s:
eventually.go:227: - incorrect condition: wanted ClusterVersionProgressing=True, got ClusterVersionProgressing=False: FromClusterVersion(Cluster version is 4.22.0-0.ci-2026-02-24-112317-test-ci-op-3sg259px-latest)
eventually.go:227: - incorrect condition: wanted ClusterVersionSucceeding=False, got ClusterVersionSucceeding=True: FromClusterVersion
eventually.go:227: - incorrect condition: wanted DataPlaneConnectionAvailable=Unknown, got DataPlaneConnectionAvailable=True: AsExpected(All is well)
eventually.go:227: - incorrect condition: wanted ClusterVersionAvailable=False, got ClusterVersionAvailable=True: FromClusterVersion(Done applying 4.22.0-0.ci-2026-02-24-112317-test-ci-op-3sg259px-latest)
hypershift_framework.go:278: skipping postTeardown()
hypershift_framework.go:256: skipping teardown, already called
TestNodePool/HostedCluster0/Main
20ms
util.go:291: Successfully waited for kubeconfig to be published for HostedCluster e2e-clusters-bhwj5/node-pool-tf87z in 0s
util.go:308: Successfully waited for kubeconfig secret to have data in 0s
util.go:370: Successfully waited for a successful connection to the guest API server in 0s
TestNodePool/HostedCluster0/Main/TestNTOMachineConfigAppliedInPlace
28m23.22s
nodepool_nto_machineconfig_test.go:67: Starting test NTOMachineConfigRolloutTest
util.go:565: Successfully waited for 2 nodes to become ready for NodePool e2e-clusters-bhwj5/node-pool-tf87z-test-ntomachineconfig-inplace in 18m12s
nodepool_test.go:404: Successfully waited for NodePool e2e-clusters-bhwj5/node-pool-tf87z-test-ntomachineconfig-inplace to have correct status in 0s
util.go:481: Successfully waited for NodePool e2e-clusters-bhwj5/node-pool-tf87z-test-ntomachineconfig-inplace to start config update in 16s
util.go:497: Successfully waited for NodePool e2e-clusters-bhwj5/node-pool-tf87z-test-ntomachineconfig-inplace to finish config update in 9m40s
nodepool_machineconfig_test.go:165: Successfully waited for all pods in the DaemonSet kube-system/node-pool-tf87z-test-ntomachineconfig-inplace to be ready in 15s
util.go:565: Successfully waited for 2 nodes to become ready for NodePool e2e-clusters-bhwj5/node-pool-tf87z-test-ntomachineconfig-inplace in 0s
nodepool_test.go:404: Successfully waited for NodePool e2e-clusters-bhwj5/node-pool-tf87z-test-ntomachineconfig-inplace to have correct status in 0s
TestNodePool/HostedCluster0/Main/TestNTOMachineConfigAppliedInPlace/EnsureNoCrashingPods
90ms
util.go:780: Container aws-ebs-csi-driver-operator in pod aws-ebs-csi-driver-operator-5fbd7467d9-zkrm9 has a restartCount > 0 (6)
util.go:777: Leader election failure detected in container manager in pod capi-provider-7759d869f8-dpf4x
util.go:780: Container manager in pod cluster-api-5865484479-6p7tn has a restartCount > 0 (8)
util.go:780: Container cluster-storage-operator in pod cluster-storage-operator-6bb677db98-lspfm has a restartCount > 0 (6)
util.go:777: Leader election failure detected in container control-plane-operator in pod control-plane-operator-689788999f-pqjfn
util.go:780: Container control-plane-pki-operator in pod control-plane-pki-operator-bcc5d89dd-sjvwj has a restartCount > 0 (4)
util.go:780: Container csi-snapshot-controller-operator in pod csi-snapshot-controller-operator-868f948c6f-pz2zm has a restartCount > 0 (6)
util.go:777: Leader election failure detected in container hosted-cluster-config-operator in pod hosted-cluster-config-operator-55fbf578d8-g9mth
TestNodePool/HostedCluster0/Main/TestNTOMachineConfigGetsRolledOut
28m43.15s
nodepool_nto_machineconfig_test.go:67: Starting test NTOMachineConfigRolloutTest
util.go:565: Successfully waited for 2 nodes to become ready for NodePool e2e-clusters-bhwj5/node-pool-tf87z-test-ntomachineconfig-replace in 17m39.025s
nodepool_test.go:404: Successfully waited for NodePool e2e-clusters-bhwj5/node-pool-tf87z-test-ntomachineconfig-replace to have correct status in 8.45s
util.go:481: Successfully waited for NodePool e2e-clusters-bhwj5/node-pool-tf87z-test-ntomachineconfig-replace to start config update in 15s
util.go:497: Successfully waited for NodePool e2e-clusters-bhwj5/node-pool-tf87z-test-ntomachineconfig-replace to finish config update in 10m40s
nodepool_machineconfig_test.go:165: Successfully waited for all pods in the DaemonSet kube-system/node-pool-tf87z-test-ntomachineconfig-replace to be ready in 0s
util.go:565: Successfully waited for 2 nodes to become ready for NodePool e2e-clusters-bhwj5/node-pool-tf87z-test-ntomachineconfig-replace in 0s
nodepool_test.go:404: Successfully waited for NodePool e2e-clusters-bhwj5/node-pool-tf87z-test-ntomachineconfig-replace to have correct status in 0s
TestNodePool/HostedCluster0/Main/TestNTOMachineConfigGetsRolledOut/EnsureNoCrashingPods
80ms
util.go:780: Container aws-ebs-csi-driver-operator in pod aws-ebs-csi-driver-operator-5fbd7467d9-zkrm9 has a restartCount > 0 (6)
util.go:777: Leader election failure detected in container manager in pod capi-provider-7759d869f8-dpf4x
util.go:780: Container manager in pod cluster-api-5865484479-6p7tn has a restartCount > 0 (8)
util.go:780: Container cluster-storage-operator in pod cluster-storage-operator-6bb677db98-lspfm has a restartCount > 0 (6)
util.go:777: Leader election failure detected in container control-plane-operator in pod control-plane-operator-689788999f-pqjfn
util.go:780: Container control-plane-pki-operator in pod control-plane-pki-operator-bcc5d89dd-sjvwj has a restartCount > 0 (4)
util.go:780: Container csi-snapshot-controller-operator in pod csi-snapshot-controller-operator-868f948c6f-pz2zm has a restartCount > 0 (6)
util.go:777: Leader election failure detected in container hosted-cluster-config-operator in pod hosted-cluster-config-operator-55fbf578d8-g9mth
TestNodePool/HostedCluster0/Main/TestNodePoolAutoRepair
45m0.01s
util.go:565: Failed to wait for 1 nodes to become ready for NodePool e2e-clusters-bhwj5/node-pool-tf87z-test-autorepair in 45m0s: context deadline exceeded
eventually.go:383: observed invalid **v1.Node state after 45m0s
eventually.go:400: - observed **v1.Node collection invalid: expected 1 nodes, got 0
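Note: the auto-repair NodePool never produced a node; the wait counts ready guest nodes carrying the NodePool label (visible in the node-list URLs elsewhere in this run) and expected 1, got 0. A sketch of that count (the helper name is mine):

```go
// Sketch: count ready guest nodes belonging to a NodePool, selected by the
// hypershift.openshift.io/nodePool label seen in the node-list requests.
package e2eutil

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

func readyNodeCount(ctx context.Context, guest kubernetes.Interface, nodePool string) (int, error) {
	nodes, err := guest.CoreV1().Nodes().List(ctx, metav1.ListOptions{
		LabelSelector: "hypershift.openshift.io/nodePool=" + nodePool,
	})
	if err != nil {
		return 0, err
	}
	ready := 0
	for _, node := range nodes.Items {
		for _, cond := range node.Status.Conditions {
			if cond.Type == corev1.NodeReady && cond.Status == corev1.ConditionTrue {
				ready++
			}
		}
	}
	return ready, nil
}
```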
TestNodePool/HostedCluster0/Main/TestNodepoolMachineconfigGetsRolledout
51m29.81s
nodepool_machineconfig_test.go:54: Starting test NodePoolMachineconfigRolloutTest
util.go:565: Successfully waited for 1 nodes to become ready for NodePool e2e-clusters-bhwj5/node-pool-tf87z-test-machineconfig in 26m6s
nodepool_test.go:404: Successfully waited for NodePool e2e-clusters-bhwj5/node-pool-tf87z-test-machineconfig to have correct status in 8.525s
util.go:481: Successfully waited for NodePool e2e-clusters-bhwj5/node-pool-tf87z-test-machineconfig to start config update in 15.025s
eventually.go:104: Failed to get *v1beta1.NodePool: client rate limiter Wait returned an error: context deadline exceeded
util.go:497: Failed to wait for NodePool e2e-clusters-bhwj5/node-pool-tf87z-test-machineconfig to finish config update in 25m0s: context deadline exceeded
eventually.go:224: observed *v1beta1.NodePool e2e-clusters-bhwj5/node-pool-tf87z-test-machineconfig invalid at RV 72445 after 25m0s: incorrect condition: wanted UpdatingConfig=False, got UpdatingConfig=True: AsExpected(Updating config in progress. Target config: 3c696a20)
util.go:497: *v1beta1.NodePool e2e-clusters-bhwj5/node-pool-tf87z-test-machineconfig conditions:
util.go:497: AutoscalingEnabled=False: AsExpected
util.go:497: UpdateManagementEnabled=True: AsExpected
util.go:497: ValidReleaseImage=True: AsExpected(Using release image: registry.build01.ci.openshift.org/ci-op-3sg259px/release@sha256:3505fefac8257be59d944a72c810bb2dec8ce0d9b6b8b286d1daac00044623a1)
util.go:497: ValidArchPlatform=True: AsExpected
util.go:497: ReconciliationActive=True: ReconciliationActive(Reconciliation active on resource)
util.go:497: SupportedVersionSkew=True: AsExpected(Release image version is valid)
util.go:497: ValidMachineConfig=True: AsExpected
util.go:497: UpdatingConfig=True: AsExpected(Updating config in progress. Target config: 3c696a20)
util.go:497: UpdatingVersion=False: AsExpected
util.go:497: ValidGeneratedPayload=Unknown(Unable to get status data from token secret)
util.go:497: ReachedIgnitionEndpoint=False: ignitionNotReached
util.go:497: AllMachinesReady=True: AsExpected(All is well)
util.go:497: AllNodesHealthy=False: NodeProvisioning(Machine node-pool-tf87z-test-machineconfig-2sqcv-pp55d: NodeProvisioning )
util.go:497: ValidPlatformConfig=True: AsExpected(All is well)
util.go:497: ValidPlatformImage=True: AsExpected(Bootstrap AMI is "ami-01095d1967818437c")
util.go:497: AWSSecurityGroupAvailable=True: AsExpected(NodePool has a default security group)
util.go:497: ValidTuningConfig=True: AsExpected
util.go:497: UpdatingPlatformMachineTemplate=False: AsExpected
util.go:497: AutorepairEnabled=False: AsExpected
util.go:497: Ready=True: AsExpected
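Note: the rollout stalled with UpdatingConfig=True while one machine sat in NodeProvisioning. For triage outside the test binary, the same conditions can be inspected generically with the dynamic client, without importing HyperShift's Go types (the GVR is spelled out from the API group seen in the request URLs above):

```go
// Sketch: read a named condition off a NodePool via the dynamic client, e.g.
// nodePoolConditionStatus(ctx, dyn, "e2e-clusters-bhwj5",
//     "node-pool-tf87z-test-machineconfig", "UpdatingConfig")
package e2eutil

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/apis/meta/v1/unstructured"
	"k8s.io/apimachinery/pkg/runtime/schema"
	"k8s.io/client-go/dynamic"
)

func nodePoolConditionStatus(ctx context.Context, dyn dynamic.Interface, ns, name, condType string) (string, error) {
	gvr := schema.GroupVersionResource{Group: "hypershift.openshift.io", Version: "v1beta1", Resource: "nodepools"}
	np, err := dyn.Resource(gvr).Namespace(ns).Get(ctx, name, metav1.GetOptions{})
	if err != nil {
		return "", err
	}
	conditions, _, err := unstructured.NestedSlice(np.Object, "status", "conditions")
	if err != nil {
		return "", err
	}
	for _, c := range conditions {
		if cond, ok := c.(map[string]interface{}); ok && cond["type"] == condType {
			status, _ := cond["status"].(string)
			return status, nil
		}
	}
	return "", fmt.Errorf("condition %s not found", condType)
}
```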
TestNodePool/HostedCluster0/Main/TestRollingUpgrade
56m23.3s
util.go:565: Successfully waited for 2 nodes to become ready for NodePool e2e-clusters-bhwj5/node-pool-tf87z-test-rolling-upgrade in 26m9s
nodepool_test.go:404: Successfully waited for NodePool e2e-clusters-bhwj5/node-pool-tf87z-test-rolling-upgrade to have correct status in 8.55s
nodepool_rolling_upgrade_test.go:106: Successfully waited for NodePool e2e-clusters-bhwj5/node-pool-tf87z-test-rolling-upgrade to start the rolling upgrade in 5.525s
eventually.go:104: Failed to get *v1beta1.NodePool: client rate limiter Wait returned an error: context deadline exceeded
nodepool_rolling_upgrade_test.go:120: Failed to wait for NodePool e2e-clusters-bhwj5/node-pool-tf87z-test-rolling-upgrade to finish the rolling upgrade in 30m0s: context deadline exceeded
eventually.go:224: observed *v1beta1.NodePool e2e-clusters-bhwj5/node-pool-tf87z-test-rolling-upgrade invalid at RV 73990 after 30m0s: incorrect condition: wanted UpdatingPlatformMachineTemplate=False, got UpdatingPlatformMachineTemplate=True: AsExpected(platform machine template update in progress. Target template: node-pool-tf87z-test-rolling-upgrade-7945adf2)
nodepool_rolling_upgrade_test.go:120: *v1beta1.NodePool e2e-clusters-bhwj5/node-pool-tf87z-test-rolling-upgrade conditions:
nodepool_rolling_upgrade_test.go:120: AutoscalingEnabled=False: AsExpected
nodepool_rolling_upgrade_test.go:120: UpdateManagementEnabled=True: AsExpected
nodepool_rolling_upgrade_test.go:120: ValidReleaseImage=True: AsExpected(Using release image: registry.build01.ci.openshift.org/ci-op-3sg259px/release@sha256:3505fefac8257be59d944a72c810bb2dec8ce0d9b6b8b286d1daac00044623a1)
nodepool_rolling_upgrade_test.go:120: ValidArchPlatform=True: AsExpected
nodepool_rolling_upgrade_test.go:120: ReconciliationActive=True: ReconciliationActive(Reconciliation active on resource)
nodepool_rolling_upgrade_test.go:120: SupportedVersionSkew=True: AsExpected(Release image version is valid)
nodepool_rolling_upgrade_test.go:120: ValidMachineConfig=True: AsExpected
nodepool_rolling_upgrade_test.go:120: UpdatingConfig=False: AsExpected
nodepool_rolling_upgrade_test.go:120: UpdatingVersion=False: AsExpected
nodepool_rolling_upgrade_test.go:120: ValidGeneratedPayload=True: AsExpected(Payload generated successfully)
nodepool_rolling_upgrade_test.go:120: ReachedIgnitionEndpoint=True: AsExpected
nodepool_rolling_upgrade_test.go:120: AllMachinesReady=True: AsExpected(All is well)
nodepool_rolling_upgrade_test.go:120: AllNodesHealthy=False: NodeProvisioning(Machine node-pool-tf87z-test-rolling-upgrade-dwgjs-gk9hs: NodeProvisioning )
nodepool_rolling_upgrade_test.go:120: ValidPlatformConfig=True: AsExpected(All is well)
nodepool_rolling_upgrade_test.go:120: ValidPlatformImage=True: AsExpected(Bootstrap AMI is "ami-01095d1967818437c")
nodepool_rolling_upgrade_test.go:120: AWSSecurityGroupAvailable=True: AsExpected(NodePool has a default security group)
nodepool_rolling_upgrade_test.go:120: ValidTuningConfig=True: AsExpected
nodepool_rolling_upgrade_test.go:120: UpdatingPlatformMachineTemplate=True: AsExpected(platform machine template update in progress. Target template: node-pool-tf87z-test-rolling-upgrade-7945adf2)
nodepool_rolling_upgrade_test.go:120: AutorepairEnabled=False: AsExpected
nodepool_rolling_upgrade_test.go:120: Ready=True: AsExpected
TestNodePool/HostedCluster0/Teardown
40m11.08s
journals.go:234: Error copying machine journals to artifacts directory: exit status 1 hypershift_framework.go:505: Failed to destroy cluster, will retry: hostedcluster wasn't finalized, aborting delete: context deadline exceeded journals.go:208: No machines associated with infra id node-pool-tf87z were found. Skipping journal dump. fixture.go:321: Failed to wait for infra resources in guest cluster to be deleted: context deadline exceeded fixture.go:330: Failed to clean up 28 remaining resources for guest cluster fixture.go:337: Resource: arn:aws:ec2:us-east-1:820196288204:volume/vol-0cdf48832c571c205, tags: red-hat-clustertype=rosa,MachineName=e2e-clusters-bhwj5-node-pool-tf87z/node-pool-tf87z-test-rolling-upgrade-dwgjs-gk9hs,sigs.k8s.io/cluster-api-provider-aws/cluster/node-pool-tf87z=owned,sigs.k8s.io/cluster-api-provider-aws/role=node,red-hat-managed=true,Name=node-pool-tf87z-test-rolling-upgrade-dwgjs-gk9hs,expirationDate=2026-02-24T15:39+00:00,kubernetes.io/cluster/node-pool-tf87z=owned, service: ec2 fixture.go:337: Resource: arn:aws:ec2:us-east-1:820196288204:volume/vol-04809167cb80626c4, tags: red-hat-clustertype=rosa,MachineName=e2e-clusters-bhwj5-node-pool-tf87z/node-pool-tf87z-test-rolling-upgrade-xhszg-z55gk,sigs.k8s.io/cluster-api-provider-aws/cluster/node-pool-tf87z=owned,sigs.k8s.io/cluster-api-provider-aws/role=node,red-hat-managed=true,Name=node-pool-tf87z-test-rolling-upgrade-xhszg-z55gk,expirationDate=2026-02-24T15:39+00:00,kubernetes.io/cluster/node-pool-tf87z=owned, service: ec2 fixture.go:337: Resource: arn:aws:ec2:us-east-1:820196288204:volume/vol-0e8dcfb5b54fcb259, tags: red-hat-clustertype=rosa,MachineName=e2e-clusters-bhwj5-node-pool-tf87z/node-pool-tf87z-test-kms-root-volume-qjsvj-5g4km,sigs.k8s.io/cluster-api-provider-aws/cluster/node-pool-tf87z=owned,sigs.k8s.io/cluster-api-provider-aws/role=node,red-hat-managed=true,Name=node-pool-tf87z-test-kms-root-volume-qjsvj-5g4km,expirationDate=2026-02-24T15:39+00:00,kubernetes.io/cluster/node-pool-tf87z=owned, service: ec2 fixture.go:337: Resource: arn:aws:ec2:us-east-1:820196288204:volume/vol-023e86cee108b9bf4, tags: red-hat-clustertype=rosa,MachineName=e2e-clusters-bhwj5-node-pool-tf87z/node-pool-tf87z-test-machineconfig-2sqcv-pp55d,sigs.k8s.io/cluster-api-provider-aws/cluster/node-pool-tf87z=owned,sigs.k8s.io/cluster-api-provider-aws/role=node,red-hat-managed=true,Name=node-pool-tf87z-test-machineconfig-2sqcv-pp55d,expirationDate=2026-02-24T15:39+00:00,kubernetes.io/cluster/node-pool-tf87z=owned, service: ec2 fixture.go:337: Resource: arn:aws:ec2:us-east-1:820196288204:volume/vol-05d65c21a56b74b90, tags: red-hat-clustertype=rosa,MachineName=e2e-clusters-bhwj5-node-pool-tf87z/node-pool-tf87z-test-imagetype-b6n7l-2tc27,sigs.k8s.io/cluster-api-provider-aws/cluster/node-pool-tf87z=owned,sigs.k8s.io/cluster-api-provider-aws/role=node,red-hat-managed=true,Name=node-pool-tf87z-test-imagetype-b6n7l-2tc27,expirationDate=2026-02-24T15:39+00:00,kubernetes.io/cluster/node-pool-tf87z=owned, service: ec2 fixture.go:337: Resource: arn:aws:ec2:us-east-1:820196288204:volume/vol-09c97865bc44c2992, tags: red-hat-clustertype=rosa,MachineName=e2e-clusters-bhwj5-node-pool-tf87z/node-pool-tf87z-test-ntomachineconfig-replace-4rqkk-f4h74,sigs.k8s.io/cluster-api-provider-aws/cluster/node-pool-tf87z=owned,sigs.k8s.io/cluster-api-provider-aws/role=node,red-hat-managed=true,Name=node-pool-tf87z-test-ntomachineconfig-replace-4rqkk-f4h74,expirationDate=2026-02-24T15:39+00:00,kubernetes.io/cluster/node-pool-tf87z=owned, service: ec2 fixture.go:337: 
Resource: arn:aws:ec2:us-east-1:820196288204:volume/vol-07317a24fc40d1fa6, tags: red-hat-clustertype=rosa,MachineName=e2e-clusters-bhwj5-node-pool-tf87z/node-pool-tf87z-tth4k-s59x8-xv5q8,sigs.k8s.io/cluster-api-provider-aws/cluster/node-pool-tf87z=owned,sigs.k8s.io/cluster-api-provider-aws/role=node,red-hat-managed=true,Name=node-pool-tf87z-tth4k-s59x8-xv5q8,expirationDate=2026-02-24T15:39+00:00,kubernetes.io/cluster/node-pool-tf87z=owned, service: ec2 fixture.go:337: Resource: arn:aws:ec2:us-east-1:820196288204:volume/vol-0cb4c519e14b0db30, tags: red-hat-clustertype=rosa,MachineName=e2e-clusters-bhwj5-node-pool-tf87z/node-pool-tf87z-test-rolling-upgrade-xhszg-w9nlq,sigs.k8s.io/cluster-api-provider-aws/cluster/node-pool-tf87z=owned,sigs.k8s.io/cluster-api-provider-aws/role=node,red-hat-managed=true,Name=node-pool-tf87z-test-rolling-upgrade-xhszg-w9nlq,expirationDate=2026-02-24T15:39+00:00,kubernetes.io/cluster/node-pool-tf87z=owned, service: ec2 fixture.go:337: Resource: arn:aws:ec2:us-east-1:820196288204:volume/vol-0b62beed2ac6e0db1, tags: red-hat-clustertype=rosa,MachineName=e2e-clusters-bhwj5-node-pool-tf87z/node-pool-tf87z-r4zpt-rff9r-gw2wf,sigs.k8s.io/cluster-api-provider-aws/cluster/node-pool-tf87z=owned,sigs.k8s.io/cluster-api-provider-aws/role=node,red-hat-managed=true,Name=node-pool-tf87z-r4zpt-rff9r-gw2wf,expirationDate=2026-02-24T15:39+00:00,kubernetes.io/cluster/node-pool-tf87z=owned, service: ec2 fixture.go:337: Resource: arn:aws:ec2:us-east-1:820196288204:volume/vol-0195cee83700e4eec, tags: red-hat-clustertype=rosa,MachineName=e2e-clusters-bhwj5-node-pool-tf87z/node-pool-tf87z-test-ntoperformanceprofile-g7knz-tvcwk,sigs.k8s.io/cluster-api-provider-aws/cluster/node-pool-tf87z=owned,sigs.k8s.io/cluster-api-provider-aws/role=node,red-hat-managed=true,Name=node-pool-tf87z-test-ntoperformanceprofile-g7knz-tvcwk,expirationDate=2026-02-24T15:39+00:00,kubernetes.io/cluster/node-pool-tf87z=owned, service: ec2 fixture.go:337: Resource: arn:aws:ec2:us-east-1:820196288204:volume/vol-0baad4aff7c6ba448, tags: aws-node-termination-handler/managed=,red-hat-clustertype=rosa,MachineName=e2e-clusters-bhwj5-node-pool-tf87z/node-pool-tf87z-test-spot-termination-5qxwf-5n5mt,sigs.k8s.io/cluster-api-provider-aws/cluster/node-pool-tf87z=owned,sigs.k8s.io/cluster-api-provider-aws/role=node,red-hat-managed=true,Name=node-pool-tf87z-test-spot-termination-5qxwf-5n5mt,expirationDate=2026-02-24T15:39+00:00,kubernetes.io/cluster/node-pool-tf87z=owned, service: ec2 fixture.go:337: Resource: arn:aws:ec2:us-east-1:820196288204:volume/vol-08b7817163d24ab92, tags: red-hat-clustertype=rosa,MachineName=e2e-clusters-bhwj5-node-pool-tf87z/node-pool-tf87z-test-replaceupgrade-wddkj-969t9,sigs.k8s.io/cluster-api-provider-aws/cluster/node-pool-tf87z=owned,sigs.k8s.io/cluster-api-provider-aws/role=node,red-hat-managed=true,Name=node-pool-tf87z-test-replaceupgrade-wddkj-969t9,expirationDate=2026-02-24T15:39+00:00,kubernetes.io/cluster/node-pool-tf87z=owned, service: ec2 fixture.go:337: Resource: arn:aws:ec2:us-east-1:820196288204:volume/vol-0bd906695d2d1ea67, tags: red-hat-clustertype=rosa,MachineName=e2e-clusters-bhwj5-node-pool-tf87z/node-pool-tf87z-test-day2-tags-9m9wz-8s65c,sigs.k8s.io/cluster-api-provider-aws/cluster/node-pool-tf87z=owned,sigs.k8s.io/cluster-api-provider-aws/role=node,red-hat-managed=true,test-day2-tag=test-day2-value,Name=node-pool-tf87z-test-day2-tags-9m9wz-8s65c,expirationDate=2026-02-24T15:39+00:00,kubernetes.io/cluster/node-pool-tf87z=owned, service: ec2 fixture.go:337: Resource: 
arn:aws:ec2:us-east-1:820196288204:volume/vol-09e90a615e3b70d7b, tags: red-hat-clustertype=rosa,MachineName=e2e-clusters-bhwj5-node-pool-tf87z/node-pool-tf87z-test-ntomachineconfig-inplace-hr6q2,sigs.k8s.io/cluster-api-provider-aws/cluster/node-pool-tf87z=owned,sigs.k8s.io/cluster-api-provider-aws/role=node,red-hat-managed=true,Name=node-pool-tf87z-test-ntomachineconfig-inplace-hr6q2,expirationDate=2026-02-24T15:39+00:00,kubernetes.io/cluster/node-pool-tf87z=owned, service: ec2
fixture.go:337: Resource: arn:aws:ec2:us-east-1:820196288204:volume/vol-0aa02c0e99d7232f7, tags: red-hat-clustertype=rosa,MachineName=e2e-clusters-bhwj5-node-pool-tf87z/node-pool-tf87z-test-ntomachineconfig-replace-sxzt7-dgth6,sigs.k8s.io/cluster-api-provider-aws/cluster/node-pool-tf87z=owned,sigs.k8s.io/cluster-api-provider-aws/role=node,red-hat-managed=true,Name=node-pool-tf87z-test-ntomachineconfig-replace-sxzt7-dgth6,expirationDate=2026-02-24T15:39+00:00,kubernetes.io/cluster/node-pool-tf87z=owned, service: ec2
fixture.go:337: Resource: arn:aws:ec2:us-east-1:820196288204:volume/vol-0158a73db2fdd333d, tags: red-hat-clustertype=rosa,MachineName=e2e-clusters-bhwj5-node-pool-tf87z/node-pool-tf87z-test-replaceupgrade-r4plr-psmf8,sigs.k8s.io/cluster-api-provider-aws/cluster/node-pool-tf87z=owned,sigs.k8s.io/cluster-api-provider-aws/role=node,red-hat-managed=true,Name=node-pool-tf87z-test-replaceupgrade-r4plr-psmf8,expirationDate=2026-02-24T15:39+00:00,kubernetes.io/cluster/node-pool-tf87z=owned, service: ec2
fixture.go:337: Resource: arn:aws:ec2:us-east-1:820196288204:volume/vol-034ff319c90cb08c2, tags: red-hat-clustertype=rosa,MachineName=e2e-clusters-bhwj5-node-pool-tf87z/node-pool-tf87z-test-ntomachineconfig-inplace-mt4pl,sigs.k8s.io/cluster-api-provider-aws/cluster/node-pool-tf87z=owned,sigs.k8s.io/cluster-api-provider-aws/role=node,red-hat-managed=true,Name=node-pool-tf87z-test-ntomachineconfig-inplace-mt4pl,expirationDate=2026-02-24T15:39+00:00,kubernetes.io/cluster/node-pool-tf87z=owned, service: ec2
fixture.go:337: Resource: arn:aws:ec2:us-east-1:820196288204:volume/vol-0ecc85cf2b012382d, tags: aws-node-termination-handler/managed=,red-hat-clustertype=rosa,MachineName=e2e-clusters-bhwj5-node-pool-tf87z/node-pool-tf87z-test-spot-termination-5qxwf-m5xjf,sigs.k8s.io/cluster-api-provider-aws/cluster/node-pool-tf87z=owned,sigs.k8s.io/cluster-api-provider-aws/role=node,red-hat-managed=true,Name=node-pool-tf87z-test-spot-termination-5qxwf-m5xjf,expirationDate=2026-02-24T15:39+00:00,kubernetes.io/cluster/node-pool-tf87z=owned, service: ec2
fixture.go:337: Resource: arn:aws:ec2:us-east-1:820196288204:volume/vol-098239f9eb6ba6e91, tags: red-hat-clustertype=rosa,MachineName=e2e-clusters-bhwj5-node-pool-tf87z/node-pool-tf87z-test-machineconfig-sg2zd-78vlp,sigs.k8s.io/cluster-api-provider-aws/cluster/node-pool-tf87z=owned,sigs.k8s.io/cluster-api-provider-aws/role=node,red-hat-managed=true,Name=node-pool-tf87z-test-machineconfig-sg2zd-78vlp,expirationDate=2026-02-24T15:39+00:00,kubernetes.io/cluster/node-pool-tf87z=owned, service: ec2
fixture.go:337: Resource: arn:aws:ec2:us-east-1:820196288204:volume/vol-086e7d539b5b53746, tags: red-hat-clustertype=rosa,MachineName=e2e-clusters-bhwj5-node-pool-tf87z/node-pool-tf87z-test-ntoperformanceprofile-kr2cr-vwrj8,sigs.k8s.io/cluster-api-provider-aws/cluster/node-pool-tf87z=owned,sigs.k8s.io/cluster-api-provider-aws/role=node,red-hat-managed=true,Name=node-pool-tf87z-test-ntoperformanceprofile-kr2cr-vwrj8,expirationDate=2026-02-24T15:39+00:00,kubernetes.io/cluster/node-pool-tf87z=owned, service: ec2
fixture.go:337: Resource: arn:aws:ec2:us-east-1:820196288204:volume/vol-048ddeace1c96a3f0, tags: red-hat-clustertype=rosa,MachineName=e2e-clusters-bhwj5-node-pool-tf87z/node-pool-tf87z-hhn8w-69fq9-4grks,sigs.k8s.io/cluster-api-provider-aws/cluster/node-pool-tf87z=owned,sigs.k8s.io/cluster-api-provider-aws/role=node,red-hat-managed=true,Name=node-pool-tf87z-hhn8w-69fq9-4grks,expirationDate=2026-02-24T15:39+00:00,kubernetes.io/cluster/node-pool-tf87z=owned, service: ec2
fixture.go:337: Resource: arn:aws:ec2:us-east-1:820196288204:volume/vol-0fdf75a6903d47a69, tags: red-hat-clustertype=rosa,MachineName=e2e-clusters-bhwj5-node-pool-tf87z/node-pool-tf87z-test-ntomachineconfig-replace-sxzt7-cfr8j,sigs.k8s.io/cluster-api-provider-aws/cluster/node-pool-tf87z=owned,sigs.k8s.io/cluster-api-provider-aws/role=node,red-hat-managed=true,Name=node-pool-tf87z-test-ntomachineconfig-replace-sxzt7-cfr8j,expirationDate=2026-02-24T15:39+00:00,kubernetes.io/cluster/node-pool-tf87z=owned, service: ec2
fixture.go:337: Resource: arn:aws:ec2:us-east-1:820196288204:volume/vol-04f8b2ebd038ba5a0, tags: red-hat-clustertype=rosa,MachineName=e2e-clusters-bhwj5-node-pool-tf87z/node-pool-tf87z-75gv6-ldf8k-j2hm4,sigs.k8s.io/cluster-api-provider-aws/cluster/node-pool-tf87z=owned,sigs.k8s.io/cluster-api-provider-aws/role=node,red-hat-managed=true,Name=node-pool-tf87z-75gv6-ldf8k-j2hm4,expirationDate=2026-02-24T15:39+00:00,kubernetes.io/cluster/node-pool-tf87z=owned, service: ec2
fixture.go:337: Resource: arn:aws:ec2:us-east-1:820196288204:volume/vol-09d77cb982def7676, tags: red-hat-clustertype=rosa,MachineName=e2e-clusters-bhwj5-node-pool-tf87z/node-pool-tf87z-test-inplaceupgrade-w5qmd,sigs.k8s.io/cluster-api-provider-aws/cluster/node-pool-tf87z=owned,sigs.k8s.io/cluster-api-provider-aws/role=node,red-hat-managed=true,Name=node-pool-tf87z-test-inplaceupgrade-w5qmd,expirationDate=2026-02-24T15:39+00:00,kubernetes.io/cluster/node-pool-tf87z=owned, service: ec2
fixture.go:337: Resource: arn:aws:s3:::node-pool-tf87z-image-registry-us-east-1-kstvimqpgbdhmquwpwovh, tags: red-hat-clustertype=rosa,red-hat-managed=true,Name=node-pool-tf87z-image-registry,expirationDate=2026-02-24T15:39+00:00,kubernetes.io/cluster/node-pool-tf87z=owned, service: s3
fixture.go:337: Resource: arn:aws:ec2:us-east-1:820196288204:volume/vol-0b0f5b6a91e67ebb9, tags: red-hat-clustertype=rosa,MachineName=e2e-clusters-bhwj5-node-pool-tf87z/node-pool-tf87z-test-ntomachineconfig-replace-4rqkk-r6b4h,sigs.k8s.io/cluster-api-provider-aws/cluster/node-pool-tf87z=owned,sigs.k8s.io/cluster-api-provider-aws/role=node,red-hat-managed=true,Name=node-pool-tf87z-test-ntomachineconfig-replace-4rqkk-r6b4h,expirationDate=2026-02-24T15:39+00:00,kubernetes.io/cluster/node-pool-tf87z=owned, service: ec2
fixture.go:337: Resource: arn:aws:ec2:us-east-1:820196288204:volume/vol-02c2266c38d0bf8a7, tags: red-hat-clustertype=rosa,MachineName=e2e-clusters-bhwj5-node-pool-tf87z/node-pool-tf87z-test-mirrorconfigs-n2k6d-rxv4k,sigs.k8s.io/cluster-api-provider-aws/cluster/node-pool-tf87z=owned,sigs.k8s.io/cluster-api-provider-aws/role=node,red-hat-managed=true,Name=node-pool-tf87z-test-mirrorconfigs-n2k6d-rxv4k,expirationDate=2026-02-24T15:39+00:00,kubernetes.io/cluster/node-pool-tf87z=owned, service: ec2
fixture.go:337: Resource: arn:aws:ec2:us-east-1:820196288204:volume/vol-0e416a4dde61a77a8, tags: red-hat-clustertype=rosa,MachineName=e2e-clusters-bhwj5-node-pool-tf87z/node-pool-tf87z-test-autorepair-fbml9-w2t4s,sigs.k8s.io/cluster-api-provider-aws/cluster/node-pool-tf87z=owned,sigs.k8s.io/cluster-api-provider-aws/role=node,red-hat-managed=true,Name=node-pool-tf87z-test-autorepair-fbml9-w2t4s,expirationDate=2026-02-24T15:39+00:00,kubernetes.io/cluster/node-pool-tf87z=owned, service: ec2
hypershift_framework.go:520: Destroyed cluster. Namespace: e2e-clusters-bhwj5, name: node-pool-tf87z
hypershift_framework.go:475: archiving /logs/artifacts/TestNodePool_HostedCluster0/hostedcluster-node-pool-tf87z to /logs/artifacts/TestNodePool_HostedCluster0/hostedcluster.tar.gz
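Note on the fixture.go:337 dump above: every listed resource still carried the cluster's ownership tag (kubernetes.io/cluster/node-pool-tf87z=owned) at teardown, which is how the cleanup fixture identifies leftovers. As a rough illustration only, the sketch below lists EBS volumes by that tag using aws-sdk-go-v2; it is a hypothetical reconstruction, not the e2e fixture's actual code, and the infra ID is copied from the log.

// listleaks.go - minimal sketch: find EBS volumes still tagged as owned by a cluster.
// Assumes aws-sdk-go-v2; not the hypershift e2e fixture's implementation.
package main

import (
	"context"
	"fmt"
	"log"

	"github.com/aws/aws-sdk-go-v2/aws"
	"github.com/aws/aws-sdk-go-v2/config"
	"github.com/aws/aws-sdk-go-v2/service/ec2"
	"github.com/aws/aws-sdk-go-v2/service/ec2/types"
)

func main() {
	ctx := context.Background()
	cfg, err := config.LoadDefaultConfig(ctx, config.WithRegion("us-east-1"))
	if err != nil {
		log.Fatal(err)
	}
	client := ec2.NewFromConfig(cfg)

	// The same ownership tag that appears on every leaked volume in the log above.
	out, err := client.DescribeVolumes(ctx, &ec2.DescribeVolumesInput{
		Filters: []types.Filter{{
			Name:   aws.String("tag:kubernetes.io/cluster/node-pool-tf87z"),
			Values: []string{"owned"},
		}},
	})
	if err != nil {
		log.Fatal(err)
	}
	for _, v := range out.Volumes {
		fmt.Printf("leaked volume %s (state %s)\n", aws.ToString(v.VolumeId), v.State)
	}
}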
TestNodePoolAutoscalingScaleFromZero
1h0m3.23s
hypershift_framework.go:459: Successfully created hostedcluster e2e-clusters-8zgjh/scale-from-zero-47k5w in 5m8s
hypershift_framework.go:128: Summarizing unexpected conditions for HostedCluster scale-from-zero-47k5w
util.go:2974: Successfully waited for HostedCluster e2e-clusters-8zgjh/scale-from-zero-47k5w to have valid conditions in 0s
hypershift_framework.go:278: skipping postTeardown()
hypershift_framework.go:256: skipping teardown, already called
TestNodePoolAutoscalingScaleFromZero/ValidateHostedCluster
33m18.42s
util.go:291: Successfully waited for kubeconfig to be published for HostedCluster e2e-clusters-8zgjh/scale-from-zero-47k5w in 48s
util.go:308: Successfully waited for kubeconfig secret to have data in 0s
eventually.go:104: Failed to get *v1.SelfSubjectReview: Post "https://api-scale-from-zero-47k5w.service.ci.hypershift.devcluster.openshift.com:443/apis/authentication.k8s.io/v1/selfsubjectreviews": dial tcp: lookup api-scale-from-zero-47k5w.service.ci.hypershift.devcluster.openshift.com on 172.30.0.10:53: no such host
eventually.go:104: Failed to get *v1.SelfSubjectReview: Post "https://api-scale-from-zero-47k5w.service.ci.hypershift.devcluster.openshift.com:443/apis/authentication.k8s.io/v1/selfsubjectreviews": dial tcp 50.19.123.141:443: i/o timeout
eventually.go:104: Failed to get *v1.SelfSubjectReview: Post "https://api-scale-from-zero-47k5w.service.ci.hypershift.devcluster.openshift.com:443/apis/authentication.k8s.io/v1/selfsubjectreviews": dial tcp 98.87.42.111:443: i/o timeout
util.go:370: Successfully waited for a successful connection to the guest API server in 2m19.1s
util.go:565: Successfully waited for 1 nodes to become ready in 22m33s
util.go:598: Successfully waited for HostedCluster e2e-clusters-8zgjh/scale-from-zero-47k5w to rollout in 7m33s
util.go:2974: Successfully waited for HostedCluster e2e-clusters-8zgjh/scale-from-zero-47k5w to have valid conditions in 0s
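The early "no such host" and "i/o timeout" lines are the expected pattern while external DNS and the guest API server's load balancer converge; the harness keeps retrying the SelfSubjectReview probe until it succeeds (here after 2m19.1s). Below is a minimal sketch of that poll-until-reachable pattern, assuming client-go and a kubeconfig path that is purely illustrative; the harness's eventually helper is more elaborate than this.

// Sketch: retry a SelfSubjectReview against a guest API server until it answers,
// tolerating transient DNS and connection errors. Assumes client-go >= 1.27.
package main

import (
	"context"
	"log"
	"time"

	authenticationv1 "k8s.io/api/authentication/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/tmp/guest.kubeconfig") // hypothetical path
	if err != nil {
		log.Fatal(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}

	err = wait.PollUntilContextTimeout(context.Background(), 10*time.Second, 30*time.Minute, true,
		func(ctx context.Context) (bool, error) {
			_, err := client.AuthenticationV1().SelfSubjectReviews().
				Create(ctx, &authenticationv1.SelfSubjectReview{}, metav1.CreateOptions{})
			if err != nil {
				log.Printf("not reachable yet: %v", err) // swallow the error and retry
				return false, nil
			}
			return true, nil
		})
	if err != nil {
		log.Fatalf("guest API server never became reachable: %v", err)
	}
}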
TestNodePoolAutoscalingScaleFromZero/ValidateHostedCluster/EnsureNoCrashingPods
40ms
util.go:780: Container manager in pod capi-provider-887fc9645-xxj92 has a restartCount > 0 (5)
util.go:780: Container control-plane-operator in pod control-plane-operator-59b8b56586-9ft5c has a restartCount > 0 (4)
util.go:777: Leader election failure detected in container hosted-cluster-config-operator in pod hosted-cluster-config-operator-796f6b8bc4-w99xc
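EnsureNoCrashingPods fails whenever any container in the hosted control plane namespace reports restarts. The check is simple to reproduce; the sketch below does so with client-go (a sketch, not util.go:780 itself; the kubeconfig path is illustrative, and the namespace follows the usual <hc-namespace>-<hc-name> convention for hosted control planes, which is an assumption here).

// Sketch: flag containers with restartCount > 0 in a control-plane namespace.
package main

import (
	"context"
	"fmt"
	"log"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/tmp/mgmt.kubeconfig") // hypothetical path
	if err != nil {
		log.Fatal(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}

	// Assumed hosted control plane namespace for scale-from-zero-47k5w.
	pods, err := client.CoreV1().Pods("e2e-clusters-8zgjh-scale-from-zero-47k5w").
		List(context.Background(), metav1.ListOptions{})
	if err != nil {
		log.Fatal(err)
	}
	for _, pod := range pods.Items {
		for _, cs := range pod.Status.ContainerStatuses {
			if cs.RestartCount > 0 {
				fmt.Printf("Container %s in pod %s has a restartCount > 0 (%d)\n",
					cs.Name, pod.Name, cs.RestartCount)
			}
		}
	}
}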
TestUpgradeControlPlane
44m8.41s
control_plane_upgrade_test.go:25: Starting control plane upgrade test. FromImage: registry.build01.ci.openshift.org/ci-op-3sg259px/release@sha256:8bacab7e1e3dac992c35519ca1f92c971b13b7d0477c895b0b95628cd818b043, toImage: registry.build01.ci.openshift.org/ci-op-3sg259px/release@sha256:3505fefac8257be59d944a72c810bb2dec8ce0d9b6b8b286d1daac00044623a1
hypershift_framework.go:459: Successfully created hostedcluster e2e-clusters-hd9gx/control-plane-upgrade-dcz4h in 12s
hypershift_framework.go:128: Summarizing unexpected conditions for HostedCluster control-plane-upgrade-dcz4h
util.go:2974: Failed to wait for HostedCluster e2e-clusters-hd9gx/control-plane-upgrade-dcz4h to have valid conditions in 2s: context deadline exceeded
eventually.go:224: observed *v1beta1.HostedCluster e2e-clusters-hd9gx/control-plane-upgrade-dcz4h invalid at RV 60536 after 2s:
eventually.go:227: - incorrect condition: wanted Degraded=False, got Degraded=True: UnavailableReplicas(openshift-apiserver deployment has 1 unavailable replicas)
eventually.go:227: - incorrect condition: wanted AWSEndpointAvailable=True, got AWSEndpointAvailable=False: AWSError(Throttling)
hypershift_framework.go:278: skipping postTeardown()
hypershift_framework.go:256: skipping teardown, already called
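The "incorrect condition" lines are straight comparisons of status conditions against expected values; HostedCluster exposes standard metav1.Condition entries, so the same validation can be sketched generically. The helper below is hypothetical (it is not the harness's eventually code) and is seeded with the two conditions reported for control-plane-upgrade-dcz4h above.

// Sketch: compare a conditions slice against wanted values, reporting mismatches
// in the same shape as the eventually.go:227 lines.
package main

import (
	"fmt"

	"k8s.io/apimachinery/pkg/api/meta"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func checkConditions(conds []metav1.Condition, wanted map[string]metav1.ConditionStatus) []error {
	var errs []error
	for condType, want := range wanted {
		got := meta.FindStatusCondition(conds, condType)
		switch {
		case got == nil:
			errs = append(errs, fmt.Errorf("missing condition %s", condType))
		case got.Status != want:
			errs = append(errs, fmt.Errorf("incorrect condition: wanted %s=%s, got %s=%s: %s(%s)",
				condType, want, condType, got.Status, got.Reason, got.Message))
		}
	}
	return errs
}

func main() {
	// Conditions as reported for control-plane-upgrade-dcz4h in the log above.
	conds := []metav1.Condition{
		{Type: "Degraded", Status: metav1.ConditionTrue, Reason: "UnavailableReplicas",
			Message: "openshift-apiserver deployment has 1 unavailable replicas"},
		{Type: "AWSEndpointAvailable", Status: metav1.ConditionFalse, Reason: "AWSError",
			Message: "Throttling"},
	}
	for _, err := range checkConditions(conds, map[string]metav1.ConditionStatus{
		"Degraded":             metav1.ConditionFalse,
		"AWSEndpointAvailable": metav1.ConditionTrue,
	}) {
		fmt.Println("-", err)
	}
}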
TestUpgradeControlPlane/ValidateHostedCluster
26m54.75s
util.go:291: Successfully waited for kubeconfig to be published for HostedCluster e2e-clusters-hd9gx/control-plane-upgrade-dcz4h in 1m9s
util.go:308: Successfully waited for kubeconfig secret to have data in 0s
eventually.go:104: Failed to get *v1.SelfSubjectReview: Post "https://api-control-plane-upgrade-dcz4h.service.ci.hypershift.devcluster.openshift.com:443/apis/authentication.k8s.io/v1/selfsubjectreviews": dial tcp: lookup api-control-plane-upgrade-dcz4h.service.ci.hypershift.devcluster.openshift.com on 172.30.0.10:53: no such host
eventually.go:104: Failed to get *v1.SelfSubjectReview: Post "https://api-control-plane-upgrade-dcz4h.service.ci.hypershift.devcluster.openshift.com:443/apis/authentication.k8s.io/v1/selfsubjectreviews": dial tcp 34.194.247.94:443: i/o timeout
util.go:370: Successfully waited for a successful connection to the guest API server in 2m2.15s
util.go:565: Successfully waited for 2 nodes to become ready in 10m9s
util.go:598: Successfully waited for HostedCluster e2e-clusters-hd9gx/control-plane-upgrade-dcz4h to rollout in 3m24.95s
eventually.go:104: Failed to get *v1beta1.HostedCluster: Get "https://a0f65a5e3168c45d3bccafa01f167595-6b085146cbab10e6.elb.us-east-1.amazonaws.com:6443/apis/hypershift.openshift.io/v1beta1/namespaces/e2e-clusters-hd9gx/hostedclusters/control-plane-upgrade-dcz4h?timeout=5m0s": net/http: TLS handshake timeout - error from a previous attempt: http2: client connection lost
eventually.go:104: Failed to get *v1beta1.HostedCluster: Get "https://a0f65a5e3168c45d3bccafa01f167595-6b085146cbab10e6.elb.us-east-1.amazonaws.com:6443/apis/hypershift.openshift.io/v1beta1/namespaces/e2e-clusters-hd9gx/hostedclusters/control-plane-upgrade-dcz4h?timeout=5m0s": dial tcp 18.210.239.191:6443: i/o timeout
eventually.go:104: Failed to get *v1beta1.HostedCluster: Get "https://a0f65a5e3168c45d3bccafa01f167595-6b085146cbab10e6.elb.us-east-1.amazonaws.com:6443/apis/hypershift.openshift.io/v1beta1/namespaces/e2e-clusters-hd9gx/hostedclusters/control-plane-upgrade-dcz4h?timeout=5m0s": dial tcp 18.210.239.191:6443: connect: connection refused
eventually.go:104: Failed to get *v1beta1.HostedCluster: Get "https://a0f65a5e3168c45d3bccafa01f167595-6b085146cbab10e6.elb.us-east-1.amazonaws.com:6443/apis/hypershift.openshift.io/v1beta1/namespaces/e2e-clusters-hd9gx/hostedclusters/control-plane-upgrade-dcz4h?timeout=5m0s": dial tcp 18.210.239.191:6443: i/o timeout
eventually.go:104: Failed to get *v1beta1.HostedCluster: Get "https://a0f65a5e3168c45d3bccafa01f167595-6b085146cbab10e6.elb.us-east-1.amazonaws.com:6443/apis/hypershift.openshift.io/v1beta1/namespaces/e2e-clusters-hd9gx/hostedclusters/control-plane-upgrade-dcz4h?timeout=5m0s": dial tcp 18.210.239.191:6443: connect: connection refused
eventually.go:104: Failed to get *v1beta1.HostedCluster: Get "https://a0f65a5e3168c45d3bccafa01f167595-6b085146cbab10e6.elb.us-east-1.amazonaws.com:6443/apis/hypershift.openshift.io/v1beta1/namespaces/e2e-clusters-hd9gx/hostedclusters/control-plane-upgrade-dcz4h?timeout=5m0s": dial tcp 18.210.239.191:6443: i/o timeout
eventually.go:104: Failed to get *v1beta1.HostedCluster: Get "https://a0f65a5e3168c45d3bccafa01f167595-6b085146cbab10e6.elb.us-east-1.amazonaws.com:6443/apis/hypershift.openshift.io/v1beta1/namespaces/e2e-clusters-hd9gx/hostedclusters/control-plane-upgrade-dcz4h?timeout=5m0s": dial tcp 18.210.239.191:6443: connect: connection refused
eventually.go:104: Failed to get *v1beta1.HostedCluster: Get "https://a0f65a5e3168c45d3bccafa01f167595-6b085146cbab10e6.elb.us-east-1.amazonaws.com:6443/apis/hypershift.openshift.io/v1beta1/namespaces/e2e-clusters-hd9gx/hostedclusters/control-plane-upgrade-dcz4h?timeout=5m0s": dial tcp 18.210.239.191:6443: i/o timeout
eventually.go:104: Failed to get *v1beta1.HostedCluster: Get "https://a0f65a5e3168c45d3bccafa01f167595-6b085146cbab10e6.elb.us-east-1.amazonaws.com:6443/apis/hypershift.openshift.io/v1beta1/namespaces/e2e-clusters-hd9gx/hostedclusters/control-plane-upgrade-dcz4h?timeout=5m0s": dial tcp 18.210.239.191:6443: connect: connection refused
eventually.go:104: Failed to get *v1beta1.HostedCluster: the server was unable to return a response in the time allotted, but may still be processing the request (get hostedclusters.hypershift.openshift.io control-plane-upgrade-dcz4h)
util.go:2974: Failed to wait for HostedCluster e2e-clusters-hd9gx/control-plane-upgrade-dcz4h to have valid conditions in 10m0s: context deadline exceeded
eventually.go:224: observed *v1beta1.HostedCluster e2e-clusters-hd9gx/control-plane-upgrade-dcz4h invalid at RV 60536 after 10m0s:
eventually.go:227: - incorrect condition: wanted Degraded=False, got Degraded=True: UnavailableReplicas(openshift-apiserver deployment has 1 unavailable replicas)
eventually.go:227: - incorrect condition: wanted AWSEndpointAvailable=True, got AWSEndpointAvailable=False: AWSError(Throttling)