PR #7448 - 01-09 01:14

Job: hypershift
FAILURE

Test Summary

Total Tests: 284
Passed: 64
Failed: 201
Skipped: 19

Failed Tests

TestAutoscaling
0s
hypershift_framework.go:430: Successfully created hostedcluster e2e-clusters-xjnk4/autoscaling-cwk44 in 27s
util.go:284: Successfully waited for kubeconfig to be published for HostedCluster e2e-clusters-xjnk4/autoscaling-cwk44 in 1m45s
util.go:301: Successfully waited for kubeconfig secret to have data in 0s
eventually.go:104: Failed to get *v1.SelfSubjectReview: Post "https://api-autoscaling-cwk44.service.ci.hypershift.devcluster.openshift.com:443/apis/authentication.k8s.io/v1/selfsubjectreviews": dial tcp: lookup api-autoscaling-cwk44.service.ci.hypershift.devcluster.openshift.com on 172.30.0.10:53: no such host
eventually.go:104: Failed to get *v1.SelfSubjectReview: Post "https://api-autoscaling-cwk44.service.ci.hypershift.devcluster.openshift.com:443/apis/authentication.k8s.io/v1/selfsubjectreviews": dial tcp 52.54.210.186:443: i/o timeout
eventually.go:104: Failed to get *v1.SelfSubjectReview: Post "https://api-autoscaling-cwk44.service.ci.hypershift.devcluster.openshift.com:443/apis/authentication.k8s.io/v1/selfsubjectreviews": dial tcp 52.54.210.186:443: connect: connection refused
util.go:363: Successfully waited for a successful connection to the guest API server in 1m16.45s
util.go:565: Successfully waited for 1 nodes to become ready in 7m39s
util.go:598: Successfully waited for HostedCluster e2e-clusters-xjnk4/autoscaling-cwk44 to rollout in 3m45s
util.go:2949: Successfully waited for HostedCluster e2e-clusters-xjnk4/autoscaling-cwk44 to have valid conditions in 0s
util.go:565: Successfully waited for 1 nodes to become ready for NodePool e2e-clusters-xjnk4/autoscaling-cwk44-us-east-1c in 25ms
util.go:284: Successfully waited for kubeconfig to be published for HostedCluster e2e-clusters-xjnk4/autoscaling-cwk44 in 0s
util.go:301: Successfully waited for kubeconfig secret to have data in 0s
util.go:284: Successfully waited for kubeconfig to be published for HostedCluster e2e-clusters-xjnk4/autoscaling-cwk44 in 0s
util.go:301: Successfully waited for kubeconfig secret to have data in 0s
util.go:4095: Successfully validated configuration authentication status consistency across HCP, HC, and guest cluster
util.go:284: Successfully waited for kubeconfig to be published for HostedCluster e2e-clusters-xjnk4/autoscaling-cwk44 in 0s
util.go:301: Successfully waited for kubeconfig secret to have data in 0s
util.go:363: Successfully waited for a successful connection to the guest API server in 0s
util.go:565: Successfully waited for 1 nodes to become ready in 0s
autoscaling_test.go:118: Enabled autoscaling. Namespace: e2e-clusters-xjnk4, name: autoscaling-cwk44-us-east-1c, min: 1, max: 3
autoscaling_test.go:137: Created workload. Node: ip-10-0-6-247.ec2.internal, memcapacity: 14746804Ki
util.go:565: Successfully waited for 3 nodes to become ready in 5m30s
autoscaling_test.go:157: Deleted workload
TestAutoscaling/Main
0s
TestAutoscaling/Main/TestAutoscaling
0s
util.go:284: Successfully waited for kubeconfig to be published for HostedCluster e2e-clusters-xjnk4/autoscaling-cwk44 in 0s
util.go:301: Successfully waited for kubeconfig secret to have data in 0s
util.go:363: Successfully waited for a successful connection to the guest API server in 0s
util.go:565: Successfully waited for 1 nodes to become ready in 0s
autoscaling_test.go:118: Enabled autoscaling. Namespace: e2e-clusters-xjnk4, name: autoscaling-cwk44-us-east-1c, min: 1, max: 3
autoscaling_test.go:137: Created workload. Node: ip-10-0-6-247.ec2.internal, memcapacity: 14746804Ki
util.go:565: Successfully waited for 3 nodes to become ready in 5m30s
autoscaling_test.go:157: Deleted workload
TestAutoscaling/ValidateHostedCluster
0s
util.go:284: Successfully waited for kubeconfig to be published for HostedCluster e2e-clusters-xjnk4/autoscaling-cwk44 in 1m45s
util.go:301: Successfully waited for kubeconfig secret to have data in 0s
eventually.go:104: Failed to get *v1.SelfSubjectReview: Post "https://api-autoscaling-cwk44.service.ci.hypershift.devcluster.openshift.com:443/apis/authentication.k8s.io/v1/selfsubjectreviews": dial tcp: lookup api-autoscaling-cwk44.service.ci.hypershift.devcluster.openshift.com on 172.30.0.10:53: no such host
eventually.go:104: Failed to get *v1.SelfSubjectReview: Post "https://api-autoscaling-cwk44.service.ci.hypershift.devcluster.openshift.com:443/apis/authentication.k8s.io/v1/selfsubjectreviews": dial tcp 52.54.210.186:443: i/o timeout
eventually.go:104: Failed to get *v1.SelfSubjectReview: Post "https://api-autoscaling-cwk44.service.ci.hypershift.devcluster.openshift.com:443/apis/authentication.k8s.io/v1/selfsubjectreviews": dial tcp 52.54.210.186:443: connect: connection refused
util.go:363: Successfully waited for a successful connection to the guest API server in 1m16.45s
util.go:565: Successfully waited for 1 nodes to become ready in 7m39s
util.go:598: Successfully waited for HostedCluster e2e-clusters-xjnk4/autoscaling-cwk44 to rollout in 3m45s
util.go:2949: Successfully waited for HostedCluster e2e-clusters-xjnk4/autoscaling-cwk44 to have valid conditions in 0s
TestAutoscaling/ValidateHostedCluster/EnsureGuestWebhooksValidated
0s
TestAutoscaling/ValidateHostedCluster/EnsureNoCrashingPods
0s
util.go:284: Successfully waited for kubeconfig to be published for HostedCluster e2e-clusters-xjnk4/autoscaling-cwk44 in 0s
util.go:301: Successfully waited for kubeconfig secret to have data in 0s
TestAutoscaling/ValidateHostedCluster/EnsureNodeCommunication
0s
util.go:284: Successfully waited for kubeconfig to be published for HostedCluster e2e-clusters-xjnk4/autoscaling-cwk44 in 0s
util.go:301: Successfully waited for kubeconfig secret to have data in 0s
TestAutoscaling/ValidateHostedCluster/EnsureNodeCountMatchesNodePoolReplicas
0s
util.go:565: Successfully waited for 1 nodes to become ready for NodePool e2e-clusters-xjnk4/autoscaling-cwk44-us-east-1c in 25ms
TestAutoscaling/ValidateHostedCluster/EnsureOAPIMountsTrustBundle
0s
TestAutoscaling/ValidateHostedCluster/ValidateConfigurationStatus
0s
util.go:4095: Successfully validated configuration authentication status consistency across HCP, HC, and guest cluster
TestCreateCluster
0s
create_cluster_test.go:2492: Sufficient zones available for InfrastructureAvailabilityPolicy HighlyAvailable
hypershift_framework.go:430: Successfully created hostedcluster e2e-clusters-5pggm/create-cluster-6nfwh in 45s
util.go:284: Successfully waited for kubeconfig to be published for HostedCluster e2e-clusters-5pggm/create-cluster-6nfwh in 2m0s
util.go:301: Successfully waited for kubeconfig secret to have data in 0s
eventually.go:104: Failed to get *v1.SelfSubjectReview: Post "https://api-create-cluster-6nfwh.service.ci.hypershift.devcluster.openshift.com:443/apis/authentication.k8s.io/v1/selfsubjectreviews": dial tcp: lookup api-create-cluster-6nfwh.service.ci.hypershift.devcluster.openshift.com on 172.30.0.10:53: no such host
eventually.go:104: Failed to get *v1.SelfSubjectReview: Post "https://api-create-cluster-6nfwh.service.ci.hypershift.devcluster.openshift.com:443/apis/authentication.k8s.io/v1/selfsubjectreviews": dial tcp 18.205.202.159:443: i/o timeout
eventually.go:104: Failed to get *v1.SelfSubjectReview: Post "https://api-create-cluster-6nfwh.service.ci.hypershift.devcluster.openshift.com:443/apis/authentication.k8s.io/v1/selfsubjectreviews": dial tcp 52.201.71.7:443: connect: connection refused
eventually.go:104: Failed to get *v1.SelfSubjectReview: Post "https://api-create-cluster-6nfwh.service.ci.hypershift.devcluster.openshift.com:443/apis/authentication.k8s.io/v1/selfsubjectreviews": dial tcp 18.205.202.159:443: connect: connection refused
eventually.go:104: Failed to get *v1.SelfSubjectReview: Post "https://api-create-cluster-6nfwh.service.ci.hypershift.devcluster.openshift.com:443/apis/authentication.k8s.io/v1/selfsubjectreviews": dial tcp 52.201.71.7:443: connect: connection refused
eventually.go:104: Failed to get *v1.SelfSubjectReview: Post "https://api-create-cluster-6nfwh.service.ci.hypershift.devcluster.openshift.com:443/apis/authentication.k8s.io/v1/selfsubjectreviews": dial tcp 18.205.202.159:443: connect: connection refused
eventually.go:104: Failed to get *v1.SelfSubjectReview: Post "https://api-create-cluster-6nfwh.service.ci.hypershift.devcluster.openshift.com:443/apis/authentication.k8s.io/v1/selfsubjectreviews": dial tcp 52.201.71.7:443: connect: connection refused
util.go:363: Successfully waited for a successful connection to the guest API server in 2m0.025s
util.go:565: Successfully waited for 3 nodes to become ready in 8m27s
util.go:598: Successfully waited for HostedCluster e2e-clusters-5pggm/create-cluster-6nfwh to rollout in 7m57s
util.go:2949: Successfully waited for HostedCluster e2e-clusters-5pggm/create-cluster-6nfwh to have valid conditions in 0s
util.go:565: Successfully waited for 1 nodes to become ready for NodePool e2e-clusters-5pggm/create-cluster-6nfwh-us-east-1a in 25ms
util.go:565: Successfully waited for 1 nodes to become ready for NodePool e2e-clusters-5pggm/create-cluster-6nfwh-us-east-1b in 0s
util.go:565: Successfully waited for 1 nodes to become ready for NodePool e2e-clusters-5pggm/create-cluster-6nfwh-us-east-1c in 0s
util.go:284: Successfully waited for kubeconfig to be published for HostedCluster e2e-clusters-5pggm/create-cluster-6nfwh in 0s
util.go:301: Successfully waited for kubeconfig secret to have data in 0s
util.go:284: Successfully waited for kubeconfig to be published for HostedCluster e2e-clusters-5pggm/create-cluster-6nfwh in 0s
util.go:301: Successfully waited for kubeconfig secret to have data in 0s
util.go:4095: Successfully validated configuration authentication status consistency across HCP, HC, and guest cluster
util.go:284: Successfully waited for kubeconfig to be published for HostedCluster e2e-clusters-5pggm/create-cluster-6nfwh in 0s
util.go:301: Successfully waited for kubeconfig secret to have data in 0s
util.go:363: Successfully waited for a successful connection to the guest API server in 0s
create_cluster_test.go:2532: fetching mgmt kubeconfig
util.go:284: Successfully waited for kubeconfig to be published for HostedCluster e2e-clusters-5pggm/create-cluster-6nfwh in 0s
util.go:301: Successfully waited for kubeconfig secret to have data in 0s
control_plane_pki_operator.go:95: generating new break-glass credentials for more than one signer
pki.go:76: loading certificate/key pair from disk for signer customer-break-glass, use $REGENERATE_PKI to generate new ones
control_plane_pki_operator.go:204: creating CSR "o6y0dlkopgf1r7bwalhot019h9fliz6wgsk9l0nq018" for signer "customer-break-glass", requesting client auth usages
control_plane_pki_operator.go:214: creating CSRA e2e-clusters-5pggm-create-cluster-6nfwh/o6y0dlkopgf1r7bwalhot019h9fliz6wgsk9l0nq018 to trigger automatic approval of the CSR
control_plane_pki_operator.go:221: Successfully waited for CSR "o6y0dlkopgf1r7bwalhot019h9fliz6wgsk9l0nq018" to be approved and signed in 3s
control_plane_pki_operator.go:133: validating that the client certificate provides the appropriate access
control_plane_pki_operator.go:119: amending the existing kubeconfig to use break-glass client certificate credentials
control_plane_pki_operator.go:136: issuing SSR to identify the subject we are given using the client certificate
control_plane_pki_operator.go:156: ensuring that the SSR identifies the client certificate as having system:masters power and correct username
pki.go:76: loading certificate/key pair from disk for signer sre-break-glass, use $REGENERATE_PKI to generate new ones
control_plane_pki_operator.go:204: creating CSR "tk4rmhri4i9xmmq14jydrfgg6mdqcxfo5xfztwlw2kg" for signer "sre-break-glass", requesting client auth usages
control_plane_pki_operator.go:214: creating CSRA e2e-clusters-5pggm-create-cluster-6nfwh/tk4rmhri4i9xmmq14jydrfgg6mdqcxfo5xfztwlw2kg to trigger automatic approval of the CSR
control_plane_pki_operator.go:221: Successfully waited for CSR "tk4rmhri4i9xmmq14jydrfgg6mdqcxfo5xfztwlw2kg" to be approved and signed in 3s
control_plane_pki_operator.go:133: validating that the client certificate provides the appropriate access
control_plane_pki_operator.go:119: amending the existing kubeconfig to use break-glass client certificate credentials
control_plane_pki_operator.go:136: issuing SSR to identify the subject we are given using the client certificate
control_plane_pki_operator.go:156: ensuring that the SSR identifies the client certificate as having system:masters power and correct username
control_plane_pki_operator.go:99: revoking the "customer-break-glass" signer
pki.go:76: loading certificate/key pair from disk for signer customer-break-glass, use $REGENERATE_PKI to generate new ones
control_plane_pki_operator.go:256: creating CRR e2e-clusters-5pggm-create-cluster-6nfwh/o6y0dlkopgf1r7bwalhot019h9fliz6wgsk9l0nq018 to trigger signer certificate revocation
control_plane_pki_operator.go:263: Successfully waited for CRR e2e-clusters-5pggm-create-cluster-6nfwh/o6y0dlkopgf1r7bwalhot019h9fliz6wgsk9l0nq018 to complete in 2m48s
control_plane_pki_operator.go:276: creating a client using a certificate from the revoked signer
control_plane_pki_operator.go:119: amending the existing kubeconfig to use break-glass client certificate credentials
control_plane_pki_operator.go:279: issuing SSR to confirm that we're not authorized to contact the server
control_plane_pki_operator.go:102: ensuring the break-glass credentials from "sre-break-glass" signer still work
pki.go:76: loading certificate/key pair from disk for signer sre-break-glass, use $REGENERATE_PKI to generate new ones
control_plane_pki_operator.go:133: validating that the client certificate provides the appropriate access
control_plane_pki_operator.go:119: amending the existing kubeconfig to use break-glass client certificate credentials
control_plane_pki_operator.go:136: issuing SSR to identify the subject we are given using the client certificate
control_plane_pki_operator.go:156: ensuring that the SSR identifies the client certificate as having system:masters power and correct username
TestCreateCluster/Main
0s
util.go:284: Successfully waited for kubeconfig to be published for HostedCluster e2e-clusters-5pggm/create-cluster-6nfwh in 0s
util.go:301: Successfully waited for kubeconfig secret to have data in 0s
util.go:363: Successfully waited for a successful connection to the guest API server in 0s
create_cluster_test.go:2532: fetching mgmt kubeconfig
util.go:284: Successfully waited for kubeconfig to be published for HostedCluster e2e-clusters-5pggm/create-cluster-6nfwh in 0s
util.go:301: Successfully waited for kubeconfig secret to have data in 0s
TestCreateCluster/Main/break-glass-credentials
0s
TestCreateCluster/Main/break-glass-credentials/customer-break-glass
0s
TestCreateCluster/Main/break-glass-credentials/independent_signers
0s
control_plane_pki_operator.go:95: generating new break-glass credentials for more than one signer
pki.go:76: loading certificate/key pair from disk for signer customer-break-glass, use $REGENERATE_PKI to generate new ones
control_plane_pki_operator.go:204: creating CSR "o6y0dlkopgf1r7bwalhot019h9fliz6wgsk9l0nq018" for signer "customer-break-glass", requesting client auth usages
control_plane_pki_operator.go:214: creating CSRA e2e-clusters-5pggm-create-cluster-6nfwh/o6y0dlkopgf1r7bwalhot019h9fliz6wgsk9l0nq018 to trigger automatic approval of the CSR
control_plane_pki_operator.go:221: Successfully waited for CSR "o6y0dlkopgf1r7bwalhot019h9fliz6wgsk9l0nq018" to be approved and signed in 3s
control_plane_pki_operator.go:133: validating that the client certificate provides the appropriate access
control_plane_pki_operator.go:119: amending the existing kubeconfig to use break-glass client certificate credentials
control_plane_pki_operator.go:136: issuing SSR to identify the subject we are given using the client certificate
control_plane_pki_operator.go:156: ensuring that the SSR identifies the client certificate as having system:masters power and correct username
pki.go:76: loading certificate/key pair from disk for signer sre-break-glass, use $REGENERATE_PKI to generate new ones
control_plane_pki_operator.go:204: creating CSR "tk4rmhri4i9xmmq14jydrfgg6mdqcxfo5xfztwlw2kg" for signer "sre-break-glass", requesting client auth usages
control_plane_pki_operator.go:214: creating CSRA e2e-clusters-5pggm-create-cluster-6nfwh/tk4rmhri4i9xmmq14jydrfgg6mdqcxfo5xfztwlw2kg to trigger automatic approval of the CSR
control_plane_pki_operator.go:221: Successfully waited for CSR "tk4rmhri4i9xmmq14jydrfgg6mdqcxfo5xfztwlw2kg" to be approved and signed in 3s
control_plane_pki_operator.go:133: validating that the client certificate provides the appropriate access
control_plane_pki_operator.go:119: amending the existing kubeconfig to use break-glass client certificate credentials
control_plane_pki_operator.go:136: issuing SSR to identify the subject we are given using the client certificate
control_plane_pki_operator.go:156: ensuring that the SSR identifies the client certificate as having system:masters power and correct username
control_plane_pki_operator.go:99: revoking the "customer-break-glass" signer
pki.go:76: loading certificate/key pair from disk for signer customer-break-glass, use $REGENERATE_PKI to generate new ones
control_plane_pki_operator.go:256: creating CRR e2e-clusters-5pggm-create-cluster-6nfwh/o6y0dlkopgf1r7bwalhot019h9fliz6wgsk9l0nq018 to trigger signer certificate revocation
control_plane_pki_operator.go:263: Successfully waited for CRR e2e-clusters-5pggm-create-cluster-6nfwh/o6y0dlkopgf1r7bwalhot019h9fliz6wgsk9l0nq018 to complete in 2m48s
control_plane_pki_operator.go:276: creating a client using a certificate from the revoked signer
control_plane_pki_operator.go:119: amending the existing kubeconfig to use break-glass client certificate credentials
control_plane_pki_operator.go:279: issuing SSR to confirm that we're not authorized to contact the server
control_plane_pki_operator.go:102: ensuring the break-glass credentials from "sre-break-glass" signer still work
pki.go:76: loading certificate/key pair from disk for signer sre-break-glass, use $REGENERATE_PKI to generate new ones
control_plane_pki_operator.go:133: validating that the client certificate provides the appropriate access
control_plane_pki_operator.go:119: amending the existing kubeconfig to use break-glass client certificate credentials
control_plane_pki_operator.go:136: issuing SSR to identify the subject we are given using the client certificate
control_plane_pki_operator.go:156: ensuring that the SSR identifies the client certificate as having system:masters power and correct username
TestCreateCluster/Main/break-glass-credentials/sre-break-glass
0s
TestCreateCluster/ValidateHostedCluster
0s
util.go:284: Successfully waited for kubeconfig to be published for HostedCluster e2e-clusters-5pggm/create-cluster-6nfwh in 2m0s
util.go:301: Successfully waited for kubeconfig secret to have data in 0s
eventually.go:104: Failed to get *v1.SelfSubjectReview: Post "https://api-create-cluster-6nfwh.service.ci.hypershift.devcluster.openshift.com:443/apis/authentication.k8s.io/v1/selfsubjectreviews": dial tcp: lookup api-create-cluster-6nfwh.service.ci.hypershift.devcluster.openshift.com on 172.30.0.10:53: no such host
eventually.go:104: Failed to get *v1.SelfSubjectReview: Post "https://api-create-cluster-6nfwh.service.ci.hypershift.devcluster.openshift.com:443/apis/authentication.k8s.io/v1/selfsubjectreviews": dial tcp 18.205.202.159:443: i/o timeout
eventually.go:104: Failed to get *v1.SelfSubjectReview: Post "https://api-create-cluster-6nfwh.service.ci.hypershift.devcluster.openshift.com:443/apis/authentication.k8s.io/v1/selfsubjectreviews": dial tcp 52.201.71.7:443: connect: connection refused
eventually.go:104: Failed to get *v1.SelfSubjectReview: Post "https://api-create-cluster-6nfwh.service.ci.hypershift.devcluster.openshift.com:443/apis/authentication.k8s.io/v1/selfsubjectreviews": dial tcp 18.205.202.159:443: connect: connection refused
eventually.go:104: Failed to get *v1.SelfSubjectReview: Post "https://api-create-cluster-6nfwh.service.ci.hypershift.devcluster.openshift.com:443/apis/authentication.k8s.io/v1/selfsubjectreviews": dial tcp 52.201.71.7:443: connect: connection refused
eventually.go:104: Failed to get *v1.SelfSubjectReview: Post "https://api-create-cluster-6nfwh.service.ci.hypershift.devcluster.openshift.com:443/apis/authentication.k8s.io/v1/selfsubjectreviews": dial tcp 18.205.202.159:443: connect: connection refused
eventually.go:104: Failed to get *v1.SelfSubjectReview: Post "https://api-create-cluster-6nfwh.service.ci.hypershift.devcluster.openshift.com:443/apis/authentication.k8s.io/v1/selfsubjectreviews": dial tcp 52.201.71.7:443: connect: connection refused
util.go:363: Successfully waited for a successful connection to the guest API server in 2m0.025s
util.go:565: Successfully waited for 3 nodes to become ready in 8m27s
util.go:598: Successfully waited for HostedCluster e2e-clusters-5pggm/create-cluster-6nfwh to rollout in 7m57s
util.go:2949: Successfully waited for HostedCluster e2e-clusters-5pggm/create-cluster-6nfwh to have valid conditions in 0s
TestCreateCluster/ValidateHostedCluster/EnsureGuestWebhooksValidated
0s
TestCreateCluster/ValidateHostedCluster/EnsureNoCrashingPods
0s
util.go:284: Successfully waited for kubeconfig to be published for HostedCluster e2e-clusters-5pggm/create-cluster-6nfwh in 0s
util.go:301: Successfully waited for kubeconfig secret to have data in 0s
TestCreateCluster/ValidateHostedCluster/EnsureNodeCommunication
0s
util.go:284: Successfully waited for kubeconfig to be published for HostedCluster e2e-clusters-5pggm/create-cluster-6nfwh in 0s
util.go:301: Successfully waited for kubeconfig secret to have data in 0s
TestCreateCluster/ValidateHostedCluster/EnsureNodeCountMatchesNodePoolReplicas
0s
util.go:565: Successfully waited for 1 nodes to become ready for NodePool e2e-clusters-5pggm/create-cluster-6nfwh-us-east-1a in 25ms
util.go:565: Successfully waited for 1 nodes to become ready for NodePool e2e-clusters-5pggm/create-cluster-6nfwh-us-east-1b in 0s
util.go:565: Successfully waited for 1 nodes to become ready for NodePool e2e-clusters-5pggm/create-cluster-6nfwh-us-east-1c in 0s
TestCreateCluster/ValidateHostedCluster/EnsureOAPIMountsTrustBundle
0s
TestCreateCluster/ValidateHostedCluster/ValidateConfigurationStatus
0s
util.go:4095: Successfully validated configuration authentication status consistency across HCP, HC, and guest cluster
TestCreateClusterCustomConfig
0s
hypershift_framework.go:430: Successfully created hostedcluster e2e-clusters-dpjqv/custom-config-bnbbq in 38s
journals.go:245: Successfully copied machine journals to /logs/artifacts/TestCreateClusterCustomConfig/machine-journals
util.go:284: Successfully waited for kubeconfig to be published for HostedCluster e2e-clusters-dpjqv/custom-config-bnbbq in 1m30.025s
util.go:301: Successfully waited for kubeconfig secret to have data in 0s
eventually.go:104: Failed to get *v1.SelfSubjectReview: Post "https://api-custom-config-bnbbq.service.ci.hypershift.devcluster.openshift.com:443/apis/authentication.k8s.io/v1/selfsubjectreviews": dial tcp: lookup api-custom-config-bnbbq.service.ci.hypershift.devcluster.openshift.com on 172.30.0.10:53: no such host
eventually.go:104: Failed to get *v1.SelfSubjectReview: Post "https://api-custom-config-bnbbq.service.ci.hypershift.devcluster.openshift.com:443/apis/authentication.k8s.io/v1/selfsubjectreviews": dial tcp 100.50.110.237:443: i/o timeout
util.go:363: Successfully waited for a successful connection to the guest API server in 2m23.025s
util.go:565: Successfully waited for 2 nodes to become ready in 7m12s
util.go:598: Successfully waited for HostedCluster e2e-clusters-dpjqv/custom-config-bnbbq to rollout in 8m45s
util.go:2949: Successfully waited for HostedCluster e2e-clusters-dpjqv/custom-config-bnbbq to have valid conditions in 0s
util.go:565: Successfully waited for 2 nodes to become ready for NodePool e2e-clusters-dpjqv/custom-config-bnbbq-us-east-1a in 25ms
util.go:284: Successfully waited for kubeconfig to be published for HostedCluster e2e-clusters-dpjqv/custom-config-bnbbq in 0s
util.go:301: Successfully waited for kubeconfig secret to have data in 0s
util.go:284: Successfully waited for kubeconfig to be published for HostedCluster e2e-clusters-dpjqv/custom-config-bnbbq in 0s
util.go:301: Successfully waited for kubeconfig secret to have data in 0s
util.go:4095: Successfully validated configuration authentication status consistency across HCP, HC, and guest cluster
util.go:284: Successfully waited for kubeconfig to be published for HostedCluster e2e-clusters-dpjqv/custom-config-bnbbq in 0s
util.go:301: Successfully waited for kubeconfig secret to have data in 0s
util.go:363: Successfully waited for a successful connection to the guest API server in 0s
util.go:284: Successfully waited for kubeconfig to be published for HostedCluster e2e-clusters-dpjqv/custom-config-bnbbq in 0s
util.go:301: Successfully waited for kubeconfig secret to have data in 0s
util.go:284: Successfully waited for kubeconfig to be published for HostedCluster e2e-clusters-dpjqv/custom-config-bnbbq in 0s
util.go:301: Successfully waited for kubeconfig secret to have data in 0s
oauth.go:170: Found OAuth route oauth-custom-config-bnbbq.service.ci.hypershift.devcluster.openshift.com
oauth.go:192: Observed OAuth route oauth-custom-config-bnbbq.service.ci.hypershift.devcluster.openshift.com to be healthy
oauth.go:151: OAuth token retrieved successfully for user kubeadmin
util.go:284: Successfully waited for kubeconfig to be published for HostedCluster e2e-clusters-dpjqv/custom-config-bnbbq in 0s
util.go:301: Successfully waited for kubeconfig secret to have data in 0s
oauth.go:170: Found OAuth route oauth-custom-config-bnbbq.service.ci.hypershift.devcluster.openshift.com
oauth.go:192: Observed OAuth route oauth-custom-config-bnbbq.service.ci.hypershift.devcluster.openshift.com to be healthy
oauth.go:151: OAuth token retrieved successfully for user testuser
util.go:3459: Successfully waited for Waiting for service account default/default to be provisioned... in 0s
eventually.go:104: Failed to get *v1.ServiceAccount: serviceaccounts "default" not found
util.go:3482: Successfully waited for Waiting for service account default/test-namespace to be provisioned... in 10s
util.go:284: Successfully waited for kubeconfig to be published for HostedCluster e2e-clusters-dpjqv/custom-config-bnbbq in 0s
util.go:301: Successfully waited for kubeconfig secret to have data in 0s
util.go:363: Successfully waited for a successful connection to the guest API server in 0s
util.go:3597: Checking that Tuned resource type does not exist in guest cluster
util.go:3610: Checking that Profile resource type does not exist in guest cluster
util.go:3622: Checking that no tuned DaemonSet exists in guest cluster
util.go:3631: Checking that no tuned-related ConfigMaps exist in guest cluster
util.go:3656: NodeTuning capability disabled validation completed successfully
util.go:3937: Updating HostedCluster e2e-clusters-dpjqv/custom-config-bnbbq with custom OVN internal subnets
util.go:3956: Validating CNO conditions on HostedControlPlane
util.go:3958: Successfully waited for HostedControlPlane e2e-clusters-dpjqv-custom-config-bnbbq/custom-config-bnbbq to have healthy CNO conditions in 0s
util.go:2949: Successfully waited for HostedCluster e2e-clusters-dpjqv/custom-config-bnbbq to have valid conditions in 0s
util.go:3985: Successfully waited for Network.operator.openshift.io/cluster in guest cluster to reflect the custom subnet changes in 3s
util.go:4015: Successfully waited for Network.config.openshift.io/cluster in guest cluster to be available in 0s
util.go:284: Successfully waited for kubeconfig to be published for HostedCluster e2e-clusters-dpjqv/custom-config-bnbbq in 0s
util.go:301: Successfully waited for kubeconfig secret to have data in 0s
util.go:363: Successfully waited for a successful connection to the guest API server in 0s
util.go:3224: Successfully waited for HostedCluster e2e-clusters-dpjqv/custom-config-bnbbq to have valid Status.Payload in 0s
util.go:1181: Connecting to kubernetes endpoint on: https://172.20.0.1:6443
util.go:2527: Checking that all ValidatingAdmissionPolicies are present
util.go:2553: Checking Denied KAS Requests for ValidatingAdmissionPolicies
util.go:2569: Checking ClusterOperator status modifications are allowed
TestCreateClusterCustomConfig/EnsureHostedCluster
0s
util.go:284: Successfully waited for kubeconfig to be published for HostedCluster e2e-clusters-dpjqv/custom-config-bnbbq in 0s
util.go:301: Successfully waited for kubeconfig secret to have data in 0s
util.go:363: Successfully waited for a successful connection to the guest API server in 0s
TestCreateClusterCustomConfig/EnsureHostedCluster/EnsureAllContainersHavePullPolicyIfNotPresent
0s
TestCreateClusterCustomConfig/EnsureHostedCluster/EnsureAllContainersHaveTerminationMessagePolicyFallbackToLogsOnError
0s
TestCreateClusterCustomConfig/EnsureHostedCluster/EnsureAllRoutesUseHCPRouter
0s
TestCreateClusterCustomConfig/EnsureHostedCluster/EnsureHCPContainersHaveResourceRequests
0s
TestCreateClusterCustomConfig/EnsureHostedCluster/EnsureHCPPodsAffinitiesAndTolerations
0s
TestCreateClusterCustomConfig/EnsureHostedCluster/EnsureNetworkPolicies
0s
TestCreateClusterCustomConfig/EnsureHostedCluster/EnsureNetworkPolicies/EnsureComponentsHaveNeedManagementKASAccessLabel
0s
TestCreateClusterCustomConfig/EnsureHostedCluster/EnsureNetworkPolicies/EnsureLimitedEgressTrafficToManagementKAS
0s
util.go:1181: Connecting to kubernetes endpoint on: https://172.20.0.1:6443
TestCreateClusterCustomConfig/EnsureHostedCluster/EnsureNoPodsWithTooHighPriority
0s
TestCreateClusterCustomConfig/EnsureHostedCluster/EnsureNoRapidDeploymentRollouts
0s
TestCreateClusterCustomConfig/EnsureHostedCluster/EnsurePayloadArchSetCorrectly
0s
util.go:3224: Successfully waited for HostedCluster e2e-clusters-dpjqv/custom-config-bnbbq to have valid Status.Payload in 0s
TestCreateClusterCustomConfig/EnsureHostedCluster/EnsurePodsWithEmptyDirPVsHaveSafeToEvictAnnotations
0s
TestCreateClusterCustomConfig/EnsureHostedCluster/EnsureReadOnlyRootFilesystem
0s
TestCreateClusterCustomConfig/EnsureHostedCluster/EnsureReadOnlyRootFilesystemTmpDirMount
0s
TestCreateClusterCustomConfig/EnsureHostedCluster/EnsureSATokenNotMountedUnlessNecessary
0s
TestCreateClusterCustomConfig/EnsureHostedCluster/EnsureValidatingAdmissionPoliciesCheckDeniedRequests
0s
util.go:2553: Checking Denied KAS Requests for ValidatingAdmissionPolicies
TestCreateClusterCustomConfig/EnsureHostedCluster/EnsureValidatingAdmissionPoliciesDontBlockStatusModifications
0s
util.go:2569: Checking ClusterOperator status modifications are allowed
TestCreateClusterCustomConfig/EnsureHostedCluster/EnsureValidatingAdmissionPoliciesExists
0s
util.go:2527: Checking that all ValidatingAdmissionPolicies are present
TestCreateClusterCustomConfig/EnsureHostedCluster/NoticePreemptionOrFailedScheduling
0s
TestCreateClusterCustomConfig/EnsureHostedCluster/ValidateMetricsAreExposed
0s
TestCreateClusterCustomConfig/Main
0s
util.go:284: Successfully waited for kubeconfig to be published for HostedCluster e2e-clusters-dpjqv/custom-config-bnbbq in 0s
util.go:301: Successfully waited for kubeconfig secret to have data in 0s
util.go:363: Successfully waited for a successful connection to the guest API server in 0s
util.go:284: Successfully waited for kubeconfig to be published for HostedCluster e2e-clusters-dpjqv/custom-config-bnbbq in 0s
util.go:301: Successfully waited for kubeconfig secret to have data in 0s
TestCreateClusterCustomConfig/Main/EnsureCNOOperatorConfiguration
0s
util.go:3937: Updating HostedCluster e2e-clusters-dpjqv/custom-config-bnbbq with custom OVN internal subnets
util.go:3956: Validating CNO conditions on HostedControlPlane
util.go:3958: Successfully waited for HostedControlPlane e2e-clusters-dpjqv-custom-config-bnbbq/custom-config-bnbbq to have healthy CNO conditions in 0s
util.go:2949: Successfully waited for HostedCluster e2e-clusters-dpjqv/custom-config-bnbbq to have valid conditions in 0s
util.go:3985: Successfully waited for Network.operator.openshift.io/cluster in guest cluster to reflect the custom subnet changes in 3s
util.go:4015: Successfully waited for Network.config.openshift.io/cluster in guest cluster to be available in 0s
TestCreateClusterCustomConfig/Main/EnsureConsoleCapabilityDisabled
0s
TestCreateClusterCustomConfig/Main/EnsureImageRegistryCapabilityDisabled
0s
util.go:3459: Successfully waited for Waiting for service account default/default to be provisioned... in 0s
eventually.go:104: Failed to get *v1.ServiceAccount: serviceaccounts "default" not found
util.go:3482: Successfully waited for Waiting for service account default/test-namespace to be provisioned... in 10s
TestCreateClusterCustomConfig/Main/EnsureIngressCapabilityDisabled
0s
TestCreateClusterCustomConfig/Main/EnsureInsightsCapabilityDisabled
0s
TestCreateClusterCustomConfig/Main/EnsureNodeTuningCapabilityDisabled
0s
util.go:284: Successfully waited for kubeconfig to be published for HostedCluster e2e-clusters-dpjqv/custom-config-bnbbq in 0s
util.go:301: Successfully waited for kubeconfig secret to have data in 0s
util.go:363: Successfully waited for a successful connection to the guest API server in 0s
util.go:3597: Checking that Tuned resource type does not exist in guest cluster
util.go:3610: Checking that Profile resource type does not exist in guest cluster
util.go:3622: Checking that no tuned DaemonSet exists in guest cluster
util.go:3631: Checking that no tuned-related ConfigMaps exist in guest cluster
util.go:3656: NodeTuning capability disabled validation completed successfully
TestCreateClusterCustomConfig/Main/EnsureOAuthWithIdentityProvider
0s
util.go:284: Successfully waited for kubeconfig to be published for HostedCluster e2e-clusters-dpjqv/custom-config-bnbbq in 0s
util.go:301: Successfully waited for kubeconfig secret to have data in 0s
oauth.go:170: Found OAuth route oauth-custom-config-bnbbq.service.ci.hypershift.devcluster.openshift.com
oauth.go:192: Observed OAuth route oauth-custom-config-bnbbq.service.ci.hypershift.devcluster.openshift.com to be healthy
oauth.go:151: OAuth token retrieved successfully for user kubeadmin
util.go:284: Successfully waited for kubeconfig to be published for HostedCluster e2e-clusters-dpjqv/custom-config-bnbbq in 0s
util.go:301: Successfully waited for kubeconfig secret to have data in 0s
oauth.go:170: Found OAuth route oauth-custom-config-bnbbq.service.ci.hypershift.devcluster.openshift.com
oauth.go:192: Observed OAuth route oauth-custom-config-bnbbq.service.ci.hypershift.devcluster.openshift.com to be healthy
oauth.go:151: OAuth token retrieved successfully for user testuser
TestCreateClusterCustomConfig/Main/EnsureOpenshiftSamplesCapabilityDisabled
0s
TestCreateClusterCustomConfig/Main/EnsureSecretEncryptedUsingKMSV2
0s
TestCreateClusterCustomConfig/ValidateHostedCluster
0s
util.go:284: Successfully waited for kubeconfig to be published for HostedCluster e2e-clusters-dpjqv/custom-config-bnbbq in 1m30.025s
util.go:301: Successfully waited for kubeconfig secret to have data in 0s
eventually.go:104: Failed to get *v1.SelfSubjectReview: Post "https://api-custom-config-bnbbq.service.ci.hypershift.devcluster.openshift.com:443/apis/authentication.k8s.io/v1/selfsubjectreviews": dial tcp: lookup api-custom-config-bnbbq.service.ci.hypershift.devcluster.openshift.com on 172.30.0.10:53: no such host
eventually.go:104: Failed to get *v1.SelfSubjectReview: Post "https://api-custom-config-bnbbq.service.ci.hypershift.devcluster.openshift.com:443/apis/authentication.k8s.io/v1/selfsubjectreviews": dial tcp 100.50.110.237:443: i/o timeout
util.go:363: Successfully waited for a successful connection to the guest API server in 2m23.025s
util.go:565: Successfully waited for 2 nodes to become ready in 7m12s
util.go:598: Successfully waited for HostedCluster e2e-clusters-dpjqv/custom-config-bnbbq to rollout in 8m45s
util.go:2949: Successfully waited for HostedCluster e2e-clusters-dpjqv/custom-config-bnbbq to have valid conditions in 0s
TestCreateClusterCustomConfig/ValidateHostedCluster/EnsureGuestWebhooksValidated
0s
TestCreateClusterCustomConfig/ValidateHostedCluster/EnsureNoCrashingPods
0s
util.go:284: Successfully waited for kubeconfig to be published for HostedCluster e2e-clusters-dpjqv/custom-config-bnbbq in 0s
util.go:301: Successfully waited for kubeconfig secret to have data in 0s
TestCreateClusterCustomConfig/ValidateHostedCluster/EnsureNodeCommunication
0s
util.go:284: Successfully waited for kubeconfig to be published for HostedCluster e2e-clusters-dpjqv/custom-config-bnbbq in 0s
util.go:301: Successfully waited for kubeconfig secret to have data in 0s
TestCreateClusterCustomConfig/ValidateHostedCluster/EnsureNodeCountMatchesNodePoolReplicas
0s
util.go:565: Successfully waited for 2 nodes to become ready for NodePool e2e-clusters-dpjqv/custom-config-bnbbq-us-east-1a in 25ms
TestCreateClusterCustomConfig/ValidateHostedCluster/EnsureOAPIMountsTrustBundle
0s
TestCreateClusterCustomConfig/ValidateHostedCluster/ValidateConfigurationStatus
0s
util.go:4095: Successfully validated configuration authentication status consistency across HCP, HC, and guest cluster
TestCreateClusterPrivate
24m17.24s
hypershift_framework.go:430: Successfully created hostedcluster e2e-clusters-xk2md/private-x9czt in 50s
journals.go:245: Successfully copied machine journals to /logs/artifacts/TestCreateClusterPrivate/machine-journals
fixture.go:341: SUCCESS: found no remaining guest resources
hypershift_framework.go:491: Destroyed cluster. Namespace: e2e-clusters-xk2md, name: private-x9czt
hypershift_framework.go:446: archiving /logs/artifacts/TestCreateClusterPrivate/hostedcluster-private-x9czt to /logs/artifacts/TestCreateClusterPrivate/hostedcluster.tar.gz
util.go:284: Successfully waited for kubeconfig to be published for HostedCluster e2e-clusters-xk2md/private-x9czt in 2m48s
util.go:301: Successfully waited for kubeconfig secret to have data in 25ms
util.go:695: Successfully waited for NodePools for HostedCluster e2e-clusters-xk2md/private-x9czt to have all of their desired nodes in 9m0s
util.go:598: Successfully waited for HostedCluster e2e-clusters-xk2md/private-x9czt to rollout in 4m3s
util.go:2949: Successfully waited for HostedCluster e2e-clusters-xk2md/private-x9czt to have valid conditions in 0s
TestCreateClusterPrivate/EnsureHostedCluster
2.92s
TestCreateClusterPrivate/EnsureHostedCluster/ValidateMetricsAreExposed
220ms
TestCreateClusterPrivateWithRouteKAS
0s
hypershift_framework.go:430: Successfully created hostedcluster e2e-clusters-z4x9x/private-vdmfl in 49s
journals.go:245: Successfully copied machine journals to /logs/artifacts/TestCreateClusterPrivateWithRouteKAS/machine-journals
util.go:284: Successfully waited for kubeconfig to be published for HostedCluster e2e-clusters-z4x9x/private-vdmfl in 1m54s
util.go:301: Successfully waited for kubeconfig secret to have data in 0s
util.go:695: Successfully waited for NodePools for HostedCluster e2e-clusters-z4x9x/private-vdmfl to have all of their desired nodes in 9m36s
util.go:598: Successfully waited for HostedCluster e2e-clusters-z4x9x/private-vdmfl to rollout in 4m0s
util.go:2949: Successfully waited for HostedCluster e2e-clusters-z4x9x/private-vdmfl to have valid conditions in 0s
util.go:284: Successfully waited for kubeconfig to be published for HostedCluster e2e-clusters-z4x9x/private-vdmfl in 0s
util.go:301: Successfully waited for kubeconfig secret to have data in 0s
util.go:284: Successfully waited for kubeconfig to be published for HostedCluster e2e-clusters-z4x9x/private-vdmfl in 0s
util.go:301: Successfully waited for kubeconfig secret to have data in 0s
create_cluster_test.go:2909: Found guest kubeconfig host before switching endpoint access: https://api-private-vdmfl.service.ci.hypershift.devcluster.openshift.com:443
util.go:420: Waiting for guest kubeconfig host to resolve to public address
util.go:426: failed to resolve guest kubeconfig host: lookup api-private-vdmfl.service.ci.hypershift.devcluster.openshift.com on 172.30.0.10:53: no such host
util.go:426: failed to resolve guest kubeconfig host: lookup api-private-vdmfl.service.ci.hypershift.devcluster.openshift.com on 172.30.0.10:53: no such host
util.go:426: failed to resolve guest kubeconfig host: lookup api-private-vdmfl.service.ci.hypershift.devcluster.openshift.com on 172.30.0.10:53: no such host
util.go:426: failed to resolve guest kubeconfig host: lookup api-private-vdmfl.service.ci.hypershift.devcluster.openshift.com on 172.30.0.10:53: no such host
util.go:426: failed to resolve guest kubeconfig host: lookup api-private-vdmfl.service.ci.hypershift.devcluster.openshift.com on 172.30.0.10:53: no such host
util.go:426: failed to resolve guest kubeconfig host: lookup api-private-vdmfl.service.ci.hypershift.devcluster.openshift.com on 172.30.0.10:53: no such host
util.go:426: failed to resolve guest kubeconfig host: lookup api-private-vdmfl.service.ci.hypershift.devcluster.openshift.com on 172.30.0.10:53: no such host
util.go:426: failed to resolve guest kubeconfig host: lookup api-private-vdmfl.service.ci.hypershift.devcluster.openshift.com on 172.30.0.10:53: no such host
util.go:437: kubeconfig host now resolves to public address
util.go:284: Successfully waited for kubeconfig to be published for HostedCluster e2e-clusters-z4x9x/private-vdmfl in 0s
util.go:301: Successfully waited for kubeconfig secret to have data in 0s
create_cluster_test.go:2909: Found guest kubeconfig host before switching endpoint access: https://api-private-vdmfl.service.ci.hypershift.devcluster.openshift.com:443
util.go:420: Waiting for guest kubeconfig host to resolve to private address
util.go:426: failed to resolve guest kubeconfig host: lookup api-private-vdmfl.service.ci.hypershift.devcluster.openshift.com on 172.30.0.10:53: no such host
util.go:426: failed to resolve guest kubeconfig host: lookup api-private-vdmfl.service.ci.hypershift.devcluster.openshift.com on 172.30.0.10:53: no such host
util.go:432: kubeconfig host now resolves to private address
util.go:3224: Successfully waited for HostedCluster e2e-clusters-z4x9x/private-vdmfl to have valid Status.Payload in 0s
util.go:1181: Connecting to kubernetes endpoint on: https://172.20.0.1:6443
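The util.go:420-437 entries above poll DNS until the guest kubeconfig host resolves to an address of the expected visibility after switching endpoint access. A minimal sketch of the address-classification step is below, assuming RFC 1918 / loopback / link-local ranges count as "private"; this is an illustrative reconstruction, not the actual HyperShift helper.

```go
package main

import (
	"fmt"
	"net/netip"
)

// addrScope classifies a resolved IP as "private" (RFC 1918, loopback,
// or link-local) or "public". A real wait loop would resolve the
// kubeconfig host repeatedly and keep polling until the scope matches
// the cluster's configured endpoint access.
func addrScope(s string) (string, error) {
	addr, err := netip.ParseAddr(s)
	if err != nil {
		return "", err
	}
	if addr.IsPrivate() || addr.IsLoopback() || addr.IsLinkLocalUnicast() {
		return "private", nil
	}
	return "public", nil
}

func main() {
	// Example addresses: a VPC-internal IP and a public ELB-style IP.
	for _, ip := range []string{"10.0.12.7", "52.54.210.186"} {
		scope, err := addrScope(ip)
		if err != nil {
			panic(err)
		}
		fmt.Println(ip, scope)
	}
}
```

The repeated "no such host" lines before util.go:437/util.go:432 are the expected window while the Route 53 records for the new endpoint propagate.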
TestCreateClusterPrivateWithRouteKAS/EnsureHostedCluster
0s
TestCreateClusterPrivateWithRouteKAS/EnsureHostedCluster/EnsureAllContainersHavePullPolicyIfNotPresent
0s
TestCreateClusterPrivateWithRouteKAS/EnsureHostedCluster/EnsureAllContainersHaveTerminationMessagePolicyFallbackToLogsOnError
0s
TestCreateClusterPrivateWithRouteKAS/EnsureHostedCluster/EnsureAllRoutesUseHCPRouter
0s
TestCreateClusterPrivateWithRouteKAS/EnsureHostedCluster/EnsureHCPContainersHaveResourceRequests
0s
TestCreateClusterPrivateWithRouteKAS/EnsureHostedCluster/EnsureHCPPodsAffinitiesAndTolerations
0s
TestCreateClusterPrivateWithRouteKAS/EnsureHostedCluster/EnsureNetworkPolicies
0s
TestCreateClusterPrivateWithRouteKAS/EnsureHostedCluster/EnsureNetworkPolicies/EnsureComponentsHaveNeedManagementKASAccessLabel
0s
TestCreateClusterPrivateWithRouteKAS/EnsureHostedCluster/EnsureNetworkPolicies/EnsureLimitedEgressTrafficToManagementKAS
0s
util.go:1181: Connecting to kubernetes endpoint on: https://172.20.0.1:6443
TestCreateClusterPrivateWithRouteKAS/EnsureHostedCluster/EnsureNoPodsWithTooHighPriority
0s
TestCreateClusterPrivateWithRouteKAS/EnsureHostedCluster/EnsureNoRapidDeploymentRollouts
0s
TestCreateClusterPrivateWithRouteKAS/EnsureHostedCluster/EnsurePayloadArchSetCorrectly
0s
util.go:3224: Successfully waited for HostedCluster e2e-clusters-z4x9x/private-vdmfl to have valid Status.Payload in 0s
TestCreateClusterPrivateWithRouteKAS/EnsureHostedCluster/EnsurePodsWithEmptyDirPVsHaveSafeToEvictAnnotations
0s
TestCreateClusterPrivateWithRouteKAS/EnsureHostedCluster/EnsureReadOnlyRootFilesystem
0s
TestCreateClusterPrivateWithRouteKAS/EnsureHostedCluster/EnsureReadOnlyRootFilesystemTmpDirMount
0s
TestCreateClusterPrivateWithRouteKAS/EnsureHostedCluster/EnsureSATokenNotMountedUnlessNecessary
0s
TestCreateClusterPrivateWithRouteKAS/EnsureHostedCluster/NoticePreemptionOrFailedScheduling
0s
TestCreateClusterPrivateWithRouteKAS/EnsureHostedCluster/ValidateMetricsAreExposed
0s
TestCreateClusterPrivateWithRouteKAS/Main
0s
TestCreateClusterPrivateWithRouteKAS/Main/SwitchFromPrivateToPublic
0s
util.go:284: Successfully waited for kubeconfig to be published for HostedCluster e2e-clusters-z4x9x/private-vdmfl in 0s
util.go:301: Successfully waited for kubeconfig secret to have data in 0s
create_cluster_test.go:2909: Found guest kubeconfig host before switching endpoint access: https://api-private-vdmfl.service.ci.hypershift.devcluster.openshift.com:443
util.go:420: Waiting for guest kubeconfig host to resolve to public address
util.go:426: failed to resolve guest kubeconfig host: lookup api-private-vdmfl.service.ci.hypershift.devcluster.openshift.com on 172.30.0.10:53: no such host
util.go:426: failed to resolve guest kubeconfig host: lookup api-private-vdmfl.service.ci.hypershift.devcluster.openshift.com on 172.30.0.10:53: no such host
util.go:426: failed to resolve guest kubeconfig host: lookup api-private-vdmfl.service.ci.hypershift.devcluster.openshift.com on 172.30.0.10:53: no such host
util.go:426: failed to resolve guest kubeconfig host: lookup api-private-vdmfl.service.ci.hypershift.devcluster.openshift.com on 172.30.0.10:53: no such host
util.go:426: failed to resolve guest kubeconfig host: lookup api-private-vdmfl.service.ci.hypershift.devcluster.openshift.com on 172.30.0.10:53: no such host
util.go:426: failed to resolve guest kubeconfig host: lookup api-private-vdmfl.service.ci.hypershift.devcluster.openshift.com on 172.30.0.10:53: no such host
util.go:426: failed to resolve guest kubeconfig host: lookup api-private-vdmfl.service.ci.hypershift.devcluster.openshift.com on 172.30.0.10:53: no such host
util.go:426: failed to resolve guest kubeconfig host: lookup api-private-vdmfl.service.ci.hypershift.devcluster.openshift.com on 172.30.0.10:53: no such host
util.go:437: kubeconfig host now resolves to public address
TestCreateClusterPrivateWithRouteKAS/Main/SwitchFromPublicToPrivate
0s
util.go:284: Successfully waited for kubeconfig to be published for HostedCluster e2e-clusters-z4x9x/private-vdmfl in 0s
util.go:301: Successfully waited for kubeconfig secret to have data in 0s
create_cluster_test.go:2909: Found guest kubeconfig host before switching endpoint access: https://api-private-vdmfl.service.ci.hypershift.devcluster.openshift.com:443
util.go:420: Waiting for guest kubeconfig host to resolve to private address
util.go:426: failed to resolve guest kubeconfig host: lookup api-private-vdmfl.service.ci.hypershift.devcluster.openshift.com on 172.30.0.10:53: no such host
util.go:426: failed to resolve guest kubeconfig host: lookup api-private-vdmfl.service.ci.hypershift.devcluster.openshift.com on 172.30.0.10:53: no such host
util.go:432: kubeconfig host now resolves to private address
TestCreateClusterPrivateWithRouteKAS/ValidateHostedCluster
0s
util.go:284: Successfully waited for kubeconfig to be published for HostedCluster e2e-clusters-z4x9x/private-vdmfl in 1m54s
util.go:301: Successfully waited for kubeconfig secret to have data in 0s
util.go:695: Successfully waited for NodePools for HostedCluster e2e-clusters-z4x9x/private-vdmfl to have all of their desired nodes in 9m36s
util.go:598: Successfully waited for HostedCluster e2e-clusters-z4x9x/private-vdmfl to rollout in 4m0s
util.go:2949: Successfully waited for HostedCluster e2e-clusters-z4x9x/private-vdmfl to have valid conditions in 0s
TestCreateClusterPrivateWithRouteKAS/ValidateHostedCluster/EnsureNoCrashingPods
0s
util.go:284: Successfully waited for kubeconfig to be published for HostedCluster e2e-clusters-z4x9x/private-vdmfl in 0s
util.go:301: Successfully waited for kubeconfig secret to have data in 0s
TestCreateClusterPrivateWithRouteKAS/ValidateHostedCluster/EnsureOAPIMountsTrustBundle
0s
TestCreateClusterProxy
0s
hypershift_framework.go:430: Successfully created hostedcluster e2e-clusters-24pbp/proxy-k6t52 in 36s
journals.go:234: Error copying machine journals to artifacts directory: exit status 1
util.go:284: Successfully waited for kubeconfig to be published for HostedCluster e2e-clusters-24pbp/proxy-k6t52 in 1m45s
util.go:301: Successfully waited for kubeconfig secret to have data in 0s
eventually.go:104: Failed to get *v1.SelfSubjectReview: Post "https://api-proxy-k6t52.service.ci.hypershift.devcluster.openshift.com:443/apis/authentication.k8s.io/v1/selfsubjectreviews": dial tcp: lookup api-proxy-k6t52.service.ci.hypershift.devcluster.openshift.com on 172.30.0.10:53: no such host
eventually.go:104: Failed to get *v1.SelfSubjectReview: Post "https://api-proxy-k6t52.service.ci.hypershift.devcluster.openshift.com:443/apis/authentication.k8s.io/v1/selfsubjectreviews": dial tcp 3.221.29.103:443: i/o timeout
eventually.go:104: Failed to get *v1.SelfSubjectReview: Post "https://api-proxy-k6t52.service.ci.hypershift.devcluster.openshift.com:443/apis/authentication.k8s.io/v1/selfsubjectreviews": dial tcp 3.212.58.123:443: i/o timeout
util.go:363: Successfully waited for a successful connection to the guest API server in 1m33.025s
util.go:565: Successfully waited for 2 nodes to become ready in 8m9s
util.go:598: Successfully waited for HostedCluster e2e-clusters-24pbp/proxy-k6t52 to rollout in 6m36s
util.go:2949: Successfully waited for HostedCluster e2e-clusters-24pbp/proxy-k6t52 to have valid conditions in 0s
util.go:565: Successfully waited for 2 nodes to become ready for NodePool e2e-clusters-24pbp/proxy-k6t52-us-east-1a in 25ms
util.go:284: Successfully waited for kubeconfig to be published for HostedCluster e2e-clusters-24pbp/proxy-k6t52 in 0s
util.go:301: Successfully waited for kubeconfig secret to have data in 0s
util.go:284: Successfully waited for kubeconfig to be published for HostedCluster e2e-clusters-24pbp/proxy-k6t52 in 0s
util.go:301: Successfully waited for kubeconfig secret to have data in 0s
util.go:4095: Successfully validated configuration authentication status consistency across HCP, HC, and guest cluster
util.go:284: Successfully waited for kubeconfig to be published for HostedCluster e2e-clusters-24pbp/proxy-k6t52 in 0s
util.go:301: Successfully waited for kubeconfig secret to have data in 0s
util.go:363: Successfully waited for a successful connection to the guest API server in 0s
util.go:3224: Successfully waited for HostedCluster e2e-clusters-24pbp/proxy-k6t52 to have valid Status.Payload in 0s
util.go:1181: Connecting to kubernetes endpoint on: https://172.20.0.1:6443
util.go:2527: Checking that all ValidatingAdmissionPolicies are present
util.go:2553: Checking Denied KAS Requests for ValidatingAdmissionPolicies
util.go:2569: Checking ClusterOperator status modifications are allowed
TestCreateClusterProxy/EnsureHostedCluster
0s
util.go:284: Successfully waited for kubeconfig to be published for HostedCluster e2e-clusters-24pbp/proxy-k6t52 in 0s
util.go:301: Successfully waited for kubeconfig secret to have data in 0s
util.go:363: Successfully waited for a successful connection to the guest API server in 0s
TestCreateClusterProxy/EnsureHostedCluster/EnsureAllContainersHavePullPolicyIfNotPresent
0s
TestCreateClusterProxy/EnsureHostedCluster/EnsureAllContainersHaveTerminationMessagePolicyFallbackToLogsOnError
0s
TestCreateClusterProxy/EnsureHostedCluster/EnsureAllRoutesUseHCPRouter
0s
TestCreateClusterProxy/EnsureHostedCluster/EnsureHCPContainersHaveResourceRequests
0s
TestCreateClusterProxy/EnsureHostedCluster/EnsureHCPPodsAffinitiesAndTolerations
0s
TestCreateClusterProxy/EnsureHostedCluster/EnsureNetworkPolicies
0s
TestCreateClusterProxy/EnsureHostedCluster/EnsureNetworkPolicies/EnsureComponentsHaveNeedManagementKASAccessLabel
0s
TestCreateClusterProxy/EnsureHostedCluster/EnsureNetworkPolicies/EnsureLimitedEgressTrafficToManagementKAS
0s
util.go:1181: Connecting to kubernetes endpoint on: https://172.20.0.1:6443
TestCreateClusterProxy/EnsureHostedCluster/EnsureNoPodsWithTooHighPriority
0s
TestCreateClusterProxy/EnsureHostedCluster/EnsureNoRapidDeploymentRollouts
0s
TestCreateClusterProxy/EnsureHostedCluster/EnsurePayloadArchSetCorrectly
0s
util.go:3224: Successfully waited for HostedCluster e2e-clusters-24pbp/proxy-k6t52 to have valid Status.Payload in 0s
TestCreateClusterProxy/EnsureHostedCluster/EnsurePodsWithEmptyDirPVsHaveSafeToEvictAnnotations
0s
TestCreateClusterProxy/EnsureHostedCluster/EnsureReadOnlyRootFilesystem
0s
TestCreateClusterProxy/EnsureHostedCluster/EnsureReadOnlyRootFilesystemTmpDirMount
0s
TestCreateClusterProxy/EnsureHostedCluster/EnsureSATokenNotMountedUnlessNecessary
0s
TestCreateClusterProxy/EnsureHostedCluster/EnsureValidatingAdmissionPoliciesCheckDeniedRequests
0s
util.go:2553: Checking Denied KAS Requests for ValidatingAdmissionPolicies
TestCreateClusterProxy/EnsureHostedCluster/EnsureValidatingAdmissionPoliciesDontBlockStatusModifications
0s
util.go:2569: Checking ClusterOperator status modifications are allowed
TestCreateClusterProxy/EnsureHostedCluster/EnsureValidatingAdmissionPoliciesExists
0s
util.go:2527: Checking that all ValidatingAdmissionPolicies are present
TestCreateClusterProxy/EnsureHostedCluster/NoticePreemptionOrFailedScheduling
0s
TestCreateClusterProxy/EnsureHostedCluster/ValidateMetricsAreExposed
0s
TestCreateClusterProxy/ValidateHostedCluster
0s
util.go:284: Successfully waited for kubeconfig to be published for HostedCluster e2e-clusters-24pbp/proxy-k6t52 in 1m45s
util.go:301: Successfully waited for kubeconfig secret to have data in 0s
eventually.go:104: Failed to get *v1.SelfSubjectReview: Post "https://api-proxy-k6t52.service.ci.hypershift.devcluster.openshift.com:443/apis/authentication.k8s.io/v1/selfsubjectreviews": dial tcp: lookup api-proxy-k6t52.service.ci.hypershift.devcluster.openshift.com on 172.30.0.10:53: no such host
eventually.go:104: Failed to get *v1.SelfSubjectReview: Post "https://api-proxy-k6t52.service.ci.hypershift.devcluster.openshift.com:443/apis/authentication.k8s.io/v1/selfsubjectreviews": dial tcp 3.221.29.103:443: i/o timeout
eventually.go:104: Failed to get *v1.SelfSubjectReview: Post "https://api-proxy-k6t52.service.ci.hypershift.devcluster.openshift.com:443/apis/authentication.k8s.io/v1/selfsubjectreviews": dial tcp 3.212.58.123:443: i/o timeout
util.go:363: Successfully waited for a successful connection to the guest API server in 1m33.025s
util.go:565: Successfully waited for 2 nodes to become ready in 8m9s
util.go:598: Successfully waited for HostedCluster e2e-clusters-24pbp/proxy-k6t52 to rollout in 6m36s
util.go:2949: Successfully waited for HostedCluster e2e-clusters-24pbp/proxy-k6t52 to have valid conditions in 0s
TestCreateClusterProxy/ValidateHostedCluster/EnsureGuestWebhooksValidated
0s
TestCreateClusterProxy/ValidateHostedCluster/EnsureNoCrashingPods
0s
util.go:284: Successfully waited for kubeconfig to be published for HostedCluster e2e-clusters-24pbp/proxy-k6t52 in 0s
util.go:301: Successfully waited for kubeconfig secret to have data in 0s
TestCreateClusterProxy/ValidateHostedCluster/EnsureNodeCommunication
0s
util.go:284: Successfully waited for kubeconfig to be published for HostedCluster e2e-clusters-24pbp/proxy-k6t52 in 0s
util.go:301: Successfully waited for kubeconfig secret to have data in 0s
TestCreateClusterProxy/ValidateHostedCluster/EnsureNodeCountMatchesNodePoolReplicas
0s
util.go:565: Successfully waited for 2 nodes to become ready for NodePool e2e-clusters-24pbp/proxy-k6t52-us-east-1a in 25ms
TestCreateClusterProxy/ValidateHostedCluster/EnsureOAPIMountsTrustBundle
0s
TestCreateClusterProxy/ValidateHostedCluster/ValidateConfigurationStatus
0s
util.go:4095: Successfully validated configuration authentication status consistency across HCP, HC, and guest cluster
TestCreateClusterRequestServingIsolation
0s
requestserving.go:105: Created request serving nodepool clusters/4f3d44b3c98c7229cf50-mgmt-reqserving-gbt5q
requestserving.go:105: Created request serving nodepool clusters/4f3d44b3c98c7229cf50-mgmt-reqserving-xs5f8
requestserving.go:113: Created non request serving nodepool clusters/4f3d44b3c98c7229cf50-mgmt-non-reqserving-rjhs9
requestserving.go:113: Created non request serving nodepool clusters/4f3d44b3c98c7229cf50-mgmt-non-reqserving-9rk28
requestserving.go:113: Created non request serving nodepool clusters/4f3d44b3c98c7229cf50-mgmt-non-reqserving-wjfwz
util.go:565: Successfully waited for 1 nodes to become ready for NodePool clusters/4f3d44b3c98c7229cf50-mgmt-reqserving-gbt5q in 3m33s
util.go:565: Successfully waited for 1 nodes to become ready for NodePool clusters/4f3d44b3c98c7229cf50-mgmt-reqserving-xs5f8 in 57s
util.go:565: Successfully waited for 1 nodes to become ready for NodePool clusters/4f3d44b3c98c7229cf50-mgmt-non-reqserving-rjhs9 in 0s
util.go:565: Successfully waited for 1 nodes to become ready for NodePool clusters/4f3d44b3c98c7229cf50-mgmt-non-reqserving-9rk28 in 100ms
util.go:565: Successfully waited for 1 nodes to become ready for NodePool clusters/4f3d44b3c98c7229cf50-mgmt-non-reqserving-wjfwz in 42s
create_cluster_test.go:2670: Sufficient zones available for InfrastructureAvailabilityPolicy HighlyAvailable
hypershift_framework.go:430: Successfully created hostedcluster e2e-clusters-29smm/request-serving-isolation-pfcwc in 27s
util.go:284: Successfully waited for kubeconfig to be published for HostedCluster e2e-clusters-29smm/request-serving-isolation-pfcwc in 2m3s
util.go:301: Successfully waited for kubeconfig secret to have data in 0s
eventually.go:104: Failed to get *v1.SelfSubjectReview: Post "https://api-request-serving-isolation-pfcwc.service.ci.hypershift.devcluster.openshift.com:443/apis/authentication.k8s.io/v1/selfsubjectreviews": dial tcp: lookup api-request-serving-isolation-pfcwc.service.ci.hypershift.devcluster.openshift.com on 172.30.0.10:53: no such host
eventually.go:104: Failed to get *v1.SelfSubjectReview: Post "https://api-request-serving-isolation-pfcwc.service.ci.hypershift.devcluster.openshift.com:443/apis/authentication.k8s.io/v1/selfsubjectreviews": dial tcp 52.44.42.22:443: i/o timeout
util.go:363: Successfully waited for a successful connection to the guest API server in 1m6.15s
util.go:565: Successfully waited for 3 nodes to become ready in 8m18s
util.go:598: Successfully waited for HostedCluster e2e-clusters-29smm/request-serving-isolation-pfcwc to rollout in 6m3s
util.go:2949: Successfully waited for HostedCluster e2e-clusters-29smm/request-serving-isolation-pfcwc to have valid conditions in 30s
util.go:565: Successfully waited for 1 nodes to become ready for NodePool e2e-clusters-29smm/request-serving-isolation-pfcwc-us-east-1a in 25ms
util.go:565: Successfully waited for 1 nodes to become ready for NodePool e2e-clusters-29smm/request-serving-isolation-pfcwc-us-east-1b in 0s
util.go:565: Successfully waited for 1 nodes to become ready for NodePool e2e-clusters-29smm/request-serving-isolation-pfcwc-us-east-1c in 0s
util.go:284: Successfully waited for kubeconfig to be published for HostedCluster e2e-clusters-29smm/request-serving-isolation-pfcwc in 0s
util.go:301: Successfully waited for kubeconfig secret to have data in 0s
util.go:284: Successfully waited for kubeconfig to be published for HostedCluster e2e-clusters-29smm/request-serving-isolation-pfcwc in 0s
util.go:301: Successfully waited for kubeconfig secret to have data in 0s
util.go:4095: Successfully validated configuration authentication status consistency across HCP, HC, and guest cluster
util.go:284: Successfully waited for kubeconfig to be published for HostedCluster e2e-clusters-29smm/request-serving-isolation-pfcwc in 0s
util.go:301: Successfully waited for kubeconfig secret to have data in 0s
util.go:363: Successfully waited for a successful connection to the guest API server in 0s
util.go:284: Successfully waited for kubeconfig to be published for HostedCluster e2e-clusters-29smm/request-serving-isolation-pfcwc in 0s
util.go:301: Successfully waited for kubeconfig secret to have data in 0s
util.go:363: Successfully waited for a successful connection to the guest API server in 0s
util.go:3224: Successfully waited for HostedCluster e2e-clusters-29smm/request-serving-isolation-pfcwc to have valid Status.Payload in 0s
util.go:822: error: non-fatal, observed FailedScheduling or Preempted event: 0/8 nodes are available: 2 node(s) didn't match pod anti-affinity rules, 6 node(s) didn't match Pod's node affinity/selector. preemption: not eligible due to preemptionPolicy=Never.
util.go:822: error: non-fatal, observed FailedScheduling or Preempted event: 0/8 nodes are available: 2 node(s) didn't match pod anti-affinity rules, 6 node(s) didn't match Pod's node affinity/selector. preemption: not eligible due to preemptionPolicy=Never.
util.go:822: error: non-fatal, observed FailedScheduling or Preempted event: 0/8 nodes are available: 2 node(s) didn't match pod anti-affinity rules, 6 node(s) didn't match Pod's node affinity/selector. preemption: not eligible due to preemptionPolicy=Never.
util.go:822: error: non-fatal, observed FailedScheduling or Preempted event: 0/8 nodes are available: 2 node(s) had untolerated taint {hypershift.openshift.io/request-serving-component: true}, 3 node(s) didn't match Pod's node affinity/selector, 3 node(s) didn't match pod anti-affinity rules. preemption: not eligible due to preemptionPolicy=Never.
util.go:822: error: non-fatal, observed FailedScheduling or Preempted event: 0/8 nodes are available: 2 node(s) had untolerated taint {hypershift.openshift.io/request-serving-component: true}, 3 node(s) didn't match Pod's node affinity/selector, 3 node(s) didn't match pod anti-affinity rules. preemption: not eligible due to preemptionPolicy=Never.
util.go:822: error: non-fatal, observed FailedScheduling or Preempted event: 0/8 nodes are available: 2 node(s) didn't match pod anti-affinity rules, 6 node(s) didn't match Pod's node affinity/selector. preemption: not eligible due to preemptionPolicy=Never.
util.go:822: error: non-fatal, observed FailedScheduling or Preempted event: 0/8 nodes are available: 2 node(s) didn't match pod anti-affinity rules, 6 node(s) didn't match Pod's node affinity/selector. preemption: not eligible due to preemptionPolicy=Never.
util.go:822: error: non-fatal, observed FailedScheduling or Preempted event: 0/8 nodes are available: 2 node(s) didn't match pod anti-affinity rules, 6 node(s) didn't match Pod's node affinity/selector. preemption: not eligible due to preemptionPolicy=Never.
util.go:822: error: non-fatal, observed FailedScheduling or Preempted event: 0/8 nodes are available: 2 node(s) had untolerated taint {hypershift.openshift.io/request-serving-component: true}, 3 node(s) didn't match Pod's node affinity/selector, 3 node(s) didn't match pod anti-affinity rules. preemption: not eligible due to preemptionPolicy=Never.
util.go:822: error: non-fatal, observed FailedScheduling or Preempted event: 0/8 nodes are available: 2 node(s) had untolerated taint {hypershift.openshift.io/request-serving-component: true}, 3 node(s) didn't match Pod's node affinity/selector, 3 node(s) didn't match pod anti-affinity rules. preemption: not eligible due to preemptionPolicy=Never.
util.go:822: error: non-fatal, observed FailedScheduling or Preempted event: 0/8 nodes are available: 2 node(s) had untolerated taint {hypershift.openshift.io/request-serving-component: true}, 3 node(s) didn't match Pod's node affinity/selector, 3 node(s) didn't match pod anti-affinity rules. preemption: not eligible due to preemptionPolicy=Never.
util.go:1181: Connecting to kubernetes endpoint on: https://172.20.0.1:6443
util.go:2527: Checking that all ValidatingAdmissionPolicies are present
util.go:2553: Checking Denied KAS Requests for ValidatingAdmissionPolicies
util.go:2569: Checking ClusterOperator status modifications are allowed
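The util.go:822 entries above are FailedScheduling events that the suite reports as non-fatal rather than failing the test. A minimal sketch of that filtering step is below; `event` and `noticePreemptionOrFailedScheduling` are simplified stand-ins written for illustration, not the actual HyperShift types or helper.

```go
package main

import "fmt"

// event mirrors the two corev1.Event fields this check needs
// (a simplified stand-in for the real Kubernetes type).
type event struct {
	Reason  string
	Message string
}

// noticePreemptionOrFailedScheduling collects FailedScheduling and
// Preempted events and formats them as non-fatal log lines, leaving
// all other events alone.
func noticePreemptionOrFailedScheduling(events []event) []string {
	var out []string
	for _, e := range events {
		if e.Reason == "FailedScheduling" || e.Reason == "Preempted" {
			out = append(out, fmt.Sprintf(
				"error: non-fatal, observed FailedScheduling or Preempted event: %s",
				e.Message))
		}
	}
	return out
}

func main() {
	evs := []event{
		{Reason: "FailedScheduling", Message: "0/8 nodes are available: ..."},
		{Reason: "Scheduled", Message: "Successfully assigned pod"},
	}
	for _, line := range noticePreemptionOrFailedScheduling(evs) {
		fmt.Println(line) // only the FailedScheduling event is reported
	}
}
```

Treating these as warnings makes sense here: with request-serving isolation, anti-affinity rules and the hypershift.openshift.io/request-serving-component taint are expected to produce transient scheduling failures while nodes come up.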
TestCreateClusterRequestServingIsolation/EnsureHostedCluster
0s
util.go:284: Successfully waited for kubeconfig to be published for HostedCluster e2e-clusters-29smm/request-serving-isolation-pfcwc in 0s
util.go:301: Successfully waited for kubeconfig secret to have data in 0s
util.go:363: Successfully waited for a successful connection to the guest API server in 0s
TestCreateClusterRequestServingIsolation/EnsureHostedCluster/EnsureAllContainersHavePullPolicyIfNotPresent
0s
TestCreateClusterRequestServingIsolation/EnsureHostedCluster/EnsureAllContainersHaveTerminationMessagePolicyFallbackToLogsOnError
0s
TestCreateClusterRequestServingIsolation/EnsureHostedCluster/EnsureAllRoutesUseHCPRouter
0s
TestCreateClusterRequestServingIsolation/EnsureHostedCluster/EnsureHCPContainersHaveResourceRequests
0s
TestCreateClusterRequestServingIsolation/EnsureHostedCluster/EnsureHCPPodsAffinitiesAndTolerations
0s
TestCreateClusterRequestServingIsolation/EnsureHostedCluster/EnsureNetworkPolicies
0s
TestCreateClusterRequestServingIsolation/EnsureHostedCluster/EnsureNetworkPolicies/EnsureComponentsHaveNeedManagementKASAccessLabel
0s
TestCreateClusterRequestServingIsolation/EnsureHostedCluster/EnsureNetworkPolicies/EnsureLimitedEgressTrafficToManagementKAS
0s
util.go:1181: Connecting to kubernetes endpoint on: https://172.20.0.1:6443
TestCreateClusterRequestServingIsolation/EnsureHostedCluster/EnsureNoPodsWithTooHighPriority
0s
TestCreateClusterRequestServingIsolation/EnsureHostedCluster/EnsureNoRapidDeploymentRollouts
0s
TestCreateClusterRequestServingIsolation/EnsureHostedCluster/EnsurePayloadArchSetCorrectly
0s
util.go:3224: Successfully waited for HostedCluster e2e-clusters-29smm/request-serving-isolation-pfcwc to have valid Status.Payload in 0s
TestCreateClusterRequestServingIsolation/EnsureHostedCluster/EnsurePodsWithEmptyDirPVsHaveSafeToEvictAnnotations
0s
TestCreateClusterRequestServingIsolation/EnsureHostedCluster/EnsureReadOnlyRootFilesystem
0s
TestCreateClusterRequestServingIsolation/EnsureHostedCluster/EnsureReadOnlyRootFilesystemTmpDirMount
0s
TestCreateClusterRequestServingIsolation/EnsureHostedCluster/EnsureSATokenNotMountedUnlessNecessary
0s
TestCreateClusterRequestServingIsolation/EnsureHostedCluster/EnsureValidatingAdmissionPoliciesCheckDeniedRequests
0s
util.go:2553: Checking Denied KAS Requests for ValidatingAdmissionPolicies
TestCreateClusterRequestServingIsolation/EnsureHostedCluster/EnsureValidatingAdmissionPoliciesDontBlockStatusModifications
0s
util.go:2569: Checking ClusterOperator status modifications are allowed
TestCreateClusterRequestServingIsolation/EnsureHostedCluster/EnsureValidatingAdmissionPoliciesExists
0s
util.go:2527: Checking that all ValidatingAdmissionPolicies are present
TestCreateClusterRequestServingIsolation/EnsureHostedCluster/NoticePreemptionOrFailedScheduling
0s
util.go:822: error: non-fatal, observed FailedScheduling or Preempted event: 0/8 nodes are available: 2 node(s) didn't match pod anti-affinity rules, 6 node(s) didn't match Pod's node affinity/selector. preemption: not eligible due to preemptionPolicy=Never.
util.go:822: error: non-fatal, observed FailedScheduling or Preempted event: 0/8 nodes are available: 2 node(s) didn't match pod anti-affinity rules, 6 node(s) didn't match Pod's node affinity/selector. preemption: not eligible due to preemptionPolicy=Never.
util.go:822: error: non-fatal, observed FailedScheduling or Preempted event: 0/8 nodes are available: 2 node(s) didn't match pod anti-affinity rules, 6 node(s) didn't match Pod's node affinity/selector. preemption: not eligible due to preemptionPolicy=Never.
util.go:822: error: non-fatal, observed FailedScheduling or Preempted event: 0/8 nodes are available: 2 node(s) had untolerated taint {hypershift.openshift.io/request-serving-component: true}, 3 node(s) didn't match Pod's node affinity/selector, 3 node(s) didn't match pod anti-affinity rules. preemption: not eligible due to preemptionPolicy=Never.
util.go:822: error: non-fatal, observed FailedScheduling or Preempted event: 0/8 nodes are available: 2 node(s) had untolerated taint {hypershift.openshift.io/request-serving-component: true}, 3 node(s) didn't match Pod's node affinity/selector, 3 node(s) didn't match pod anti-affinity rules. preemption: not eligible due to preemptionPolicy=Never.
util.go:822: error: non-fatal, observed FailedScheduling or Preempted event: 0/8 nodes are available: 2 node(s) didn't match pod anti-affinity rules, 6 node(s) didn't match Pod's node affinity/selector. preemption: not eligible due to preemptionPolicy=Never.
util.go:822: error: non-fatal, observed FailedScheduling or Preempted event: 0/8 nodes are available: 2 node(s) didn't match pod anti-affinity rules, 6 node(s) didn't match Pod's node affinity/selector. preemption: not eligible due to preemptionPolicy=Never.
util.go:822: error: non-fatal, observed FailedScheduling or Preempted event: 0/8 nodes are available: 2 node(s) didn't match pod anti-affinity rules, 6 node(s) didn't match Pod's node affinity/selector. preemption: not eligible due to preemptionPolicy=Never.
util.go:822: error: non-fatal, observed FailedScheduling or Preempted event: 0/8 nodes are available: 2 node(s) had untolerated taint {hypershift.openshift.io/request-serving-component: true}, 3 node(s) didn't match Pod's node affinity/selector, 3 node(s) didn't match pod anti-affinity rules. preemption: not eligible due to preemptionPolicy=Never.
util.go:822: error: non-fatal, observed FailedScheduling or Preempted event: 0/8 nodes are available: 2 node(s) had untolerated taint {hypershift.openshift.io/request-serving-component: true}, 3 node(s) didn't match Pod's node affinity/selector, 3 node(s) didn't match pod anti-affinity rules. preemption: not eligible due to preemptionPolicy=Never.
util.go:822: error: non-fatal, observed FailedScheduling or Preempted event: 0/8 nodes are available: 2 node(s) had untolerated taint {hypershift.openshift.io/request-serving-component: true}, 3 node(s) didn't match Pod's node affinity/selector, 3 node(s) didn't match pod anti-affinity rules. preemption: not eligible due to preemptionPolicy=Never.
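The util.go:822 lines above show the test's event watcher downgrading FailedScheduling and Preempted events to non-fatal warnings while pods wait for the dedicated request-serving nodes to become schedulable. A minimal Go sketch of that classification idea (an illustrative helper, not HyperShift's actual watcher code; the reason set is an assumption for this example):

```go
package main

import "fmt"

// nonFatalReasons lists scheduler event reasons the watcher tolerates
// while capacity is still being provisioned. Illustrative set only.
var nonFatalReasons = map[string]bool{
	"FailedScheduling": true,
	"Preempted":        true,
}

// classifyEvent reports whether an observed event reason should be
// logged as non-fatal rather than failing the test run.
func classifyEvent(reason string) string {
	if nonFatalReasons[reason] {
		return "non-fatal"
	}
	return "fatal"
}

func main() {
	fmt.Println(classifyEvent("FailedScheduling")) // non-fatal
	fmt.Println(classifyEvent("BackOff"))          // fatal
}
```

Treating these events as warnings avoids flaking the suite while autoscaled or freshly tainted nodes are still joining.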
TestCreateClusterRequestServingIsolation/EnsureHostedCluster/ValidateMetricsAreExposed
0s
TestCreateClusterRequestServingIsolation/Main
0s
util.go:284: Successfully waited for kubeconfig to be published for HostedCluster e2e-clusters-29smm/request-serving-isolation-pfcwc in 0s
util.go:301: Successfully waited for kubeconfig secret to have data in 0s
util.go:363: Successfully waited for a successful connection to the guest API server in 0s
TestCreateClusterRequestServingIsolation/Main/EnsurePSANotPrivileged
0s
TestCreateClusterRequestServingIsolation/ValidateHostedCluster
0s
util.go:284: Successfully waited for kubeconfig to be published for HostedCluster e2e-clusters-29smm/request-serving-isolation-pfcwc in 2m3s
util.go:301: Successfully waited for kubeconfig secret to have data in 0s
eventually.go:104: Failed to get *v1.SelfSubjectReview: Post "https://api-request-serving-isolation-pfcwc.service.ci.hypershift.devcluster.openshift.com:443/apis/authentication.k8s.io/v1/selfsubjectreviews": dial tcp: lookup api-request-serving-isolation-pfcwc.service.ci.hypershift.devcluster.openshift.com on 172.30.0.10:53: no such host
eventually.go:104: Failed to get *v1.SelfSubjectReview: Post "https://api-request-serving-isolation-pfcwc.service.ci.hypershift.devcluster.openshift.com:443/apis/authentication.k8s.io/v1/selfsubjectreviews": dial tcp 52.44.42.22:443: i/o timeout
util.go:363: Successfully waited for a successful connection to the guest API server in 1m6.15s
util.go:565: Successfully waited for 3 nodes to become ready in 8m18s
util.go:598: Successfully waited for HostedCluster e2e-clusters-29smm/request-serving-isolation-pfcwc to rollout in 6m3s
util.go:2949: Successfully waited for HostedCluster e2e-clusters-29smm/request-serving-isolation-pfcwc to have valid conditions in 30s
TestCreateClusterRequestServingIsolation/ValidateHostedCluster/EnsureGuestWebhooksValidated
0s
TestCreateClusterRequestServingIsolation/ValidateHostedCluster/EnsureNoCrashingPods
0s
util.go:284: Successfully waited for kubeconfig to be published for HostedCluster e2e-clusters-29smm/request-serving-isolation-pfcwc in 0s
util.go:301: Successfully waited for kubeconfig secret to have data in 0s
TestCreateClusterRequestServingIsolation/ValidateHostedCluster/EnsureNodeCommunication
0s
util.go:284: Successfully waited for kubeconfig to be published for HostedCluster e2e-clusters-29smm/request-serving-isolation-pfcwc in 0s
util.go:301: Successfully waited for kubeconfig secret to have data in 0s
TestCreateClusterRequestServingIsolation/ValidateHostedCluster/EnsureNodeCountMatchesNodePoolReplicas
0s
util.go:565: Successfully waited for 1 nodes to become ready for NodePool e2e-clusters-29smm/request-serving-isolation-pfcwc-us-east-1a in 25ms
util.go:565: Successfully waited for 1 nodes to become ready for NodePool e2e-clusters-29smm/request-serving-isolation-pfcwc-us-east-1b in 0s
util.go:565: Successfully waited for 1 nodes to become ready for NodePool e2e-clusters-29smm/request-serving-isolation-pfcwc-us-east-1c in 0s
TestCreateClusterRequestServingIsolation/ValidateHostedCluster/EnsureOAPIMountsTrustBundle
0s
TestCreateClusterRequestServingIsolation/ValidateHostedCluster/ValidateConfigurationStatus
0s
util.go:4095: Successfully validated configuration authentication status consistency across HCP, HC, and guest cluster
TestNodePool
0s
hypershift_framework.go:430: Successfully created hostedcluster e2e-clusters-6s5sx/node-pool-lwnv4 in 26s
nodepool_test.go:150: tests only supported on platform KubeVirt
hypershift_framework.go:430: Successfully created hostedcluster e2e-clusters-5zhpc/node-pool-jxk2c in 28s
util.go:284: Successfully waited for kubeconfig to be published for HostedCluster e2e-clusters-6s5sx/node-pool-lwnv4 in 1m36.025s
util.go:301: Successfully waited for kubeconfig secret to have data in 0s
eventually.go:104: Failed to get *v1.SelfSubjectReview: Post "https://api-node-pool-lwnv4.service.ci.hypershift.devcluster.openshift.com:443/apis/authentication.k8s.io/v1/selfsubjectreviews": dial tcp: lookup api-node-pool-lwnv4.service.ci.hypershift.devcluster.openshift.com on 172.30.0.10:53: no such host
eventually.go:104: Failed to get *v1.SelfSubjectReview: Post "https://api-node-pool-lwnv4.service.ci.hypershift.devcluster.openshift.com:443/apis/authentication.k8s.io/v1/selfsubjectreviews": dial tcp 98.95.159.19:443: i/o timeout
eventually.go:104: Failed to get *v1.SelfSubjectReview: Post "https://api-node-pool-lwnv4.service.ci.hypershift.devcluster.openshift.com:443/apis/authentication.k8s.io/v1/selfsubjectreviews": dial tcp 44.198.239.251:443: i/o timeout
util.go:363: Successfully waited for a successful connection to the guest API server in 1m30.025s
util.go:565: Successfully waited for 0 nodes to become ready in 0s
util.go:2949: Successfully waited for HostedCluster e2e-clusters-6s5sx/node-pool-lwnv4 to have valid conditions in 2m30s
util.go:284: Successfully waited for kubeconfig to be published for HostedCluster e2e-clusters-5zhpc/node-pool-jxk2c in 1m42s
util.go:301: Successfully waited for kubeconfig secret to have data in 0s
eventually.go:104: Failed to get *v1.SelfSubjectReview: Post "https://api-node-pool-jxk2c.service.ci.hypershift.devcluster.openshift.com:443/apis/authentication.k8s.io/v1/selfsubjectreviews": dial tcp: lookup api-node-pool-jxk2c.service.ci.hypershift.devcluster.openshift.com on 172.30.0.10:53: no such host
eventually.go:104: Failed to get *v1.SelfSubjectReview: Post "https://api-node-pool-jxk2c.service.ci.hypershift.devcluster.openshift.com:443/apis/authentication.k8s.io/v1/selfsubjectreviews": dial tcp 3.218.79.172:443: i/o timeout
eventually.go:104: Failed to get *v1.SelfSubjectReview: Post "https://api-node-pool-jxk2c.service.ci.hypershift.devcluster.openshift.com:443/apis/authentication.k8s.io/v1/selfsubjectreviews": dial tcp 54.225.156.83:443: i/o timeout
eventually.go:104: Failed to get *v1.SelfSubjectReview: Post "https://api-node-pool-jxk2c.service.ci.hypershift.devcluster.openshift.com:443/apis/authentication.k8s.io/v1/selfsubjectreviews": dial tcp 54.225.156.83:443: connect: connection refused
eventually.go:104: Failed to get *v1.SelfSubjectReview: Post "https://api-node-pool-jxk2c.service.ci.hypershift.devcluster.openshift.com:443/apis/authentication.k8s.io/v1/selfsubjectreviews": dial tcp 3.218.79.172:443: connect: connection refused
eventually.go:104: Failed to get *v1.SelfSubjectReview: Post "https://api-node-pool-jxk2c.service.ci.hypershift.devcluster.openshift.com:443/apis/authentication.k8s.io/v1/selfsubjectreviews": dial tcp 54.225.156.83:443: connect: connection refused
eventually.go:104: Failed to get *v1.SelfSubjectReview: Post "https://api-node-pool-jxk2c.service.ci.hypershift.devcluster.openshift.com:443/apis/authentication.k8s.io/v1/selfsubjectreviews": dial tcp 18.205.217.52:443: connect: connection refused
eventually.go:104: Failed to get *v1.SelfSubjectReview: Post "https://api-node-pool-jxk2c.service.ci.hypershift.devcluster.openshift.com:443/apis/authentication.k8s.io/v1/selfsubjectreviews": dial tcp 3.218.79.172:443: connect: connection refused
util.go:363: Successfully waited for a successful connection to the guest API server in 2m9.025s
util.go:565: Successfully waited for 0 nodes to become ready in 25ms
util.go:2949: Successfully waited for HostedCluster e2e-clusters-5zhpc/node-pool-jxk2c to have valid conditions in 2m15s
util.go:565: Successfully waited for 0 nodes to become ready for NodePool e2e-clusters-6s5sx/node-pool-lwnv4-us-east-1b in 25ms
util.go:284: Successfully waited for kubeconfig to be published for HostedCluster e2e-clusters-6s5sx/node-pool-lwnv4 in 0s
util.go:301: Successfully waited for kubeconfig secret to have data in 0s
util.go:4095: Successfully validated configuration authentication status consistency across HCP, HC, and guest cluster
util.go:284: Successfully waited for kubeconfig to be published for HostedCluster e2e-clusters-6s5sx/node-pool-lwnv4 in 0s
util.go:301: Successfully waited for kubeconfig secret to have data in 0s
util.go:363: Successfully waited for a successful connection to the guest API server in 0s
nodepool_kms_root_volume_test.go:42: Starting test KMSRootVolumeTest
nodepool_kms_root_volume_test.go:54: retrieved KMS ARN: arn:aws:kms:us-east-1:820196288204:key/d3cdd9e0-3fd1-47a4-a559-72ae3672c5a6
util.go:565: Successfully waited for 1 nodes to become ready for NodePool e2e-clusters-6s5sx/node-pool-lwnv4-test-autorepair in 9m54s
nodepool_test.go:395: Successfully waited for NodePool e2e-clusters-6s5sx/node-pool-lwnv4-test-autorepair to have correct status in 0s
nodepool_autorepair_test.go:65: Terminating AWS Instance with a autorepair NodePool
nodepool_autorepair_test.go:70: Terminating AWS instance: i-053093dc280d96f8f
nodepool_machineconfig_test.go:54: Starting test NodePoolMachineconfigRolloutTest
util.go:565: Successfully waited for 1 nodes to become ready for NodePool e2e-clusters-6s5sx/node-pool-lwnv4-test-machineconfig in 9m57s
nodepool_test.go:395: Successfully waited for NodePool e2e-clusters-6s5sx/node-pool-lwnv4-test-machineconfig to have correct status in 0s
util.go:474: Successfully waited for NodePool e2e-clusters-6s5sx/node-pool-lwnv4-test-machineconfig to start config update in 15s
nodepool_nto_machineconfig_test.go:67: Starting test NTOMachineConfigRolloutTest
nodepool_nto_machineconfig_test.go:67: Starting test NTOMachineConfigRolloutTest
nodepool_upgrade_test.go:99: starting test NodePoolUpgradeTest
nodepool_upgrade_test.go:99: starting test NodePoolUpgradeTest
nodepool_kv_cache_image_test.go:42: test only supported on platform KubeVirt
util.go:565: Successfully waited for 2 nodes to become ready for NodePool e2e-clusters-6s5sx/node-pool-lwnv4-test-rolling-upgrade in 6m42s
nodepool_test.go:395: Successfully waited for NodePool e2e-clusters-6s5sx/node-pool-lwnv4-test-rolling-upgrade to have correct status in 0s
nodepool_rolling_upgrade_test.go:106: Successfully waited for NodePool e2e-clusters-6s5sx/node-pool-lwnv4-test-rolling-upgrade to start the rolling upgrade in 3s
nodepool_kv_qos_guaranteed_test.go:43: test only supported on platform KubeVirt
nodepool_kv_jsonpatch_test.go:42: test only supported on platform KubeVirt
nodepool_kv_nodeselector_test.go:48: test only supported on platform KubeVirt
nodepool_kv_multinet_test.go:36: test only supported on platform KubeVirt
nodepool_osp_advanced_test.go:53: Starting test OpenStackAdvancedTest
nodepool_osp_advanced_test.go:56: test only supported on platform OpenStack
nodepool_nto_performanceprofile_test.go:59: Starting test NTOPerformanceProfileTest
util.go:565: Successfully waited for 1 nodes to become ready for NodePool e2e-clusters-6s5sx/node-pool-lwnv4-test-ntoperformanceprofile in 6m45s
nodepool_test.go:395: Successfully waited for NodePool e2e-clusters-6s5sx/node-pool-lwnv4-test-ntoperformanceprofile to have correct status in 0s
nodepool_nto_performanceprofile_test.go:80: Entering NTO PerformanceProfile test
nodepool_nto_performanceprofile_test.go:110: Hosted control plane namespace is e2e-clusters-6s5sx-node-pool-lwnv4
nodepool_nto_performanceprofile_test.go:112: Successfully waited for performance profile ConfigMap to exist with correct name labels and annotations in 3s
nodepool_nto_performanceprofile_test.go:159: Successfully waited for performance profile status ConfigMap to exist in 0s
nodepool_nto_performanceprofile_test.go:201: Successfully waited for performance profile status to be reflected under the NodePool status in 0s
nodepool_nto_performanceprofile_test.go:254: Deleting configmap reference from nodepool ...
nodepool_nto_performanceprofile_test.go:261: Successfully waited for performance profile ConfigMap to be deleted in 3s
nodepool_nto_performanceprofile_test.go:280: Ending NTO PerformanceProfile test: OK
nodepool_test.go:395: Successfully waited for NodePool e2e-clusters-6s5sx/node-pool-lwnv4-test-ntoperformanceprofile to have correct status in 30s
nodepool_prev_release_test.go:33: Starting NodePoolPrevReleaseCreateTest.
nodepool_prev_release_test.go:33: Starting NodePoolPrevReleaseCreateTest.
util.go:565: Successfully waited for 1 nodes to become ready for NodePool e2e-clusters-6s5sx/node-pool-lwnv4-wvn7j in 12m48s
nodepool_test.go:395: Successfully waited for NodePool e2e-clusters-6s5sx/node-pool-lwnv4-wvn7j to have correct status in 0s
nodepool_prev_release_test.go:57: NodePoolPrevReleaseCreateTest tests the creation of a NodePool with previous OCP release.
nodepool_prev_release_test.go:59: Validating all Nodes have the synced labels and taints
nodepool_test.go:395: Successfully waited for NodePool e2e-clusters-6s5sx/node-pool-lwnv4-wvn7j to have correct status in 0s
nodepool_prev_release_test.go:33: Starting NodePoolPrevReleaseCreateTest.
nodepool_prev_release_test.go:33: Starting NodePoolPrevReleaseCreateTest.
nodepool_test.go:348: NodePool version is outside supported skew, validating condition only (skipping node readiness check)
nodepool_test.go:395: Successfully waited for NodePool e2e-clusters-6s5sx/node-pool-lwnv4-x5ctv to have correct status in 3s
nodepool_mirrorconfigs_test.go:60: Starting test MirrorConfigsTest
nodepool_imagetype_test.go:50: Starting test NodePoolImageTypeTest
util.go:565: Successfully waited for 0 nodes to become ready for NodePool e2e-clusters-5zhpc/node-pool-jxk2c-us-east-1c in 25ms
util.go:284: Successfully waited for kubeconfig to be published for HostedCluster e2e-clusters-5zhpc/node-pool-jxk2c in 0s
util.go:301: Successfully waited for kubeconfig secret to have data in 0s
util.go:4095: Successfully validated configuration authentication status consistency across HCP, HC, and guest cluster
util.go:284: Successfully waited for kubeconfig to be published for HostedCluster e2e-clusters-5zhpc/node-pool-jxk2c in 0s
util.go:301: Successfully waited for kubeconfig secret to have data in 0s
util.go:363: Successfully waited for a successful connection to the guest API server in 0s
TestNodePool/HostedCluster0
0s
hypershift_framework.go:430: Successfully created hostedcluster e2e-clusters-6s5sx/node-pool-lwnv4 in 26s
TestNodePool/HostedCluster0/Main
0s
util.go:284: Successfully waited for kubeconfig to be published for HostedCluster e2e-clusters-6s5sx/node-pool-lwnv4 in 0s
util.go:301: Successfully waited for kubeconfig secret to have data in 0s
util.go:363: Successfully waited for a successful connection to the guest API server in 0s
TestNodePool/HostedCluster0/Main/KubeVirtCacheTest
0s
nodepool_kv_cache_image_test.go:42: test only supported on platform KubeVirt
TestNodePool/HostedCluster0/Main/KubeVirtJsonPatchTest
0s
nodepool_kv_jsonpatch_test.go:42: test only supported on platform KubeVirt
TestNodePool/HostedCluster0/Main/KubeVirtNodeMultinetTest
0s
nodepool_kv_multinet_test.go:36: test only supported on platform KubeVirt
TestNodePool/HostedCluster0/Main/KubeVirtNodeSelectorTest
0s
nodepool_kv_nodeselector_test.go:48: test only supported on platform KubeVirt
TestNodePool/HostedCluster0/Main/KubeVirtQoSClassGuaranteedTest
0s
nodepool_kv_qos_guaranteed_test.go:43: test only supported on platform KubeVirt
TestNodePool/HostedCluster0/Main/OpenStackAdvancedTest
0s
nodepool_osp_advanced_test.go:53: Starting test OpenStackAdvancedTest
nodepool_osp_advanced_test.go:56: test only supported on platform OpenStack
TestNodePool/HostedCluster0/Main/TestImageTypes
0s
nodepool_imagetype_test.go:50: Starting test NodePoolImageTypeTest
TestNodePool/HostedCluster0/Main/TestKMSRootVolumeEncryption
0s
nodepool_kms_root_volume_test.go:42: Starting test KMSRootVolumeTest
nodepool_kms_root_volume_test.go:54: retrieved KMS ARN: arn:aws:kms:us-east-1:820196288204:key/d3cdd9e0-3fd1-47a4-a559-72ae3672c5a6
TestNodePool/HostedCluster0/Main/TestMirrorConfigs
0s
nodepool_mirrorconfigs_test.go:60: Starting test MirrorConfigsTest
TestNodePool/HostedCluster0/Main/TestNTOMachineConfigAppliedInPlace
0s
nodepool_nto_machineconfig_test.go:67: Starting test NTOMachineConfigRolloutTest
TestNodePool/HostedCluster0/Main/TestNTOMachineConfigGetsRolledOut
0s
nodepool_nto_machineconfig_test.go:67: Starting test NTOMachineConfigRolloutTest
TestNodePool/HostedCluster0/Main/TestNTOPerformanceProfile
0s
nodepool_nto_performanceprofile_test.go:59: Starting test NTOPerformanceProfileTest
util.go:565: Successfully waited for 1 nodes to become ready for NodePool e2e-clusters-6s5sx/node-pool-lwnv4-test-ntoperformanceprofile in 6m45s
nodepool_test.go:395: Successfully waited for NodePool e2e-clusters-6s5sx/node-pool-lwnv4-test-ntoperformanceprofile to have correct status in 0s
nodepool_nto_performanceprofile_test.go:80: Entering NTO PerformanceProfile test
nodepool_nto_performanceprofile_test.go:110: Hosted control plane namespace is e2e-clusters-6s5sx-node-pool-lwnv4
nodepool_nto_performanceprofile_test.go:112: Successfully waited for performance profile ConfigMap to exist with correct name labels and annotations in 3s
nodepool_nto_performanceprofile_test.go:159: Successfully waited for performance profile status ConfigMap to exist in 0s
nodepool_nto_performanceprofile_test.go:201: Successfully waited for performance profile status to be reflected under the NodePool status in 0s
nodepool_nto_performanceprofile_test.go:254: Deleting configmap reference from nodepool ...
nodepool_nto_performanceprofile_test.go:261: Successfully waited for performance profile ConfigMap to be deleted in 3s
nodepool_nto_performanceprofile_test.go:280: Ending NTO PerformanceProfile test: OK
nodepool_test.go:395: Successfully waited for NodePool e2e-clusters-6s5sx/node-pool-lwnv4-test-ntoperformanceprofile to have correct status in 30s
TestNodePool/HostedCluster0/Main/TestNodePoolAutoRepair
0s
util.go:565: Successfully waited for 1 nodes to become ready for NodePool e2e-clusters-6s5sx/node-pool-lwnv4-test-autorepair in 9m54s
nodepool_test.go:395: Successfully waited for NodePool e2e-clusters-6s5sx/node-pool-lwnv4-test-autorepair to have correct status in 0s
nodepool_autorepair_test.go:65: Terminating AWS Instance with a autorepair NodePool
nodepool_autorepair_test.go:70: Terminating AWS instance: i-053093dc280d96f8f
TestNodePool/HostedCluster0/Main/TestNodePoolDay2Tags
0s
TestNodePool/HostedCluster0/Main/TestNodePoolInPlaceUpgrade
0s
nodepool_upgrade_test.go:99: starting test NodePoolUpgradeTest
TestNodePool/HostedCluster0/Main/TestNodePoolPrevReleaseN1
0s
nodepool_prev_release_test.go:33: Starting NodePoolPrevReleaseCreateTest.
TestNodePool/HostedCluster0/Main/TestNodePoolPrevReleaseN2
0s
nodepool_prev_release_test.go:33: Starting NodePoolPrevReleaseCreateTest.
util.go:565: Successfully waited for 1 nodes to become ready for NodePool e2e-clusters-6s5sx/node-pool-lwnv4-wvn7j in 12m48s
nodepool_test.go:395: Successfully waited for NodePool e2e-clusters-6s5sx/node-pool-lwnv4-wvn7j to have correct status in 0s
nodepool_prev_release_test.go:57: NodePoolPrevReleaseCreateTest tests the creation of a NodePool with previous OCP release.
nodepool_prev_release_test.go:59: Validating all Nodes have the synced labels and taints
nodepool_test.go:395: Successfully waited for NodePool e2e-clusters-6s5sx/node-pool-lwnv4-wvn7j to have correct status in 0s
TestNodePool/HostedCluster0/Main/TestNodePoolPrevReleaseN3
0s
nodepool_prev_release_test.go:33: Starting NodePoolPrevReleaseCreateTest.
TestNodePool/HostedCluster0/Main/TestNodePoolPrevReleaseN4
0s
nodepool_prev_release_test.go:33: Starting NodePoolPrevReleaseCreateTest.
nodepool_test.go:348: NodePool version is outside supported skew, validating condition only (skipping node readiness check)
nodepool_test.go:395: Successfully waited for NodePool e2e-clusters-6s5sx/node-pool-lwnv4-x5ctv to have correct status in 3s
TestNodePool/HostedCluster0/Main/TestNodePoolReplaceUpgrade
0s
nodepool_upgrade_test.go:99: starting test NodePoolUpgradeTest
TestNodePool/HostedCluster0/Main/TestNodepoolMachineconfigGetsRolledout
0s
nodepool_machineconfig_test.go:54: Starting test NodePoolMachineconfigRolloutTest
util.go:565: Successfully waited for 1 nodes to become ready for NodePool e2e-clusters-6s5sx/node-pool-lwnv4-test-machineconfig in 9m57s
nodepool_test.go:395: Successfully waited for NodePool e2e-clusters-6s5sx/node-pool-lwnv4-test-machineconfig to have correct status in 0s
util.go:474: Successfully waited for NodePool e2e-clusters-6s5sx/node-pool-lwnv4-test-machineconfig to start config update in 15s
TestNodePool/HostedCluster0/Main/TestRollingUpgrade
0s
util.go:565: Successfully waited for 2 nodes to become ready for NodePool e2e-clusters-6s5sx/node-pool-lwnv4-test-rolling-upgrade in 6m42s
nodepool_test.go:395: Successfully waited for NodePool e2e-clusters-6s5sx/node-pool-lwnv4-test-rolling-upgrade to have correct status in 0s
nodepool_rolling_upgrade_test.go:106: Successfully waited for NodePool e2e-clusters-6s5sx/node-pool-lwnv4-test-rolling-upgrade to start the rolling upgrade in 3s
TestNodePool/HostedCluster0/ValidateHostedCluster
0s
util.go:284: Successfully waited for kubeconfig to be published for HostedCluster e2e-clusters-6s5sx/node-pool-lwnv4 in 1m36.025s
util.go:301: Successfully waited for kubeconfig secret to have data in 0s
eventually.go:104: Failed to get *v1.SelfSubjectReview: Post "https://api-node-pool-lwnv4.service.ci.hypershift.devcluster.openshift.com:443/apis/authentication.k8s.io/v1/selfsubjectreviews": dial tcp: lookup api-node-pool-lwnv4.service.ci.hypershift.devcluster.openshift.com on 172.30.0.10:53: no such host
eventually.go:104: Failed to get *v1.SelfSubjectReview: Post "https://api-node-pool-lwnv4.service.ci.hypershift.devcluster.openshift.com:443/apis/authentication.k8s.io/v1/selfsubjectreviews": dial tcp 98.95.159.19:443: i/o timeout
eventually.go:104: Failed to get *v1.SelfSubjectReview: Post "https://api-node-pool-lwnv4.service.ci.hypershift.devcluster.openshift.com:443/apis/authentication.k8s.io/v1/selfsubjectreviews": dial tcp 44.198.239.251:443: i/o timeout
util.go:363: Successfully waited for a successful connection to the guest API server in 1m30.025s
util.go:565: Successfully waited for 0 nodes to become ready in 0s
util.go:2949: Successfully waited for HostedCluster e2e-clusters-6s5sx/node-pool-lwnv4 to have valid conditions in 2m30s
TestNodePool/HostedCluster0/ValidateHostedCluster/EnsureGuestWebhooksValidated
0s
TestNodePool/HostedCluster0/ValidateHostedCluster/EnsureNoCrashingPods
0s
util.go:284: Successfully waited for kubeconfig to be published for HostedCluster e2e-clusters-6s5sx/node-pool-lwnv4 in 0s
util.go:301: Successfully waited for kubeconfig secret to have data in 0s
TestNodePool/HostedCluster0/ValidateHostedCluster/EnsureNodeCountMatchesNodePoolReplicas
0s
util.go:565: Successfully waited for 0 nodes to become ready for NodePool e2e-clusters-6s5sx/node-pool-lwnv4-us-east-1b in 25ms
TestNodePool/HostedCluster0/ValidateHostedCluster/EnsureOAPIMountsTrustBundle
0s
TestNodePool/HostedCluster0/ValidateHostedCluster/ValidateConfigurationStatus
0s
util.go:4095: Successfully validated configuration authentication status consistency across HCP, HC, and guest cluster
TestNodePool/HostedCluster1
0s
nodepool_test.go:150: tests only supported on platform KubeVirt
TestNodePool/HostedCluster2
0s
hypershift_framework.go:430: Successfully created hostedcluster e2e-clusters-5zhpc/node-pool-jxk2c in 28s
TestNodePool/HostedCluster2/Main
0s
util.go:284: Successfully waited for kubeconfig to be published for HostedCluster e2e-clusters-5zhpc/node-pool-jxk2c in 0s
util.go:301: Successfully waited for kubeconfig secret to have data in 0s
util.go:363: Successfully waited for a successful connection to the guest API server in 0s
TestNodePool/HostedCluster2/Main/TestAdditionalTrustBundlePropagation
0s
TestNodePool/HostedCluster2/ValidateHostedCluster
0s
util.go:284: Successfully waited for kubeconfig to be published for HostedCluster e2e-clusters-5zhpc/node-pool-jxk2c in 1m42s
util.go:301: Successfully waited for kubeconfig secret to have data in 0s
eventually.go:104: Failed to get *v1.SelfSubjectReview: Post "https://api-node-pool-jxk2c.service.ci.hypershift.devcluster.openshift.com:443/apis/authentication.k8s.io/v1/selfsubjectreviews": dial tcp: lookup api-node-pool-jxk2c.service.ci.hypershift.devcluster.openshift.com on 172.30.0.10:53: no such host
eventually.go:104: Failed to get *v1.SelfSubjectReview: Post "https://api-node-pool-jxk2c.service.ci.hypershift.devcluster.openshift.com:443/apis/authentication.k8s.io/v1/selfsubjectreviews": dial tcp 3.218.79.172:443: i/o timeout
eventually.go:104: Failed to get *v1.SelfSubjectReview: Post "https://api-node-pool-jxk2c.service.ci.hypershift.devcluster.openshift.com:443/apis/authentication.k8s.io/v1/selfsubjectreviews": dial tcp 54.225.156.83:443: i/o timeout
eventually.go:104: Failed to get *v1.SelfSubjectReview: Post "https://api-node-pool-jxk2c.service.ci.hypershift.devcluster.openshift.com:443/apis/authentication.k8s.io/v1/selfsubjectreviews": dial tcp 54.225.156.83:443: connect: connection refused
eventually.go:104: Failed to get *v1.SelfSubjectReview: Post "https://api-node-pool-jxk2c.service.ci.hypershift.devcluster.openshift.com:443/apis/authentication.k8s.io/v1/selfsubjectreviews": dial tcp 3.218.79.172:443: connect: connection refused
eventually.go:104: Failed to get *v1.SelfSubjectReview: Post "https://api-node-pool-jxk2c.service.ci.hypershift.devcluster.openshift.com:443/apis/authentication.k8s.io/v1/selfsubjectreviews": dial tcp 54.225.156.83:443: connect: connection refused
eventually.go:104: Failed to get *v1.SelfSubjectReview: Post "https://api-node-pool-jxk2c.service.ci.hypershift.devcluster.openshift.com:443/apis/authentication.k8s.io/v1/selfsubjectreviews": dial tcp 18.205.217.52:443: connect: connection refused
eventually.go:104: Failed to get *v1.SelfSubjectReview: Post "https://api-node-pool-jxk2c.service.ci.hypershift.devcluster.openshift.com:443/apis/authentication.k8s.io/v1/selfsubjectreviews": dial tcp 3.218.79.172:443: connect: connection refused
util.go:363: Successfully waited for a successful connection to the guest API server in 2m9.025s
util.go:565: Successfully waited for 0 nodes to become ready in 25ms
util.go:2949: Successfully waited for HostedCluster e2e-clusters-5zhpc/node-pool-jxk2c to have valid conditions in 2m15s
TestNodePool/HostedCluster2/ValidateHostedCluster/EnsureGuestWebhooksValidated
0s
TestNodePool/HostedCluster2/ValidateHostedCluster/EnsureNoCrashingPods
0s
util.go:284: Successfully waited for kubeconfig to be published for HostedCluster e2e-clusters-5zhpc/node-pool-jxk2c in 0s
util.go:301: Successfully waited for kubeconfig secret to have data in 0s
TestNodePool/HostedCluster2/ValidateHostedCluster/EnsureNodeCountMatchesNodePoolReplicas
0s
util.go:565: Successfully waited for 0 nodes to become ready for NodePool e2e-clusters-5zhpc/node-pool-jxk2c-us-east-1c in 25ms
TestNodePool/HostedCluster2/ValidateHostedCluster/EnsureOAPIMountsTrustBundle
0s
TestNodePool/HostedCluster2/ValidateHostedCluster/ValidateConfigurationStatus
0s
util.go:4095: Successfully validated configuration authentication status consistency across HCP, HC, and guest cluster
TestUpgradeControlPlane
0s
control_plane_upgrade_test.go:25: Starting control plane upgrade test. FromImage: registry.build01.ci.openshift.org/ci-op-gll2w6iq/release@sha256:1791cec1bd6882825904d2d2c135d668576192bfe610f267741116db9795d984, toImage: registry.build01.ci.openshift.org/ci-op-gll2w6iq/release@sha256:45b9a6649d7f4418c1b97767dc4cd2853b7d412de2db90a974eb319999aa510e
hypershift_framework.go:430: Successfully created hostedcluster e2e-clusters-czjrc/control-plane-upgrade-frbgn in 22s
util.go:284: Successfully waited for kubeconfig to be published for HostedCluster e2e-clusters-czjrc/control-plane-upgrade-frbgn in 2m0.025s
util.go:301: Successfully waited for kubeconfig secret to have data in 0s
eventually.go:104: Failed to get *v1.SelfSubjectReview: Post "https://api-control-plane-upgrade-frbgn.service.ci.hypershift.devcluster.openshift.com:443/apis/authentication.k8s.io/v1/selfsubjectreviews": dial tcp: lookup api-control-plane-upgrade-frbgn.service.ci.hypershift.devcluster.openshift.com on 172.30.0.10:53: no such host
eventually.go:104: Failed to get *v1.SelfSubjectReview: Post "https://api-control-plane-upgrade-frbgn.service.ci.hypershift.devcluster.openshift.com:443/apis/authentication.k8s.io/v1/selfsubjectreviews": dial tcp 52.6.53.165:443: i/o timeout
eventually.go:104: Failed to get *v1.SelfSubjectReview: Post "https://api-control-plane-upgrade-frbgn.service.ci.hypershift.devcluster.openshift.com:443/apis/authentication.k8s.io/v1/selfsubjectreviews": dial tcp 35.168.172.37:443: i/o timeout
eventually.go:104: Failed to get *v1.SelfSubjectReview: Post "https://api-control-plane-upgrade-frbgn.service.ci.hypershift.devcluster.openshift.com:443/apis/authentication.k8s.io/v1/selfsubjectreviews": dial tcp 34.239.60.34:443: connect: connection refused
eventually.go:104: Failed to get *v1.SelfSubjectReview: Post "https://api-control-plane-upgrade-frbgn.service.ci.hypershift.devcluster.openshift.com:443/apis/authentication.k8s.io/v1/selfsubjectreviews": dial tcp 52.6.53.165:443: i/o timeout
eventually.go:104: Failed to get *v1.SelfSubjectReview: Post "https://api-control-plane-upgrade-frbgn.service.ci.hypershift.devcluster.openshift.com:443/apis/authentication.k8s.io/v1/selfsubjectreviews": dial tcp 52.6.53.165:443: connect: connection refused
eventually.go:104: Failed to get *v1.SelfSubjectReview: Post "https://api-control-plane-upgrade-frbgn.service.ci.hypershift.devcluster.openshift.com:443/apis/authentication.k8s.io/v1/selfsubjectreviews": dial tcp 34.239.60.34:443: connect: connection refused
eventually.go:104: Failed to get *v1.SelfSubjectReview: Post "https://api-control-plane-upgrade-frbgn.service.ci.hypershift.devcluster.openshift.com:443/apis/authentication.k8s.io/v1/selfsubjectreviews": dial tcp 52.6.53.165:443: connect: connection refused
eventually.go:104: Failed to get *v1.SelfSubjectReview: Post "https://api-control-plane-upgrade-frbgn.service.ci.hypershift.devcluster.openshift.com:443/apis/authentication.k8s.io/v1/selfsubjectreviews": dial tcp 35.168.172.37:443: connect: connection refused
eventually.go:104: Failed to get *v1.SelfSubjectReview: Post "https://api-control-plane-upgrade-frbgn.service.ci.hypershift.devcluster.openshift.com:443/apis/authentication.k8s.io/v1/selfsubjectreviews": dial tcp 52.6.53.165:443: connect: connection refused
eventually.go:104: Failed to get *v1.SelfSubjectReview: Post "https://api-control-plane-upgrade-frbgn.service.ci.hypershift.devcluster.openshift.com:443/apis/authentication.k8s.io/v1/selfsubjectreviews": dial tcp 35.168.172.37:443: connect: connection refused
eventually.go:104: Failed to get *v1.SelfSubjectReview: Post "https://api-control-plane-upgrade-frbgn.service.ci.hypershift.devcluster.openshift.com:443/apis/authentication.k8s.io/v1/selfsubjectreviews": dial tcp 52.6.53.165:443: connect: connection refused
eventually.go:104: Failed to get *v1.SelfSubjectReview: Post "https://api-control-plane-upgrade-frbgn.service.ci.hypershift.devcluster.openshift.com:443/apis/authentication.k8s.io/v1/selfsubjectreviews": dial tcp 34.239.60.34:443: connect: connection refused
eventually.go:104: Failed to get *v1.SelfSubjectReview: Post "https://api-control-plane-upgrade-frbgn.service.ci.hypershift.devcluster.openshift.com:443/apis/authentication.k8s.io/v1/selfsubjectreviews": dial tcp 52.6.53.165:443: connect: connection refused
eventually.go:104: Failed to get *v1.SelfSubjectReview: Post "https://api-control-plane-upgrade-frbgn.service.ci.hypershift.devcluster.openshift.com:443/apis/authentication.k8s.io/v1/selfsubjectreviews": dial tcp 35.168.172.37:443: connect: connection refused
eventually.go:104: Failed to get *v1.SelfSubjectReview: Post "https://api-control-plane-upgrade-frbgn.service.ci.hypershift.devcluster.openshift.com:443/apis/authentication.k8s.io/v1/selfsubjectreviews": dial tcp 52.6.53.165:443: connect: connection refused
eventually.go:104: Failed to get *v1.SelfSubjectReview: Post "https://api-control-plane-upgrade-frbgn.service.ci.hypershift.devcluster.openshift.com:443/apis/authentication.k8s.io/v1/selfsubjectreviews": dial tcp 35.168.172.37:443: connect: connection refused
eventually.go:104: Failed to get *v1.SelfSubjectReview: Post "https://api-control-plane-upgrade-frbgn.service.ci.hypershift.devcluster.openshift.com:443/apis/authentication.k8s.io/v1/selfsubjectreviews": dial tcp 52.6.53.165:443: connect: connection refused
eventually.go:104: Failed to get *v1.SelfSubjectReview: Post "https://api-control-plane-upgrade-frbgn.service.ci.hypershift.devcluster.openshift.com:443/apis/authentication.k8s.io/v1/selfsubjectreviews": dial tcp 34.239.60.34:443: connect: connection refused
eventually.go:104: Failed to get *v1.SelfSubjectReview: Post "https://api-control-plane-upgrade-frbgn.service.ci.hypershift.devcluster.openshift.com:443/apis/authentication.k8s.io/v1/selfsubjectreviews": dial tcp 35.168.172.37:443: connect: connection refused
util.go:363: Successfully waited for a successful connection to the guest API server in 3m15.025s
util.go:565: Successfully waited for 2 nodes to become ready in 7m21s
util.go:598: Successfully waited for HostedCluster e2e-clusters-czjrc/control-plane-upgrade-frbgn to rollout in 3m42s
util.go:2949: Successfully waited for HostedCluster e2e-clusters-czjrc/control-plane-upgrade-frbgn to have valid conditions in 0s
util.go:565: Successfully waited for 2 nodes to become ready for NodePool e2e-clusters-czjrc/control-plane-upgrade-frbgn-us-east-1b in 25ms
util.go:284: Successfully waited for kubeconfig to be published for HostedCluster e2e-clusters-czjrc/control-plane-upgrade-frbgn in 0s
util.go:301: Successfully waited for kubeconfig secret to have data in 0s
util.go:284: Successfully waited for kubeconfig to be published for HostedCluster e2e-clusters-czjrc/control-plane-upgrade-frbgn in 0s
util.go:301: Successfully waited for kubeconfig secret to have data in 0s
util.go:4095: Successfully validated configuration authentication status consistency across HCP, HC, and guest cluster
util.go:284: Successfully waited for kubeconfig to be published for HostedCluster e2e-clusters-czjrc/control-plane-upgrade-frbgn in 0s
util.go:301: Successfully waited for kubeconfig secret to have data in 0s
util.go:363: Successfully waited for a successful connection to the guest API server in 0s
control_plane_upgrade_test.go:52: Updating cluster image. Image: registry.build01.ci.openshift.org/ci-op-gll2w6iq/release@sha256:45b9a6649d7f4418c1b97767dc4cd2853b7d412de2db90a974eb319999aa510e
TestUpgradeControlPlane/Main
0s
util.go:284: Successfully waited for kubeconfig to be published for HostedCluster e2e-clusters-czjrc/control-plane-upgrade-frbgn in 0s
util.go:301: Successfully waited for kubeconfig secret to have data in 0s
util.go:363: Successfully waited for a successful connection to the guest API server in 0s
control_plane_upgrade_test.go:52: Updating cluster image. Image: registry.build01.ci.openshift.org/ci-op-gll2w6iq/release@sha256:45b9a6649d7f4418c1b97767dc4cd2853b7d412de2db90a974eb319999aa510e
TestUpgradeControlPlane/Main/Wait_for_control_plane_components_to_complete_rollout
0s
TestUpgradeControlPlane/ValidateHostedCluster
0s
util.go:284: Successfully waited for kubeconfig to be published for HostedCluster e2e-clusters-czjrc/control-plane-upgrade-frbgn in 2m0.025s
util.go:301: Successfully waited for kubeconfig secret to have data in 0s
eventually.go:104: Failed to get *v1.SelfSubjectReview: Post "https://api-control-plane-upgrade-frbgn.service.ci.hypershift.devcluster.openshift.com:443/apis/authentication.k8s.io/v1/selfsubjectreviews": dial tcp: lookup api-control-plane-upgrade-frbgn.service.ci.hypershift.devcluster.openshift.com on 172.30.0.10:53: no such host
eventually.go:104: Failed to get *v1.SelfSubjectReview: Post "https://api-control-plane-upgrade-frbgn.service.ci.hypershift.devcluster.openshift.com:443/apis/authentication.k8s.io/v1/selfsubjectreviews": dial tcp 52.6.53.165:443: i/o timeout
eventually.go:104: Failed to get *v1.SelfSubjectReview: Post "https://api-control-plane-upgrade-frbgn.service.ci.hypershift.devcluster.openshift.com:443/apis/authentication.k8s.io/v1/selfsubjectreviews": dial tcp 35.168.172.37:443: i/o timeout
eventually.go:104: Failed to get *v1.SelfSubjectReview: Post "https://api-control-plane-upgrade-frbgn.service.ci.hypershift.devcluster.openshift.com:443/apis/authentication.k8s.io/v1/selfsubjectreviews": dial tcp 34.239.60.34:443: connect: connection refused
eventually.go:104: Failed to get *v1.SelfSubjectReview: Post "https://api-control-plane-upgrade-frbgn.service.ci.hypershift.devcluster.openshift.com:443/apis/authentication.k8s.io/v1/selfsubjectreviews": dial tcp 52.6.53.165:443: i/o timeout
eventually.go:104: Failed to get *v1.SelfSubjectReview: Post "https://api-control-plane-upgrade-frbgn.service.ci.hypershift.devcluster.openshift.com:443/apis/authentication.k8s.io/v1/selfsubjectreviews": dial tcp 52.6.53.165:443: connect: connection refused
eventually.go:104: Failed to get *v1.SelfSubjectReview: Post "https://api-control-plane-upgrade-frbgn.service.ci.hypershift.devcluster.openshift.com:443/apis/authentication.k8s.io/v1/selfsubjectreviews": dial tcp 34.239.60.34:443: connect: connection refused
eventually.go:104: Failed to get *v1.SelfSubjectReview: Post "https://api-control-plane-upgrade-frbgn.service.ci.hypershift.devcluster.openshift.com:443/apis/authentication.k8s.io/v1/selfsubjectreviews": dial tcp 52.6.53.165:443: connect: connection refused
eventually.go:104: Failed to get *v1.SelfSubjectReview: Post "https://api-control-plane-upgrade-frbgn.service.ci.hypershift.devcluster.openshift.com:443/apis/authentication.k8s.io/v1/selfsubjectreviews": dial tcp 35.168.172.37:443: connect: connection refused
eventually.go:104: Failed to get *v1.SelfSubjectReview: Post "https://api-control-plane-upgrade-frbgn.service.ci.hypershift.devcluster.openshift.com:443/apis/authentication.k8s.io/v1/selfsubjectreviews": dial tcp 52.6.53.165:443: connect: connection refused
eventually.go:104: Failed to get *v1.SelfSubjectReview: Post "https://api-control-plane-upgrade-frbgn.service.ci.hypershift.devcluster.openshift.com:443/apis/authentication.k8s.io/v1/selfsubjectreviews": dial tcp 35.168.172.37:443: connect: connection refused
eventually.go:104: Failed to get *v1.SelfSubjectReview: Post "https://api-control-plane-upgrade-frbgn.service.ci.hypershift.devcluster.openshift.com:443/apis/authentication.k8s.io/v1/selfsubjectreviews": dial tcp 52.6.53.165:443: connect: connection refused
eventually.go:104: Failed to get *v1.SelfSubjectReview: Post "https://api-control-plane-upgrade-frbgn.service.ci.hypershift.devcluster.openshift.com:443/apis/authentication.k8s.io/v1/selfsubjectreviews": dial tcp 34.239.60.34:443: connect: connection refused
eventually.go:104: Failed to get *v1.SelfSubjectReview: Post "https://api-control-plane-upgrade-frbgn.service.ci.hypershift.devcluster.openshift.com:443/apis/authentication.k8s.io/v1/selfsubjectreviews": dial tcp 52.6.53.165:443: connect: connection refused
eventually.go:104: Failed to get *v1.SelfSubjectReview: Post "https://api-control-plane-upgrade-frbgn.service.ci.hypershift.devcluster.openshift.com:443/apis/authentication.k8s.io/v1/selfsubjectreviews": dial tcp 35.168.172.37:443: connect: connection refused
eventually.go:104: Failed to get *v1.SelfSubjectReview: Post "https://api-control-plane-upgrade-frbgn.service.ci.hypershift.devcluster.openshift.com:443/apis/authentication.k8s.io/v1/selfsubjectreviews": dial tcp 52.6.53.165:443: connect: connection refused
eventually.go:104: Failed to get *v1.SelfSubjectReview: Post "https://api-control-plane-upgrade-frbgn.service.ci.hypershift.devcluster.openshift.com:443/apis/authentication.k8s.io/v1/selfsubjectreviews": dial tcp 35.168.172.37:443: connect: connection refused
eventually.go:104: Failed to get *v1.SelfSubjectReview: Post "https://api-control-plane-upgrade-frbgn.service.ci.hypershift.devcluster.openshift.com:443/apis/authentication.k8s.io/v1/selfsubjectreviews": dial tcp 52.6.53.165:443: connect: connection refused
eventually.go:104: Failed to get *v1.SelfSubjectReview: Post "https://api-control-plane-upgrade-frbgn.service.ci.hypershift.devcluster.openshift.com:443/apis/authentication.k8s.io/v1/selfsubjectreviews": dial tcp 34.239.60.34:443: connect: connection refused
eventually.go:104: Failed to get *v1.SelfSubjectReview: Post "https://api-control-plane-upgrade-frbgn.service.ci.hypershift.devcluster.openshift.com:443/apis/authentication.k8s.io/v1/selfsubjectreviews": dial tcp 35.168.172.37:443: connect: connection refused
util.go:363: Successfully waited for a successful connection to the guest API server in 3m15.025s
util.go:565: Successfully waited for 2 nodes to become ready in 7m21s
util.go:598: Successfully waited for HostedCluster e2e-clusters-czjrc/control-plane-upgrade-frbgn to rollout in 3m42s
util.go:2949: Successfully waited for HostedCluster e2e-clusters-czjrc/control-plane-upgrade-frbgn to have valid conditions in 0s
TestUpgradeControlPlane/ValidateHostedCluster/EnsureGuestWebhooksValidated
0s
TestUpgradeControlPlane/ValidateHostedCluster/EnsureNoCrashingPods
0s
util.go:284: Successfully waited for kubeconfig to be published for HostedCluster e2e-clusters-czjrc/control-plane-upgrade-frbgn in 0s
util.go:301: Successfully waited for kubeconfig secret to have data in 0s
TestUpgradeControlPlane/ValidateHostedCluster/EnsureNodeCommunication
0s
util.go:284: Successfully waited for kubeconfig to be published for HostedCluster e2e-clusters-czjrc/control-plane-upgrade-frbgn in 0s
util.go:301: Successfully waited for kubeconfig secret to have data in 0s
TestUpgradeControlPlane/ValidateHostedCluster/EnsureNodeCountMatchesNodePoolReplicas
0s
util.go:565: Successfully waited for 2 nodes to become ready for NodePool e2e-clusters-czjrc/control-plane-upgrade-frbgn-us-east-1b in 25ms
TestUpgradeControlPlane/ValidateHostedCluster/EnsureOAPIMountsTrustBundle
0s
TestUpgradeControlPlane/ValidateHostedCluster/ValidateConfigurationStatus
0s
util.go:4095: Successfully validated configuration authentication status consistency across HCP, HC, and guest cluster