PR #7455 - 01-12 13:43

Job: hypershift
FAILURE

Test Summary

Total Tests: 293
Passed: 65
Failed: 210
Skipped: 18

Failed Tests

TestAutoscaling
0s
hypershift_framework.go:430: Successfully created hostedcluster e2e-clusters-xhngb/autoscaling-794mq in 29s
util.go:284: Successfully waited for kubeconfig to be published for HostedCluster e2e-clusters-xhngb/autoscaling-794mq in 1m24s
util.go:301: Successfully waited for kubeconfig secret to have data in 0s
eventually.go:104: Failed to get *v1.SelfSubjectReview: Post "https://api-autoscaling-794mq.service.ci.hypershift.devcluster.openshift.com:443/apis/authentication.k8s.io/v1/selfsubjectreviews": dial tcp: lookup api-autoscaling-794mq.service.ci.hypershift.devcluster.openshift.com on 172.30.0.10:53: no such host
eventually.go:104: Failed to get *v1.SelfSubjectReview: Post "https://api-autoscaling-794mq.service.ci.hypershift.devcluster.openshift.com:443/apis/authentication.k8s.io/v1/selfsubjectreviews": dial tcp 44.213.153.188:443: i/o timeout
util.go:363: Successfully waited for a successful connection to the guest API server in 1m52.025s
util.go:565: Successfully waited for 1 nodes to become ready in 7m6s
util.go:598: Successfully waited for HostedCluster e2e-clusters-xhngb/autoscaling-794mq to rollout in 3m48s
util.go:2949: Successfully waited for HostedCluster e2e-clusters-xhngb/autoscaling-794mq to have valid conditions in 0s
util.go:565: Successfully waited for 1 nodes to become ready for NodePool e2e-clusters-xhngb/autoscaling-794mq-us-east-1a in 25ms
util.go:284: Successfully waited for kubeconfig to be published for HostedCluster e2e-clusters-xhngb/autoscaling-794mq in 0s
util.go:301: Successfully waited for kubeconfig secret to have data in 0s
util.go:284: Successfully waited for kubeconfig to be published for HostedCluster e2e-clusters-xhngb/autoscaling-794mq in 0s
util.go:301: Successfully waited for kubeconfig secret to have data in 0s
util.go:4095: Successfully validated configuration authentication status consistency across HCP, HC, and guest cluster
util.go:284: Successfully waited for kubeconfig to be published for HostedCluster e2e-clusters-xhngb/autoscaling-794mq in 0s
util.go:301: Successfully waited for kubeconfig secret to have data in 0s
util.go:363: Successfully waited for a successful connection to the guest API server in 0s
util.go:565: Successfully waited for 1 nodes to become ready in 0s
autoscaling_test.go:118: Enabled autoscaling. Namespace: e2e-clusters-xhngb, name: autoscaling-794mq-us-east-1a, min: 1, max: 3
autoscaling_test.go:137: Created workload. Node: ip-10-0-11-52.ec2.internal, memcapacity: 14746808Ki
util.go:565: Successfully waited for 3 nodes to become ready in 5m12s
autoscaling_test.go:157: Deleted workload
TestAutoscaling/Main
0s
TestAutoscaling/Main/TestAutoscaling
0s
util.go:284: Successfully waited for kubeconfig to be published for HostedCluster e2e-clusters-xhngb/autoscaling-794mq in 0s
util.go:301: Successfully waited for kubeconfig secret to have data in 0s
util.go:363: Successfully waited for a successful connection to the guest API server in 0s
util.go:565: Successfully waited for 1 nodes to become ready in 0s
autoscaling_test.go:118: Enabled autoscaling. Namespace: e2e-clusters-xhngb, name: autoscaling-794mq-us-east-1a, min: 1, max: 3
autoscaling_test.go:137: Created workload. Node: ip-10-0-11-52.ec2.internal, memcapacity: 14746808Ki
util.go:565: Successfully waited for 3 nodes to become ready in 5m12s
autoscaling_test.go:157: Deleted workload
TestAutoscaling/ValidateHostedCluster
0s
util.go:284: Successfully waited for kubeconfig to be published for HostedCluster e2e-clusters-xhngb/autoscaling-794mq in 1m24s
util.go:301: Successfully waited for kubeconfig secret to have data in 0s
eventually.go:104: Failed to get *v1.SelfSubjectReview: Post "https://api-autoscaling-794mq.service.ci.hypershift.devcluster.openshift.com:443/apis/authentication.k8s.io/v1/selfsubjectreviews": dial tcp: lookup api-autoscaling-794mq.service.ci.hypershift.devcluster.openshift.com on 172.30.0.10:53: no such host
eventually.go:104: Failed to get *v1.SelfSubjectReview: Post "https://api-autoscaling-794mq.service.ci.hypershift.devcluster.openshift.com:443/apis/authentication.k8s.io/v1/selfsubjectreviews": dial tcp 44.213.153.188:443: i/o timeout
util.go:363: Successfully waited for a successful connection to the guest API server in 1m52.025s
util.go:565: Successfully waited for 1 nodes to become ready in 7m6s
util.go:598: Successfully waited for HostedCluster e2e-clusters-xhngb/autoscaling-794mq to rollout in 3m48s
util.go:2949: Successfully waited for HostedCluster e2e-clusters-xhngb/autoscaling-794mq to have valid conditions in 0s
TestAutoscaling/ValidateHostedCluster/EnsureGuestWebhooksValidated
0s
TestAutoscaling/ValidateHostedCluster/EnsureNoCrashingPods
0s
util.go:284: Successfully waited for kubeconfig to be published for HostedCluster e2e-clusters-xhngb/autoscaling-794mq in 0s
util.go:301: Successfully waited for kubeconfig secret to have data in 0s
TestAutoscaling/ValidateHostedCluster/EnsureNodeCommunication
0s
util.go:284: Successfully waited for kubeconfig to be published for HostedCluster e2e-clusters-xhngb/autoscaling-794mq in 0s
util.go:301: Successfully waited for kubeconfig secret to have data in 0s
TestAutoscaling/ValidateHostedCluster/EnsureNodeCountMatchesNodePoolReplicas
0s
util.go:565: Successfully waited for 1 nodes to become ready for NodePool e2e-clusters-xhngb/autoscaling-794mq-us-east-1a in 25ms
TestAutoscaling/ValidateHostedCluster/EnsureOAPIMountsTrustBundle
0s
TestAutoscaling/ValidateHostedCluster/ValidateConfigurationStatus
0s
util.go:4095: Successfully validated configuration authentication status consistency across HCP, HC, and guest cluster
TestCreateCluster
0s
create_cluster_test.go:2492: Sufficient zones available for InfrastructureAvailabilityPolicy HighlyAvailable
hypershift_framework.go:430: Successfully created hostedcluster e2e-clusters-7q67p/create-cluster-bhhr5 in 1m4s
util.go:284: Successfully waited for kubeconfig to be published for HostedCluster e2e-clusters-7q67p/create-cluster-bhhr5 in 1m48s
util.go:301: Successfully waited for kubeconfig secret to have data in 0s
eventually.go:104: Failed to get *v1.SelfSubjectReview: Post "https://api-create-cluster-bhhr5.service.ci.hypershift.devcluster.openshift.com:443/apis/authentication.k8s.io/v1/selfsubjectreviews": dial tcp: lookup api-create-cluster-bhhr5.service.ci.hypershift.devcluster.openshift.com on 172.30.0.10:53: no such host
eventually.go:104: Failed to get *v1.SelfSubjectReview: Post "https://api-create-cluster-bhhr5.service.ci.hypershift.devcluster.openshift.com:443/apis/authentication.k8s.io/v1/selfsubjectreviews": dial tcp 44.219.22.9:443: i/o timeout
eventually.go:104: Failed to get *v1.SelfSubjectReview: Post "https://api-create-cluster-bhhr5.service.ci.hypershift.devcluster.openshift.com:443/apis/authentication.k8s.io/v1/selfsubjectreviews": dial tcp 34.227.247.94:443: i/o timeout
util.go:363: Successfully waited for a successful connection to the guest API server in 1m41.45s
util.go:565: Successfully waited for 3 nodes to become ready in 7m57.025s
util.go:598: Successfully waited for HostedCluster e2e-clusters-7q67p/create-cluster-bhhr5 to rollout in 3m45s
util.go:2949: Successfully waited for HostedCluster e2e-clusters-7q67p/create-cluster-bhhr5 to have valid conditions in 0s
util.go:565: Successfully waited for 1 nodes to become ready for NodePool e2e-clusters-7q67p/create-cluster-bhhr5-us-east-1a in 25ms
util.go:565: Successfully waited for 1 nodes to become ready for NodePool e2e-clusters-7q67p/create-cluster-bhhr5-us-east-1b in 0s
util.go:565: Successfully waited for 1 nodes to become ready for NodePool e2e-clusters-7q67p/create-cluster-bhhr5-us-east-1c in 0s
util.go:284: Successfully waited for kubeconfig to be published for HostedCluster e2e-clusters-7q67p/create-cluster-bhhr5 in 0s
util.go:301: Successfully waited for kubeconfig secret to have data in 0s
util.go:284: Successfully waited for kubeconfig to be published for HostedCluster e2e-clusters-7q67p/create-cluster-bhhr5 in 0s
util.go:301: Successfully waited for kubeconfig secret to have data in 0s
util.go:4095: Successfully validated configuration authentication status consistency across HCP, HC, and guest cluster
util.go:284: Successfully waited for kubeconfig to be published for HostedCluster e2e-clusters-7q67p/create-cluster-bhhr5 in 0s
util.go:301: Successfully waited for kubeconfig secret to have data in 0s
util.go:363: Successfully waited for a successful connection to the guest API server in 0s
create_cluster_test.go:2532: fetching mgmt kubeconfig
util.go:284: Successfully waited for kubeconfig to be published for HostedCluster e2e-clusters-7q67p/create-cluster-bhhr5 in 0s
util.go:301: Successfully waited for kubeconfig secret to have data in 0s
control_plane_pki_operator.go:95: generating new break-glass credentials for more than one signer
pki.go:76: loading certificate/key pair from disk for signer customer-break-glass, use $REGENERATE_PKI to generate new ones
control_plane_pki_operator.go:204: creating CSR "1r6eayd9iws1q5n6jw88lwg1zcz7lsf5lj31hjqnrpp4" for signer "customer-break-glass", requesting client auth usages
control_plane_pki_operator.go:214: creating CSRA e2e-clusters-7q67p-create-cluster-bhhr5/1r6eayd9iws1q5n6jw88lwg1zcz7lsf5lj31hjqnrpp4 to trigger automatic approval of the CSR
control_plane_pki_operator.go:221: Successfully waited for CSR "1r6eayd9iws1q5n6jw88lwg1zcz7lsf5lj31hjqnrpp4" to be approved and signed in 3s
control_plane_pki_operator.go:133: validating that the client certificate provides the appropriate access
control_plane_pki_operator.go:119: amending the existing kubeconfig to use break-glass client certificate credentials
control_plane_pki_operator.go:136: issuing SSR to identify the subject we are given using the client certificate
control_plane_pki_operator.go:156: ensuring that the SSR identifies the client certificate as having system:masters power and correct username
pki.go:76: loading certificate/key pair from disk for signer sre-break-glass, use $REGENERATE_PKI to generate new ones
control_plane_pki_operator.go:204: creating CSR "15qiv7wolrh5q6tt5iye53515qenk2h77ienz2klu583" for signer "sre-break-glass", requesting client auth usages
control_plane_pki_operator.go:214: creating CSRA e2e-clusters-7q67p-create-cluster-bhhr5/15qiv7wolrh5q6tt5iye53515qenk2h77ienz2klu583 to trigger automatic approval of the CSR
control_plane_pki_operator.go:221: Successfully waited for CSR "15qiv7wolrh5q6tt5iye53515qenk2h77ienz2klu583" to be approved and signed in 3s
control_plane_pki_operator.go:133: validating that the client certificate provides the appropriate access
control_plane_pki_operator.go:119: amending the existing kubeconfig to use break-glass client certificate credentials
control_plane_pki_operator.go:136: issuing SSR to identify the subject we are given using the client certificate
control_plane_pki_operator.go:156: ensuring that the SSR identifies the client certificate as having system:masters power and correct username
control_plane_pki_operator.go:99: revoking the "customer-break-glass" signer
pki.go:76: loading certificate/key pair from disk for signer customer-break-glass, use $REGENERATE_PKI to generate new ones
control_plane_pki_operator.go:256: creating CRR e2e-clusters-7q67p-create-cluster-bhhr5/1r6eayd9iws1q5n6jw88lwg1zcz7lsf5lj31hjqnrpp4 to trigger signer certificate revocation
control_plane_pki_operator.go:263: Successfully waited for CRR e2e-clusters-7q67p-create-cluster-bhhr5/1r6eayd9iws1q5n6jw88lwg1zcz7lsf5lj31hjqnrpp4 to complete in 2m9s
control_plane_pki_operator.go:276: creating a client using the a certificate from the revoked signer
control_plane_pki_operator.go:119: amending the existing kubeconfig to use break-glass client certificate credentials
control_plane_pki_operator.go:279: issuing SSR to confirm that we're not authorized to contact the server
control_plane_pki_operator.go:102: ensuring the break-glass credentials from "sre-break-glass" signer still work
pki.go:76: loading certificate/key pair from disk for signer sre-break-glass, use $REGENERATE_PKI to generate new ones
control_plane_pki_operator.go:133: validating that the client certificate provides the appropriate access
control_plane_pki_operator.go:119: amending the existing kubeconfig to use break-glass client certificate credentials
control_plane_pki_operator.go:136: issuing SSR to identify the subject we are given using the client certificate
control_plane_pki_operator.go:156: ensuring that the SSR identifies the client certificate as having system:masters power and correct username
control_plane_pki_operator.go:66: Grabbing customer break-glass credentials from client certificate secret e2e-clusters-7q67p-create-cluster-bhhr5/customer-system-admin-client-cert-key
control_plane_pki_operator.go:133: validating that the client certificate provides the appropriate access
control_plane_pki_operator.go:119: amending the existing kubeconfig to use break-glass client certificate credentials
control_plane_pki_operator.go:136: issuing SSR to identify the subject we are given using the client certificate
control_plane_pki_operator.go:156: ensuring that the SSR identifies the client certificate as having system:masters power and correct username
pki.go:76: loading certificate/key pair from disk for signer customer-break-glass, use $REGENERATE_PKI to generate new ones
control_plane_pki_operator.go:204: creating CSR "1g0fdksn9n3uli0qemrf8ilqajeoiamgdasno3p6e6r2" for signer "customer-break-glass", requesting client auth usages
control_plane_pki_operator.go:214: creating CSRA e2e-clusters-7q67p-create-cluster-bhhr5/1g0fdksn9n3uli0qemrf8ilqajeoiamgdasno3p6e6r2 to trigger automatic approval of the CSR
control_plane_pki_operator.go:221: Successfully waited for CSR "1g0fdksn9n3uli0qemrf8ilqajeoiamgdasno3p6e6r2" to be approved and signed in 3s
control_plane_pki_operator.go:133: validating that the client certificate provides the appropriate access
control_plane_pki_operator.go:119: amending the existing kubeconfig to use break-glass client certificate credentials
control_plane_pki_operator.go:136: issuing SSR to identify the subject we are given using the client certificate
control_plane_pki_operator.go:156: ensuring that the SSR identifies the client certificate as having system:masters power and correct username
pki.go:76: loading certificate/key pair from disk for signer customer-break-glass, use $REGENERATE_PKI to generate new ones
control_plane_pki_operator.go:168: creating invalid CSR "28z21u98zky7y9vbep981cc127kl74pj8mn4p8c56nvg" for signer "hypershift.openshift.io/e2e-clusters-7q67p-create-cluster-bhhr5.customer-break-glass", requesting client auth usages
control_plane_pki_operator.go:178: creating CSRA e2e-clusters-7q67p-create-cluster-bhhr5/28z21u98zky7y9vbep981cc127kl74pj8mn4p8c56nvg to trigger automatic approval of the CSR
control_plane_pki_operator.go:184: Successfully waited for waiting for CSR "28z21u98zky7y9vbep981cc127kl74pj8mn4p8c56nvg" to have invalid CN exposed in status in 3s
pki.go:76: loading certificate/key pair from disk for signer customer-break-glass, use $REGENERATE_PKI to generate new ones
control_plane_pki_operator.go:256: creating CRR e2e-clusters-7q67p-create-cluster-bhhr5/2tmk0xs4aw9oml8i3woty5eg4iyrkk5o9gatcrijc6vs to trigger signer certificate revocation
control_plane_pki_operator.go:263: Successfully waited for CRR e2e-clusters-7q67p-create-cluster-bhhr5/2tmk0xs4aw9oml8i3woty5eg4iyrkk5o9gatcrijc6vs to complete in 1m39s
control_plane_pki_operator.go:276: creating a client using the a certificate from the revoked signer
control_plane_pki_operator.go:119: amending the existing kubeconfig to use break-glass client certificate credentials
control_plane_pki_operator.go:279: issuing SSR to confirm that we're not authorized to contact the server
control_plane_pki_operator.go:66: Grabbing customer break-glass credentials from client certificate secret e2e-clusters-7q67p-create-cluster-bhhr5/sre-system-admin-client-cert-key
control_plane_pki_operator.go:133: validating that the client certificate provides the appropriate access
control_plane_pki_operator.go:119: amending the existing kubeconfig to use break-glass client certificate credentials
control_plane_pki_operator.go:136: issuing SSR to identify the subject we are given using the client certificate
control_plane_pki_operator.go:156: ensuring that the SSR identifies the client certificate as having system:masters power and correct username
pki.go:76: loading certificate/key pair from disk for signer sre-break-glass, use $REGENERATE_PKI to generate new ones
control_plane_pki_operator.go:204: creating CSR "10w7oyqidcdqct6ia3j7ltd3t2mnq6q9w44ebn177cgr" for signer "sre-break-glass", requesting client auth usages
control_plane_pki_operator.go:214: creating CSRA e2e-clusters-7q67p-create-cluster-bhhr5/10w7oyqidcdqct6ia3j7ltd3t2mnq6q9w44ebn177cgr to trigger automatic approval of the CSR
control_plane_pki_operator.go:221: Successfully waited for CSR "10w7oyqidcdqct6ia3j7ltd3t2mnq6q9w44ebn177cgr" to be approved and signed in 3s
control_plane_pki_operator.go:133: validating that the client certificate provides the appropriate access
control_plane_pki_operator.go:119: amending the existing kubeconfig to use break-glass client certificate credentials
control_plane_pki_operator.go:136: issuing SSR to identify the subject we are given using the client certificate
control_plane_pki_operator.go:156: ensuring that the SSR identifies the client certificate as having system:masters power and correct username
pki.go:76: loading certificate/key pair from disk for signer sre-break-glass, use $REGENERATE_PKI to generate new ones
control_plane_pki_operator.go:168: creating invalid CSR "185lexq9zt1ou5houphf5xpbii5mexsylxjwjp2nu21w" for signer "hypershift.openshift.io/e2e-clusters-7q67p-create-cluster-bhhr5.sre-break-glass", requesting client auth usages
control_plane_pki_operator.go:178: creating CSRA e2e-clusters-7q67p-create-cluster-bhhr5/185lexq9zt1ou5houphf5xpbii5mexsylxjwjp2nu21w to trigger automatic approval of the CSR
control_plane_pki_operator.go:184: Successfully waited for waiting for CSR "185lexq9zt1ou5houphf5xpbii5mexsylxjwjp2nu21w" to have invalid CN exposed in status in 3s
pki.go:76: loading certificate/key pair from disk for signer sre-break-glass, use $REGENERATE_PKI to generate new ones
control_plane_pki_operator.go:256: creating CRR e2e-clusters-7q67p-create-cluster-bhhr5/2hcr4i1llskunjgo0nyxpqb8lz7n8ofwi0my8icwbd96 to trigger signer certificate revocation
TestCreateCluster/Main
0s
util.go:284: Successfully waited for kubeconfig to be published for HostedCluster e2e-clusters-7q67p/create-cluster-bhhr5 in 0s
util.go:301: Successfully waited for kubeconfig secret to have data in 0s
util.go:363: Successfully waited for a successful connection to the guest API server in 0s
create_cluster_test.go:2532: fetching mgmt kubeconfig
util.go:284: Successfully waited for kubeconfig to be published for HostedCluster e2e-clusters-7q67p/create-cluster-bhhr5 in 0s
util.go:301: Successfully waited for kubeconfig secret to have data in 0s
TestCreateCluster/Main/break-glass-credentials
0s
TestCreateCluster/Main/break-glass-credentials/customer-break-glass
0s
TestCreateCluster/Main/break-glass-credentials/customer-break-glass/CSR_flow
0s
pki.go:76: loading certificate/key pair from disk for signer customer-break-glass, use $REGENERATE_PKI to generate new ones
control_plane_pki_operator.go:204: creating CSR "1g0fdksn9n3uli0qemrf8ilqajeoiamgdasno3p6e6r2" for signer "customer-break-glass", requesting client auth usages
control_plane_pki_operator.go:214: creating CSRA e2e-clusters-7q67p-create-cluster-bhhr5/1g0fdksn9n3uli0qemrf8ilqajeoiamgdasno3p6e6r2 to trigger automatic approval of the CSR
control_plane_pki_operator.go:221: Successfully waited for CSR "1g0fdksn9n3uli0qemrf8ilqajeoiamgdasno3p6e6r2" to be approved and signed in 3s
control_plane_pki_operator.go:133: validating that the client certificate provides the appropriate access
control_plane_pki_operator.go:119: amending the existing kubeconfig to use break-glass client certificate credentials
control_plane_pki_operator.go:136: issuing SSR to identify the subject we are given using the client certificate
control_plane_pki_operator.go:156: ensuring that the SSR identifies the client certificate as having system:masters power and correct username
TestCreateCluster/Main/break-glass-credentials/customer-break-glass/CSR_flow/invalid_CN_flagged_in_status
0s
pki.go:76: loading certificate/key pair from disk for signer customer-break-glass, use $REGENERATE_PKI to generate new ones
control_plane_pki_operator.go:168: creating invalid CSR "28z21u98zky7y9vbep981cc127kl74pj8mn4p8c56nvg" for signer "hypershift.openshift.io/e2e-clusters-7q67p-create-cluster-bhhr5.customer-break-glass", requesting client auth usages
control_plane_pki_operator.go:178: creating CSRA e2e-clusters-7q67p-create-cluster-bhhr5/28z21u98zky7y9vbep981cc127kl74pj8mn4p8c56nvg to trigger automatic approval of the CSR
control_plane_pki_operator.go:184: Successfully waited for waiting for CSR "28z21u98zky7y9vbep981cc127kl74pj8mn4p8c56nvg" to have invalid CN exposed in status in 3s
TestCreateCluster/Main/break-glass-credentials/customer-break-glass/CSR_flow/revocation
0s
pki.go:76: loading certificate/key pair from disk for signer customer-break-glass, use $REGENERATE_PKI to generate new ones
control_plane_pki_operator.go:256: creating CRR e2e-clusters-7q67p-create-cluster-bhhr5/2tmk0xs4aw9oml8i3woty5eg4iyrkk5o9gatcrijc6vs to trigger signer certificate revocation
control_plane_pki_operator.go:263: Successfully waited for CRR e2e-clusters-7q67p-create-cluster-bhhr5/2tmk0xs4aw9oml8i3woty5eg4iyrkk5o9gatcrijc6vs to complete in 1m39s
control_plane_pki_operator.go:276: creating a client using the a certificate from the revoked signer
control_plane_pki_operator.go:119: amending the existing kubeconfig to use break-glass client certificate credentials
control_plane_pki_operator.go:279: issuing SSR to confirm that we're not authorized to contact the server
TestCreateCluster/Main/break-glass-credentials/customer-break-glass/direct_fetch
0s
control_plane_pki_operator.go:66: Grabbing customer break-glass credentials from client certificate secret e2e-clusters-7q67p-create-cluster-bhhr5/customer-system-admin-client-cert-key
control_plane_pki_operator.go:133: validating that the client certificate provides the appropriate access
control_plane_pki_operator.go:119: amending the existing kubeconfig to use break-glass client certificate credentials
control_plane_pki_operator.go:136: issuing SSR to identify the subject we are given using the client certificate
control_plane_pki_operator.go:156: ensuring that the SSR identifies the client certificate as having system:masters power and correct username
TestCreateCluster/Main/break-glass-credentials/independent_signers
0s
control_plane_pki_operator.go:95: generating new break-glass credentials for more than one signer
pki.go:76: loading certificate/key pair from disk for signer customer-break-glass, use $REGENERATE_PKI to generate new ones
control_plane_pki_operator.go:204: creating CSR "1r6eayd9iws1q5n6jw88lwg1zcz7lsf5lj31hjqnrpp4" for signer "customer-break-glass", requesting client auth usages
control_plane_pki_operator.go:214: creating CSRA e2e-clusters-7q67p-create-cluster-bhhr5/1r6eayd9iws1q5n6jw88lwg1zcz7lsf5lj31hjqnrpp4 to trigger automatic approval of the CSR
control_plane_pki_operator.go:221: Successfully waited for CSR "1r6eayd9iws1q5n6jw88lwg1zcz7lsf5lj31hjqnrpp4" to be approved and signed in 3s
control_plane_pki_operator.go:133: validating that the client certificate provides the appropriate access
control_plane_pki_operator.go:119: amending the existing kubeconfig to use break-glass client certificate credentials
control_plane_pki_operator.go:136: issuing SSR to identify the subject we are given using the client certificate
control_plane_pki_operator.go:156: ensuring that the SSR identifies the client certificate as having system:masters power and correct username
pki.go:76: loading certificate/key pair from disk for signer sre-break-glass, use $REGENERATE_PKI to generate new ones
control_plane_pki_operator.go:204: creating CSR "15qiv7wolrh5q6tt5iye53515qenk2h77ienz2klu583" for signer "sre-break-glass", requesting client auth usages
control_plane_pki_operator.go:214: creating CSRA e2e-clusters-7q67p-create-cluster-bhhr5/15qiv7wolrh5q6tt5iye53515qenk2h77ienz2klu583 to trigger automatic approval of the CSR
control_plane_pki_operator.go:221: Successfully waited for CSR "15qiv7wolrh5q6tt5iye53515qenk2h77ienz2klu583" to be approved and signed in 3s
control_plane_pki_operator.go:133: validating that the client certificate provides the appropriate access
control_plane_pki_operator.go:119: amending the existing kubeconfig to use break-glass client certificate credentials
control_plane_pki_operator.go:136: issuing SSR to identify the subject we are given using the client certificate
control_plane_pki_operator.go:156: ensuring that the SSR identifies the client certificate as having system:masters power and correct username
control_plane_pki_operator.go:99: revoking the "customer-break-glass" signer
pki.go:76: loading certificate/key pair from disk for signer customer-break-glass, use $REGENERATE_PKI to generate new ones
control_plane_pki_operator.go:256: creating CRR e2e-clusters-7q67p-create-cluster-bhhr5/1r6eayd9iws1q5n6jw88lwg1zcz7lsf5lj31hjqnrpp4 to trigger signer certificate revocation
control_plane_pki_operator.go:263: Successfully waited for CRR e2e-clusters-7q67p-create-cluster-bhhr5/1r6eayd9iws1q5n6jw88lwg1zcz7lsf5lj31hjqnrpp4 to complete in 2m9s
control_plane_pki_operator.go:276: creating a client using the a certificate from the revoked signer
control_plane_pki_operator.go:119: amending the existing kubeconfig to use break-glass client certificate credentials
control_plane_pki_operator.go:279: issuing SSR to confirm that we're not authorized to contact the server
control_plane_pki_operator.go:102: ensuring the break-glass credentials from "sre-break-glass" signer still work
pki.go:76: loading certificate/key pair from disk for signer sre-break-glass, use $REGENERATE_PKI to generate new ones
control_plane_pki_operator.go:133: validating that the client certificate provides the appropriate access
control_plane_pki_operator.go:119: amending the existing kubeconfig to use break-glass client certificate credentials
control_plane_pki_operator.go:136: issuing SSR to identify the subject we are given using the client certificate
control_plane_pki_operator.go:156: ensuring that the SSR identifies the client certificate as having system:masters power and correct username
TestCreateCluster/Main/break-glass-credentials/sre-break-glass
0s
TestCreateCluster/Main/break-glass-credentials/sre-break-glass/CSR_flow
0s
pki.go:76: loading certificate/key pair from disk for signer sre-break-glass, use $REGENERATE_PKI to generate new ones
control_plane_pki_operator.go:204: creating CSR "10w7oyqidcdqct6ia3j7ltd3t2mnq6q9w44ebn177cgr" for signer "sre-break-glass", requesting client auth usages
control_plane_pki_operator.go:214: creating CSRA e2e-clusters-7q67p-create-cluster-bhhr5/10w7oyqidcdqct6ia3j7ltd3t2mnq6q9w44ebn177cgr to trigger automatic approval of the CSR
control_plane_pki_operator.go:221: Successfully waited for CSR "10w7oyqidcdqct6ia3j7ltd3t2mnq6q9w44ebn177cgr" to be approved and signed in 3s
control_plane_pki_operator.go:133: validating that the client certificate provides the appropriate access
control_plane_pki_operator.go:119: amending the existing kubeconfig to use break-glass client certificate credentials
control_plane_pki_operator.go:136: issuing SSR to identify the subject we are given using the client certificate
control_plane_pki_operator.go:156: ensuring that the SSR identifies the client certificate as having system:masters power and correct username
TestCreateCluster/Main/break-glass-credentials/sre-break-glass/CSR_flow/invalid_CN_flagged_in_status
0s
pki.go:76: loading certificate/key pair from disk for signer sre-break-glass, use $REGENERATE_PKI to generate new ones
control_plane_pki_operator.go:168: creating invalid CSR "185lexq9zt1ou5houphf5xpbii5mexsylxjwjp2nu21w" for signer "hypershift.openshift.io/e2e-clusters-7q67p-create-cluster-bhhr5.sre-break-glass", requesting client auth usages
control_plane_pki_operator.go:178: creating CSRA e2e-clusters-7q67p-create-cluster-bhhr5/185lexq9zt1ou5houphf5xpbii5mexsylxjwjp2nu21w to trigger automatic approval of the CSR
control_plane_pki_operator.go:184: Successfully waited for waiting for CSR "185lexq9zt1ou5houphf5xpbii5mexsylxjwjp2nu21w" to have invalid CN exposed in status in 3s
TestCreateCluster/Main/break-glass-credentials/sre-break-glass/CSR_flow/revocation
0s
pki.go:76: loading certificate/key pair from disk for signer sre-break-glass, use $REGENERATE_PKI to generate new ones
control_plane_pki_operator.go:256: creating CRR e2e-clusters-7q67p-create-cluster-bhhr5/2hcr4i1llskunjgo0nyxpqb8lz7n8ofwi0my8icwbd96 to trigger signer certificate revocation
TestCreateCluster/Main/break-glass-credentials/sre-break-glass/direct_fetch
0s
control_plane_pki_operator.go:66: Grabbing customer break-glass credentials from client certificate secret e2e-clusters-7q67p-create-cluster-bhhr5/sre-system-admin-client-cert-key
control_plane_pki_operator.go:133: validating that the client certificate provides the appropriate access
control_plane_pki_operator.go:119: amending the existing kubeconfig to use break-glass client certificate credentials
control_plane_pki_operator.go:136: issuing SSR to identify the subject we are given using the client certificate
control_plane_pki_operator.go:156: ensuring that the SSR identifies the client certificate as having system:masters power and correct username
TestCreateCluster/ValidateHostedCluster
0s
util.go:284: Successfully waited for kubeconfig to be published for HostedCluster e2e-clusters-7q67p/create-cluster-bhhr5 in 1m48s
util.go:301: Successfully waited for kubeconfig secret to have data in 0s
eventually.go:104: Failed to get *v1.SelfSubjectReview: Post "https://api-create-cluster-bhhr5.service.ci.hypershift.devcluster.openshift.com:443/apis/authentication.k8s.io/v1/selfsubjectreviews": dial tcp: lookup api-create-cluster-bhhr5.service.ci.hypershift.devcluster.openshift.com on 172.30.0.10:53: no such host
eventually.go:104: Failed to get *v1.SelfSubjectReview: Post "https://api-create-cluster-bhhr5.service.ci.hypershift.devcluster.openshift.com:443/apis/authentication.k8s.io/v1/selfsubjectreviews": dial tcp 44.219.22.9:443: i/o timeout
eventually.go:104: Failed to get *v1.SelfSubjectReview: Post "https://api-create-cluster-bhhr5.service.ci.hypershift.devcluster.openshift.com:443/apis/authentication.k8s.io/v1/selfsubjectreviews": dial tcp 34.227.247.94:443: i/o timeout
util.go:363: Successfully waited for a successful connection to the guest API server in 1m41.45s
util.go:565: Successfully waited for 3 nodes to become ready in 7m57.025s
util.go:598: Successfully waited for HostedCluster e2e-clusters-7q67p/create-cluster-bhhr5 to rollout in 3m45s
util.go:2949: Successfully waited for HostedCluster e2e-clusters-7q67p/create-cluster-bhhr5 to have valid conditions in 0s
TestCreateCluster/ValidateHostedCluster/EnsureGuestWebhooksValidated
0s
TestCreateCluster/ValidateHostedCluster/EnsureNoCrashingPods
0s
util.go:284: Successfully waited for kubeconfig to be published for HostedCluster e2e-clusters-7q67p/create-cluster-bhhr5 in 0s
util.go:301: Successfully waited for kubeconfig secret to have data in 0s
TestCreateCluster/ValidateHostedCluster/EnsureNodeCommunication
0s
util.go:284: Successfully waited for kubeconfig to be published for HostedCluster e2e-clusters-7q67p/create-cluster-bhhr5 in 0s
util.go:301: Successfully waited for kubeconfig secret to have data in 0s
TestCreateCluster/ValidateHostedCluster/EnsureNodeCountMatchesNodePoolReplicas
0s
util.go:565: Successfully waited for 1 nodes to become ready for NodePool e2e-clusters-7q67p/create-cluster-bhhr5-us-east-1a in 25ms
util.go:565: Successfully waited for 1 nodes to become ready for NodePool e2e-clusters-7q67p/create-cluster-bhhr5-us-east-1b in 0s
util.go:565: Successfully waited for 1 nodes to become ready for NodePool e2e-clusters-7q67p/create-cluster-bhhr5-us-east-1c in 0s
TestCreateCluster/ValidateHostedCluster/EnsureOAPIMountsTrustBundle
0s
TestCreateCluster/ValidateHostedCluster/ValidateConfigurationStatus
0s
util.go:4095: Successfully validated configuration authentication status consistency across HCP, HC, and guest cluster
TestCreateClusterCustomConfig
0s
hypershift_framework.go:430: Successfully created hostedcluster e2e-clusters-2bkwn/custom-config-749rg in 41s
journals.go:245: Successfully copied machine journals to /logs/artifacts/TestCreateClusterCustomConfig/machine-journals
util.go:284: Successfully waited for kubeconfig to be published for HostedCluster e2e-clusters-2bkwn/custom-config-749rg in 1m27s
util.go:301: Successfully waited for kubeconfig secret to have data in 0s
eventually.go:104: Failed to get *v1.SelfSubjectReview: Post "https://api-custom-config-749rg.service.ci.hypershift.devcluster.openshift.com:443/apis/authentication.k8s.io/v1/selfsubjectreviews": dial tcp: lookup api-custom-config-749rg.service.ci.hypershift.devcluster.openshift.com on 172.30.0.10:53: no such host
eventually.go:104: Failed to get *v1.SelfSubjectReview: Post "https://api-custom-config-749rg.service.ci.hypershift.devcluster.openshift.com:443/apis/authentication.k8s.io/v1/selfsubjectreviews": dial tcp 98.90.238.208:443: i/o timeout
eventually.go:104: Failed to get *v1.SelfSubjectReview: Post "https://api-custom-config-749rg.service.ci.hypershift.devcluster.openshift.com:443/apis/authentication.k8s.io/v1/selfsubjectreviews": dial tcp 98.90.146.68:443: i/o timeout
eventually.go:104: Failed to get *v1.SelfSubjectReview: Post "https://api-custom-config-749rg.service.ci.hypershift.devcluster.openshift.com:443/apis/authentication.k8s.io/v1/selfsubjectreviews": dial tcp 52.7.221.4:443: i/o timeout
eventually.go:104: Failed to get *v1.SelfSubjectReview: Post "https://api-custom-config-749rg.service.ci.hypershift.devcluster.openshift.com:443/apis/authentication.k8s.io/v1/selfsubjectreviews": dial tcp 98.90.238.208:443: connect: connection refused
eventually.go:104: Failed to get *v1.SelfSubjectReview: Post "https://api-custom-config-749rg.service.ci.hypershift.devcluster.openshift.com:443/apis/authentication.k8s.io/v1/selfsubjectreviews": dial tcp 52.7.221.4:443: connect: connection refused
eventually.go:104: Failed to get *v1.SelfSubjectReview: Post "https://api-custom-config-749rg.service.ci.hypershift.devcluster.openshift.com:443/apis/authentication.k8s.io/v1/selfsubjectreviews": dial tcp 98.90.238.208:443: connect: connection refused
eventually.go:104: Failed to get *v1.SelfSubjectReview: Post "https://api-custom-config-749rg.service.ci.hypershift.devcluster.openshift.com:443/apis/authentication.k8s.io/v1/selfsubjectreviews": dial tcp 52.7.221.4:443: connect: connection refused
eventually.go:104: Failed to get *v1.SelfSubjectReview: Post "https://api-custom-config-749rg.service.ci.hypershift.devcluster.openshift.com:443/apis/authentication.k8s.io/v1/selfsubjectreviews": dial tcp 98.90.238.208:443: connect: connection refused
eventually.go:104: Failed to get *v1.SelfSubjectReview: Post "https://api-custom-config-749rg.service.ci.hypershift.devcluster.openshift.com:443/apis/authentication.k8s.io/v1/selfsubjectreviews": dial tcp 52.7.221.4:443: connect: connection refused
eventually.go:104: Failed to get *v1.SelfSubjectReview: Post "https://api-custom-config-749rg.service.ci.hypershift.devcluster.openshift.com:443/apis/authentication.k8s.io/v1/selfsubjectreviews": dial tcp 98.90.146.68:443: connect: connection refused
util.go:363: Successfully waited for a successful connection to the guest API server in 2m24.025s
util.go:565: Successfully waited for 2 nodes to become ready in 7m18s
util.go:598: Successfully waited for HostedCluster e2e-clusters-2bkwn/custom-config-749rg to rollout in 7m21s
util.go:2949: Successfully waited for HostedCluster e2e-clusters-2bkwn/custom-config-749rg to have valid conditions in 0s
util.go:565: Successfully waited for 2 nodes to become ready for NodePool e2e-clusters-2bkwn/custom-config-749rg-us-east-1b in 25ms
util.go:284: Successfully waited for kubeconfig to be published for HostedCluster e2e-clusters-2bkwn/custom-config-749rg in 0s
util.go:301: Successfully waited for kubeconfig secret to have data in 0s
util.go:284: Successfully waited for kubeconfig to be published for HostedCluster e2e-clusters-2bkwn/custom-config-749rg in 0s
util.go:301: Successfully waited for kubeconfig secret to have data in 0s
util.go:4095: Successfully validated configuration authentication status consistency across HCP, HC, and guest cluster
util.go:284: Successfully waited for kubeconfig to be published for HostedCluster e2e-clusters-2bkwn/custom-config-749rg in 0s
util.go:301: Successfully waited for kubeconfig secret to have data in 0s
util.go:363: Successfully waited for a successful connection to the guest API server in 0s
util.go:284: Successfully waited for kubeconfig to be published for HostedCluster e2e-clusters-2bkwn/custom-config-749rg in 0s
util.go:301: Successfully waited for kubeconfig secret to have data in 0s
util.go:284: Successfully waited for kubeconfig to be published for HostedCluster e2e-clusters-2bkwn/custom-config-749rg in 0s
util.go:301: Successfully waited for kubeconfig secret to have data in 0s
oauth.go:170: Found OAuth route oauth-custom-config-749rg.service.ci.hypershift.devcluster.openshift.com
oauth.go:192: Observed OAuth route oauth-custom-config-749rg.service.ci.hypershift.devcluster.openshift.com to be healthy
oauth.go:151: OAuth token retrieved successfully for user kubeadmin
util.go:284: Successfully waited for kubeconfig to be published for HostedCluster e2e-clusters-2bkwn/custom-config-749rg in 0s
util.go:301: Successfully waited for kubeconfig secret to have data in 0s
oauth.go:170: Found OAuth route oauth-custom-config-749rg.service.ci.hypershift.devcluster.openshift.com
oauth.go:192: Observed OAuth route oauth-custom-config-749rg.service.ci.hypershift.devcluster.openshift.com to be healthy
oauth.go:151: OAuth token retrieved successfully for user testuser
util.go:3459: Successfully waited for Waiting for service account default/default to be provisioned... in 0s
eventually.go:104: Failed to get *v1.ServiceAccount: serviceaccounts "default" not found
util.go:3482: Successfully waited for Waiting for service account default/test-namespace to be provisioned... in 10s
util.go:284: Successfully waited for kubeconfig to be published for HostedCluster e2e-clusters-2bkwn/custom-config-749rg in 0s
util.go:301: Successfully waited for kubeconfig secret to have data in 0s
util.go:363: Successfully waited for a successful connection to the guest API server in 0s
util.go:3597: Checking that Tuned resource type does not exist in guest cluster
util.go:3610: Checking that Profile resource type does not exist in guest cluster
util.go:3622: Checking that no tuned DaemonSet exists in guest cluster
util.go:3631: Checking that no tuned-related ConfigMaps exist in guest cluster
util.go:3656: NodeTuning capability disabled validation completed successfully
util.go:3937: Updating HostedCluster e2e-clusters-2bkwn/custom-config-749rg with custom OVN internal subnets
util.go:3956: Validating CNO conditions on HostedControlPlane
util.go:3958: Successfully waited for HostedControlPlane e2e-clusters-2bkwn-custom-config-749rg/custom-config-749rg to have healthy CNO conditions in 0s
util.go:2949: Successfully waited for HostedCluster e2e-clusters-2bkwn/custom-config-749rg to have valid conditions in 0s
util.go:3985: Successfully waited for Network.operator.openshift.io/cluster in guest cluster to reflect the custom subnet changes in 3s
util.go:4015: Successfully waited for Network.config.openshift.io/cluster in guest cluster to be available in 0s
util.go:284: Successfully waited for kubeconfig to be published for HostedCluster e2e-clusters-2bkwn/custom-config-749rg in 0s
util.go:301: Successfully waited for kubeconfig secret to have data in 0s
util.go:363: Successfully waited for a successful connection to the guest API server in 0s
util.go:3224: Successfully waited for HostedCluster e2e-clusters-2bkwn/custom-config-749rg to have valid Status.Payload in 0s
util.go:1181: Connecting to kubernetes endpoint on: https://172.20.0.1:6443
util.go:2527: Checking that all ValidatingAdmissionPolicies are present
util.go:2553: Checking Denied KAS Requests for ValidatingAdmissionPolicies
util.go:2569: Checking ClusterOperator status modifications are allowed
TestCreateClusterCustomConfig/EnsureHostedCluster
0s
util.go:284: Successfully waited for kubeconfig to be published for HostedCluster e2e-clusters-2bkwn/custom-config-749rg in 0s
util.go:301: Successfully waited for kubeconfig secret to have data in 0s
util.go:363: Successfully waited for a successful connection to the guest API server in 0s
TestCreateClusterCustomConfig/EnsureHostedCluster/EnsureAllContainersHavePullPolicyIfNotPresent
0s
TestCreateClusterCustomConfig/EnsureHostedCluster/EnsureAllContainersHaveTerminationMessagePolicyFallbackToLogsOnError
0s
TestCreateClusterCustomConfig/EnsureHostedCluster/EnsureAllRoutesUseHCPRouter
0s
TestCreateClusterCustomConfig/EnsureHostedCluster/EnsureHCPContainersHaveResourceRequests
0s
TestCreateClusterCustomConfig/EnsureHostedCluster/EnsureHCPPodsAffinitiesAndTolerations
0s
TestCreateClusterCustomConfig/EnsureHostedCluster/EnsureNetworkPolicies
0s
TestCreateClusterCustomConfig/EnsureHostedCluster/EnsureNetworkPolicies/EnsureComponentsHaveNeedManagementKASAccessLabel
0s
TestCreateClusterCustomConfig/EnsureHostedCluster/EnsureNetworkPolicies/EnsureLimitedEgressTrafficToManagementKAS
0s
util.go:1181: Connecting to kubernetes endpoint on: https://172.20.0.1:6443
TestCreateClusterCustomConfig/EnsureHostedCluster/EnsureNoPodsWithTooHighPriority
0s
TestCreateClusterCustomConfig/EnsureHostedCluster/EnsureNoRapidDeploymentRollouts
0s
TestCreateClusterCustomConfig/EnsureHostedCluster/EnsurePayloadArchSetCorrectly
0s
util.go:3224: Successfully waited for HostedCluster e2e-clusters-2bkwn/custom-config-749rg to have valid Status.Payload in 0s
TestCreateClusterCustomConfig/EnsureHostedCluster/EnsurePodsWithEmptyDirPVsHaveSafeToEvictAnnotations
0s
TestCreateClusterCustomConfig/EnsureHostedCluster/EnsureReadOnlyRootFilesystem
0s
TestCreateClusterCustomConfig/EnsureHostedCluster/EnsureReadOnlyRootFilesystemTmpDirMount
0s
TestCreateClusterCustomConfig/EnsureHostedCluster/EnsureSATokenNotMountedUnlessNecessary
0s
TestCreateClusterCustomConfig/EnsureHostedCluster/EnsureValidatingAdmissionPoliciesCheckDeniedRequests
0s
util.go:2553: Checking Denied KAS Requests for ValidatingAdmissionPolicies
TestCreateClusterCustomConfig/EnsureHostedCluster/EnsureValidatingAdmissionPoliciesDontBlockStatusModifications
0s
util.go:2569: Checking ClusterOperator status modifications are allowed
TestCreateClusterCustomConfig/EnsureHostedCluster/EnsureValidatingAdmissionPoliciesExists
0s
util.go:2527: Checking that all ValidatingAdmissionPolicies are present
TestCreateClusterCustomConfig/EnsureHostedCluster/NoticePreemptionOrFailedScheduling
0s
TestCreateClusterCustomConfig/EnsureHostedCluster/ValidateMetricsAreExposed
0s
TestCreateClusterCustomConfig/Main
0s
util.go:284: Successfully waited for kubeconfig to be published for HostedCluster e2e-clusters-2bkwn/custom-config-749rg in 0s
util.go:301: Successfully waited for kubeconfig secret to have data in 0s
util.go:363: Successfully waited for a successful connection to the guest API server in 0s
util.go:284: Successfully waited for kubeconfig to be published for HostedCluster e2e-clusters-2bkwn/custom-config-749rg in 0s
util.go:301: Successfully waited for kubeconfig secret to have data in 0s
TestCreateClusterCustomConfig/Main/EnsureCNOOperatorConfiguration
0s
util.go:3937: Updating HostedCluster e2e-clusters-2bkwn/custom-config-749rg with custom OVN internal subnets
util.go:3956: Validating CNO conditions on HostedControlPlane
util.go:3958: Successfully waited for HostedControlPlane e2e-clusters-2bkwn-custom-config-749rg/custom-config-749rg to have healthy CNO conditions in 0s
util.go:2949: Successfully waited for HostedCluster e2e-clusters-2bkwn/custom-config-749rg to have valid conditions in 0s
util.go:3985: Successfully waited for Network.operator.openshift.io/cluster in guest cluster to reflect the custom subnet changes in 3s
util.go:4015: Successfully waited for Network.config.openshift.io/cluster in guest cluster to be available in 0s
TestCreateClusterCustomConfig/Main/EnsureConsoleCapabilityDisabled
0s
TestCreateClusterCustomConfig/Main/EnsureImageRegistryCapabilityDisabled
0s
util.go:3459: Successfully waited for Waiting for service account default/default to be provisioned... in 0s
eventually.go:104: Failed to get *v1.ServiceAccount: serviceaccounts "default" not found
util.go:3482: Successfully waited for Waiting for service account default/test-namespace to be provisioned... in 10s
TestCreateClusterCustomConfig/Main/EnsureIngressCapabilityDisabled
0s
TestCreateClusterCustomConfig/Main/EnsureInsightsCapabilityDisabled
0s
TestCreateClusterCustomConfig/Main/EnsureNodeTuningCapabilityDisabled
0s
util.go:284: Successfully waited for kubeconfig to be published for HostedCluster e2e-clusters-2bkwn/custom-config-749rg in 0s
util.go:301: Successfully waited for kubeconfig secret to have data in 0s
util.go:363: Successfully waited for a successful connection to the guest API server in 0s
util.go:3597: Checking that Tuned resource type does not exist in guest cluster
util.go:3610: Checking that Profile resource type does not exist in guest cluster
util.go:3622: Checking that no tuned DaemonSet exists in guest cluster
util.go:3631: Checking that no tuned-related ConfigMaps exist in guest cluster
util.go:3656: NodeTuning capability disabled validation completed successfully
TestCreateClusterCustomConfig/Main/EnsureOAuthWithIdentityProvider
0s
util.go:284: Successfully waited for kubeconfig to be published for HostedCluster e2e-clusters-2bkwn/custom-config-749rg in 0s
util.go:301: Successfully waited for kubeconfig secret to have data in 0s
oauth.go:170: Found OAuth route oauth-custom-config-749rg.service.ci.hypershift.devcluster.openshift.com
oauth.go:192: Observed OAuth route oauth-custom-config-749rg.service.ci.hypershift.devcluster.openshift.com to be healthy
oauth.go:151: OAuth token retrieved successfully for user kubeadmin
util.go:284: Successfully waited for kubeconfig to be published for HostedCluster e2e-clusters-2bkwn/custom-config-749rg in 0s
util.go:301: Successfully waited for kubeconfig secret to have data in 0s
oauth.go:170: Found OAuth route oauth-custom-config-749rg.service.ci.hypershift.devcluster.openshift.com
oauth.go:192: Observed OAuth route oauth-custom-config-749rg.service.ci.hypershift.devcluster.openshift.com to be healthy
oauth.go:151: OAuth token retrieved successfully for user testuser
TestCreateClusterCustomConfig/Main/EnsureOpenshiftSamplesCapabilityDisabled
0s
TestCreateClusterCustomConfig/Main/EnsureSecretEncryptedUsingKMSV2
0s
TestCreateClusterCustomConfig/ValidateHostedCluster
0s
util.go:284: Successfully waited for kubeconfig to be published for HostedCluster e2e-clusters-2bkwn/custom-config-749rg in 1m27s
util.go:301: Successfully waited for kubeconfig secret to have data in 0s
eventually.go:104: Failed to get *v1.SelfSubjectReview: Post "https://api-custom-config-749rg.service.ci.hypershift.devcluster.openshift.com:443/apis/authentication.k8s.io/v1/selfsubjectreviews": dial tcp: lookup api-custom-config-749rg.service.ci.hypershift.devcluster.openshift.com on 172.30.0.10:53: no such host
eventually.go:104: Failed to get *v1.SelfSubjectReview: Post "https://api-custom-config-749rg.service.ci.hypershift.devcluster.openshift.com:443/apis/authentication.k8s.io/v1/selfsubjectreviews": dial tcp 98.90.238.208:443: i/o timeout
eventually.go:104: Failed to get *v1.SelfSubjectReview: Post "https://api-custom-config-749rg.service.ci.hypershift.devcluster.openshift.com:443/apis/authentication.k8s.io/v1/selfsubjectreviews": dial tcp 98.90.146.68:443: i/o timeout
eventually.go:104: Failed to get *v1.SelfSubjectReview: Post "https://api-custom-config-749rg.service.ci.hypershift.devcluster.openshift.com:443/apis/authentication.k8s.io/v1/selfsubjectreviews": dial tcp 52.7.221.4:443: i/o timeout
eventually.go:104: Failed to get *v1.SelfSubjectReview: Post "https://api-custom-config-749rg.service.ci.hypershift.devcluster.openshift.com:443/apis/authentication.k8s.io/v1/selfsubjectreviews": dial tcp 98.90.238.208:443: connect: connection refused
eventually.go:104: Failed to get *v1.SelfSubjectReview: Post "https://api-custom-config-749rg.service.ci.hypershift.devcluster.openshift.com:443/apis/authentication.k8s.io/v1/selfsubjectreviews": dial tcp 52.7.221.4:443: connect: connection refused
eventually.go:104: Failed to get *v1.SelfSubjectReview: Post "https://api-custom-config-749rg.service.ci.hypershift.devcluster.openshift.com:443/apis/authentication.k8s.io/v1/selfsubjectreviews": dial tcp 98.90.238.208:443: connect: connection refused
eventually.go:104: Failed to get *v1.SelfSubjectReview: Post "https://api-custom-config-749rg.service.ci.hypershift.devcluster.openshift.com:443/apis/authentication.k8s.io/v1/selfsubjectreviews": dial tcp 52.7.221.4:443: connect: connection refused
eventually.go:104: Failed to get *v1.SelfSubjectReview: Post "https://api-custom-config-749rg.service.ci.hypershift.devcluster.openshift.com:443/apis/authentication.k8s.io/v1/selfsubjectreviews": dial tcp 98.90.238.208:443: connect: connection refused
eventually.go:104: Failed to get *v1.SelfSubjectReview: Post "https://api-custom-config-749rg.service.ci.hypershift.devcluster.openshift.com:443/apis/authentication.k8s.io/v1/selfsubjectreviews": dial tcp 52.7.221.4:443: connect: connection refused
eventually.go:104: Failed to get *v1.SelfSubjectReview: Post "https://api-custom-config-749rg.service.ci.hypershift.devcluster.openshift.com:443/apis/authentication.k8s.io/v1/selfsubjectreviews": dial tcp 98.90.146.68:443: connect: connection refused
util.go:363: Successfully waited for a successful connection to the guest API server in 2m24.025s
util.go:565: Successfully waited for 2 nodes to become ready in 7m18s
util.go:598: Successfully waited for HostedCluster e2e-clusters-2bkwn/custom-config-749rg to rollout in 7m21s
util.go:2949: Successfully waited for HostedCluster e2e-clusters-2bkwn/custom-config-749rg to have valid conditions in 0s
TestCreateClusterCustomConfig/ValidateHostedCluster/EnsureGuestWebhooksValidated
0s
TestCreateClusterCustomConfig/ValidateHostedCluster/EnsureNoCrashingPods
0s
util.go:284: Successfully waited for kubeconfig to be published for HostedCluster e2e-clusters-2bkwn/custom-config-749rg in 0s
util.go:301: Successfully waited for kubeconfig secret to have data in 0s
TestCreateClusterCustomConfig/ValidateHostedCluster/EnsureNodeCommunication
0s
util.go:284: Successfully waited for kubeconfig to be published for HostedCluster e2e-clusters-2bkwn/custom-config-749rg in 0s
util.go:301: Successfully waited for kubeconfig secret to have data in 0s
TestCreateClusterCustomConfig/ValidateHostedCluster/EnsureNodeCountMatchesNodePoolReplicas
0s
util.go:565: Successfully waited for 2 nodes to become ready for NodePool e2e-clusters-2bkwn/custom-config-749rg-us-east-1b in 25ms
TestCreateClusterCustomConfig/ValidateHostedCluster/EnsureOAPIMountsTrustBundle
0s
TestCreateClusterCustomConfig/ValidateHostedCluster/ValidateConfigurationStatus
0s
util.go:4095: Successfully validated configuration authentication status consistency across HCP, HC, and guest cluster
TestCreateClusterPrivate
0s
hypershift_framework.go:430: Successfully created hostedcluster e2e-clusters-8n2wb/private-hhwqz in 20s
journals.go:245: Successfully copied machine journals to /logs/artifacts/TestCreateClusterPrivate/machine-journals
util.go:284: Successfully waited for kubeconfig to be published for HostedCluster e2e-clusters-8n2wb/private-hhwqz in 1m39s
util.go:301: Successfully waited for kubeconfig secret to have data in 0s
util.go:695: Successfully waited for NodePools for HostedCluster e2e-clusters-8n2wb/private-hhwqz to have all of their desired nodes in 11m6s
util.go:598: Successfully waited for HostedCluster e2e-clusters-8n2wb/private-hhwqz to rollout in 5m21s
util.go:2949: Successfully waited for HostedCluster e2e-clusters-8n2wb/private-hhwqz to have valid conditions in 0s
util.go:284: Successfully waited for kubeconfig to be published for HostedCluster e2e-clusters-8n2wb/private-hhwqz in 0s
util.go:301: Successfully waited for kubeconfig secret to have data in 0s
util.go:284: Successfully waited for kubeconfig to be published for HostedCluster e2e-clusters-8n2wb/private-hhwqz in 0s
util.go:301: Successfully waited for kubeconfig secret to have data in 0s
create_cluster_test.go:2909: Found guest kubeconfig host before switching endpoint access: https://api.private-hhwqz.hypershift.local:6443
util.go:395: Waiting for guest kubeconfig host update
util.go:284: Successfully waited for kubeconfig to be published for HostedCluster e2e-clusters-8n2wb/private-hhwqz in 0s
util.go:301: Successfully waited for kubeconfig secret to have data in 0s
util.go:405: guest kubeconfig host is not yet updated, keep polling
util.go:284: Successfully waited for kubeconfig to be published for HostedCluster e2e-clusters-8n2wb/private-hhwqz in 0s
util.go:301: Successfully waited for kubeconfig secret to have data in 0s
util.go:411: kubeconfig host switched from https://api.private-hhwqz.hypershift.local:6443 to https://a279bfe032ec5403289743968c5498bd-86e365293cce30db.elb.us-east-1.amazonaws.com:6443
util.go:284: Successfully waited for kubeconfig to be published for HostedCluster e2e-clusters-8n2wb/private-hhwqz in 0s
util.go:301: Successfully waited for kubeconfig secret to have data in 0s
create_cluster_test.go:2909: Found guest kubeconfig host before switching endpoint access: https://a279bfe032ec5403289743968c5498bd-86e365293cce30db.elb.us-east-1.amazonaws.com:6443
util.go:395: Waiting for guest kubeconfig host update
util.go:284: Successfully waited for kubeconfig to be published for HostedCluster e2e-clusters-8n2wb/private-hhwqz in 0s
util.go:301: Successfully waited for kubeconfig secret to have data in 0s
util.go:405: guest kubeconfig host is not yet updated, keep polling
util.go:284: Successfully waited for kubeconfig to be published for HostedCluster e2e-clusters-8n2wb/private-hhwqz in 0s
util.go:301: Successfully waited for kubeconfig secret to have data in 0s
util.go:411: kubeconfig host switched from https://a279bfe032ec5403289743968c5498bd-86e365293cce30db.elb.us-east-1.amazonaws.com:6443 to https://api.private-hhwqz.hypershift.local:6443
util.go:3224: Successfully waited for HostedCluster e2e-clusters-8n2wb/private-hhwqz to have valid Status.Payload in 0s
util.go:1136: skipping test because APIServer is not exposed through a route
util.go:1181: Connecting to kubernetes endpoint on: https://172.20.0.1:6443
TestCreateClusterPrivate/EnsureHostedCluster
0s
TestCreateClusterPrivate/EnsureHostedCluster/EnsureAllContainersHavePullPolicyIfNotPresent
0s
TestCreateClusterPrivate/EnsureHostedCluster/EnsureAllContainersHaveTerminationMessagePolicyFallbackToLogsOnError
0s
TestCreateClusterPrivate/EnsureHostedCluster/EnsureAllRoutesUseHCPRouter
0s
util.go:1136: skipping test because APIServer is not exposed through a route
TestCreateClusterPrivate/EnsureHostedCluster/EnsureHCPContainersHaveResourceRequests
0s
TestCreateClusterPrivate/EnsureHostedCluster/EnsureHCPPodsAffinitiesAndTolerations
0s
TestCreateClusterPrivate/EnsureHostedCluster/EnsureNetworkPolicies
0s
TestCreateClusterPrivate/EnsureHostedCluster/EnsureNetworkPolicies/EnsureComponentsHaveNeedManagementKASAccessLabel
0s
TestCreateClusterPrivate/EnsureHostedCluster/EnsureNetworkPolicies/EnsureLimitedEgressTrafficToManagementKAS
0s
util.go:1181: Connecting to kubernetes endpoint on: https://172.20.0.1:6443
TestCreateClusterPrivate/EnsureHostedCluster/EnsureNoPodsWithTooHighPriority
0s
TestCreateClusterPrivate/EnsureHostedCluster/EnsureNoRapidDeploymentRollouts
0s
TestCreateClusterPrivate/EnsureHostedCluster/EnsurePayloadArchSetCorrectly
0s
util.go:3224: Successfully waited for HostedCluster e2e-clusters-8n2wb/private-hhwqz to have valid Status.Payload in 0s
TestCreateClusterPrivate/EnsureHostedCluster/EnsurePodsWithEmptyDirPVsHaveSafeToEvictAnnotations
0s
TestCreateClusterPrivate/EnsureHostedCluster/EnsureReadOnlyRootFilesystem
0s
TestCreateClusterPrivate/EnsureHostedCluster/EnsureReadOnlyRootFilesystemTmpDirMount
0s
TestCreateClusterPrivate/EnsureHostedCluster/EnsureSATokenNotMountedUnlessNecessary
0s
TestCreateClusterPrivate/EnsureHostedCluster/NoticePreemptionOrFailedScheduling
0s
TestCreateClusterPrivate/EnsureHostedCluster/ValidateMetricsAreExposed
0s
TestCreateClusterPrivate/Main
0s
TestCreateClusterPrivate/Main/SwitchFromPrivateToPublic
0s
util.go:284: Successfully waited for kubeconfig to be published for HostedCluster e2e-clusters-8n2wb/private-hhwqz in 0s
util.go:301: Successfully waited for kubeconfig secret to have data in 0s
create_cluster_test.go:2909: Found guest kubeconfig host before switching endpoint access: https://api.private-hhwqz.hypershift.local:6443
util.go:395: Waiting for guest kubeconfig host update
util.go:284: Successfully waited for kubeconfig to be published for HostedCluster e2e-clusters-8n2wb/private-hhwqz in 0s
util.go:301: Successfully waited for kubeconfig secret to have data in 0s
util.go:405: guest kubeconfig host is not yet updated, keep polling
util.go:284: Successfully waited for kubeconfig to be published for HostedCluster e2e-clusters-8n2wb/private-hhwqz in 0s
util.go:301: Successfully waited for kubeconfig secret to have data in 0s
util.go:411: kubeconfig host switched from https://api.private-hhwqz.hypershift.local:6443 to https://a279bfe032ec5403289743968c5498bd-86e365293cce30db.elb.us-east-1.amazonaws.com:6443
TestCreateClusterPrivate/Main/SwitchFromPublicToPrivate
0s
util.go:284: Successfully waited for kubeconfig to be published for HostedCluster e2e-clusters-8n2wb/private-hhwqz in 0s
util.go:301: Successfully waited for kubeconfig secret to have data in 0s
create_cluster_test.go:2909: Found guest kubeconfig host before switching endpoint access: https://a279bfe032ec5403289743968c5498bd-86e365293cce30db.elb.us-east-1.amazonaws.com:6443
util.go:395: Waiting for guest kubeconfig host update
util.go:284: Successfully waited for kubeconfig to be published for HostedCluster e2e-clusters-8n2wb/private-hhwqz in 0s
util.go:301: Successfully waited for kubeconfig secret to have data in 0s
util.go:405: guest kubeconfig host is not yet updated, keep polling
util.go:284: Successfully waited for kubeconfig to be published for HostedCluster e2e-clusters-8n2wb/private-hhwqz in 0s
util.go:301: Successfully waited for kubeconfig secret to have data in 0s
util.go:411: kubeconfig host switched from https://a279bfe032ec5403289743968c5498bd-86e365293cce30db.elb.us-east-1.amazonaws.com:6443 to https://api.private-hhwqz.hypershift.local:6443
TestCreateClusterPrivate/ValidateHostedCluster
0s
util.go:284: Successfully waited for kubeconfig to be published for HostedCluster e2e-clusters-8n2wb/private-hhwqz in 1m39s
util.go:301: Successfully waited for kubeconfig secret to have data in 0s
util.go:695: Successfully waited for NodePools for HostedCluster e2e-clusters-8n2wb/private-hhwqz to have all of their desired nodes in 11m6s
util.go:598: Successfully waited for HostedCluster e2e-clusters-8n2wb/private-hhwqz to rollout in 5m21s
util.go:2949: Successfully waited for HostedCluster e2e-clusters-8n2wb/private-hhwqz to have valid conditions in 0s
TestCreateClusterPrivate/ValidateHostedCluster/EnsureNoCrashingPods
0s
util.go:284: Successfully waited for kubeconfig to be published for HostedCluster e2e-clusters-8n2wb/private-hhwqz in 0s
util.go:301: Successfully waited for kubeconfig secret to have data in 0s
TestCreateClusterPrivate/ValidateHostedCluster/EnsureOAPIMountsTrustBundle
0s
TestCreateClusterPrivateWithRouteKAS
25m6.72s
hypershift_framework.go:430: Successfully created hostedcluster e2e-clusters-bn42h/private-kf7vl in 47s
journals.go:245: Successfully copied machine journals to /logs/artifacts/TestCreateClusterPrivateWithRouteKAS/machine-journals
fixture.go:341: SUCCESS: found no remaining guest resources
hypershift_framework.go:491: Destroyed cluster. Namespace: e2e-clusters-bn42h, name: private-kf7vl
hypershift_framework.go:446: archiving /logs/artifacts/TestCreateClusterPrivateWithRouteKAS/hostedcluster-private-kf7vl to /logs/artifacts/TestCreateClusterPrivateWithRouteKAS/hostedcluster.tar.gz
util.go:284: Successfully waited for kubeconfig to be published for HostedCluster e2e-clusters-bn42h/private-kf7vl in 1m27s
util.go:301: Successfully waited for kubeconfig secret to have data in 0s
util.go:695: Successfully waited for NodePools for HostedCluster e2e-clusters-bn42h/private-kf7vl to have all of their desired nodes in 9m51s
util.go:598: Successfully waited for HostedCluster e2e-clusters-bn42h/private-kf7vl to rollout in 3m15s
util.go:2949: Successfully waited for HostedCluster e2e-clusters-bn42h/private-kf7vl to have valid conditions in 0s
TestCreateClusterPrivateWithRouteKAS/EnsureHostedCluster
3.03s
TestCreateClusterPrivateWithRouteKAS/EnsureHostedCluster/ValidateMetricsAreExposed
90ms
TestCreateClusterProxy
0s
hypershift_framework.go:430: Successfully created hostedcluster e2e-clusters-ql9sl/proxy-bx5wf in 34s
util.go:284: Successfully waited for kubeconfig to be published for HostedCluster e2e-clusters-ql9sl/proxy-bx5wf in 1m36s
util.go:301: Successfully waited for kubeconfig secret to have data in 0s
eventually.go:104: Failed to get *v1.SelfSubjectReview: Post "https://api-proxy-bx5wf.service.ci.hypershift.devcluster.openshift.com:443/apis/authentication.k8s.io/v1/selfsubjectreviews": dial tcp: lookup api-proxy-bx5wf.service.ci.hypershift.devcluster.openshift.com on 172.30.0.10:53: no such host
eventually.go:104: Failed to get *v1.SelfSubjectReview: Post "https://api-proxy-bx5wf.service.ci.hypershift.devcluster.openshift.com:443/apis/authentication.k8s.io/v1/selfsubjectreviews": dial tcp 35.171.39.94:443: i/o timeout
eventually.go:104: Failed to get *v1.SelfSubjectReview: Post "https://api-proxy-bx5wf.service.ci.hypershift.devcluster.openshift.com:443/apis/authentication.k8s.io/v1/selfsubjectreviews": dial tcp 98.95.145.250:443: i/o timeout
util.go:363: Successfully waited for a successful connection to the guest API server in 1m30.125s
util.go:565: Successfully waited for 2 nodes to become ready in 7m51s
util.go:598: Successfully waited for HostedCluster e2e-clusters-ql9sl/proxy-bx5wf to rollout in 3m39s
util.go:2949: Successfully waited for HostedCluster e2e-clusters-ql9sl/proxy-bx5wf to have valid conditions in 0s
util.go:565: Successfully waited for 2 nodes to become ready for NodePool e2e-clusters-ql9sl/proxy-bx5wf-us-east-1a in 25ms
util.go:284: Successfully waited for kubeconfig to be published for HostedCluster e2e-clusters-ql9sl/proxy-bx5wf in 0s
util.go:301: Successfully waited for kubeconfig secret to have data in 0s
util.go:284: Successfully waited for kubeconfig to be published for HostedCluster e2e-clusters-ql9sl/proxy-bx5wf in 0s
util.go:301: Successfully waited for kubeconfig secret to have data in 0s
util.go:4095: Successfully validated configuration authentication status consistency across HCP, HC, and guest cluster
util.go:284: Successfully waited for kubeconfig to be published for HostedCluster e2e-clusters-ql9sl/proxy-bx5wf in 0s
util.go:301: Successfully waited for kubeconfig secret to have data in 0s
util.go:363: Successfully waited for a successful connection to the guest API server in 0s
util.go:3224: Successfully waited for HostedCluster e2e-clusters-ql9sl/proxy-bx5wf to have valid Status.Payload in 0s
util.go:1181: Connecting to kubernetes endpoint on: https://172.20.0.1:6443
util.go:2527: Checking that all ValidatingAdmissionPolicies are present
util.go:2553: Checking Denied KAS Requests for ValidatingAdmissionPolicies
util.go:2569: Checking ClusterOperator status modifications are allowed
TestCreateClusterProxy/EnsureHostedCluster
0s
util.go:284: Successfully waited for kubeconfig to be published for HostedCluster e2e-clusters-ql9sl/proxy-bx5wf in 0s
util.go:301: Successfully waited for kubeconfig secret to have data in 0s
util.go:363: Successfully waited for a successful connection to the guest API server in 0s
TestCreateClusterProxy/EnsureHostedCluster/EnsureAllContainersHavePullPolicyIfNotPresent
0s
TestCreateClusterProxy/EnsureHostedCluster/EnsureAllContainersHaveTerminationMessagePolicyFallbackToLogsOnError
0s
TestCreateClusterProxy/EnsureHostedCluster/EnsureAllRoutesUseHCPRouter
0s
TestCreateClusterProxy/EnsureHostedCluster/EnsureHCPContainersHaveResourceRequests
0s
TestCreateClusterProxy/EnsureHostedCluster/EnsureHCPPodsAffinitiesAndTolerations
0s
TestCreateClusterProxy/EnsureHostedCluster/EnsureNetworkPolicies
0s
TestCreateClusterProxy/EnsureHostedCluster/EnsureNetworkPolicies/EnsureComponentsHaveNeedManagementKASAccessLabel
0s
TestCreateClusterProxy/EnsureHostedCluster/EnsureNetworkPolicies/EnsureLimitedEgressTrafficToManagementKAS
0s
util.go:1181: Connecting to kubernetes endpoint on: https://172.20.0.1:6443
TestCreateClusterProxy/EnsureHostedCluster/EnsureNoPodsWithTooHighPriority
0s
TestCreateClusterProxy/EnsureHostedCluster/EnsureNoRapidDeploymentRollouts
0s
TestCreateClusterProxy/EnsureHostedCluster/EnsurePayloadArchSetCorrectly
0s
util.go:3224: Successfully waited for HostedCluster e2e-clusters-ql9sl/proxy-bx5wf to have valid Status.Payload in 0s
TestCreateClusterProxy/EnsureHostedCluster/EnsurePodsWithEmptyDirPVsHaveSafeToEvictAnnotations
0s
TestCreateClusterProxy/EnsureHostedCluster/EnsureReadOnlyRootFilesystem
0s
TestCreateClusterProxy/EnsureHostedCluster/EnsureReadOnlyRootFilesystemTmpDirMount
0s
TestCreateClusterProxy/EnsureHostedCluster/EnsureSATokenNotMountedUnlessNecessary
0s
TestCreateClusterProxy/EnsureHostedCluster/EnsureValidatingAdmissionPoliciesCheckDeniedRequests
0s
util.go:2553: Checking Denied KAS Requests for ValidatingAdmissionPolicies
TestCreateClusterProxy/EnsureHostedCluster/EnsureValidatingAdmissionPoliciesDontBlockStatusModifications
0s
util.go:2569: Checking ClusterOperator status modifications are allowed
TestCreateClusterProxy/EnsureHostedCluster/EnsureValidatingAdmissionPoliciesExists
0s
util.go:2527: Checking that all ValidatingAdmissionPolicies are present
TestCreateClusterProxy/EnsureHostedCluster/NoticePreemptionOrFailedScheduling
0s
TestCreateClusterProxy/EnsureHostedCluster/ValidateMetricsAreExposed
0s
TestCreateClusterProxy/ValidateHostedCluster
0s
util.go:284: Successfully waited for kubeconfig to be published for HostedCluster e2e-clusters-ql9sl/proxy-bx5wf in 1m36s
util.go:301: Successfully waited for kubeconfig secret to have data in 0s
eventually.go:104: Failed to get *v1.SelfSubjectReview: Post "https://api-proxy-bx5wf.service.ci.hypershift.devcluster.openshift.com:443/apis/authentication.k8s.io/v1/selfsubjectreviews": dial tcp: lookup api-proxy-bx5wf.service.ci.hypershift.devcluster.openshift.com on 172.30.0.10:53: no such host
eventually.go:104: Failed to get *v1.SelfSubjectReview: Post "https://api-proxy-bx5wf.service.ci.hypershift.devcluster.openshift.com:443/apis/authentication.k8s.io/v1/selfsubjectreviews": dial tcp 35.171.39.94:443: i/o timeout
eventually.go:104: Failed to get *v1.SelfSubjectReview: Post "https://api-proxy-bx5wf.service.ci.hypershift.devcluster.openshift.com:443/apis/authentication.k8s.io/v1/selfsubjectreviews": dial tcp 98.95.145.250:443: i/o timeout
util.go:363: Successfully waited for a successful connection to the guest API server in 1m30.125s
util.go:565: Successfully waited for 2 nodes to become ready in 7m51s
util.go:598: Successfully waited for HostedCluster e2e-clusters-ql9sl/proxy-bx5wf to rollout in 3m39s
util.go:2949: Successfully waited for HostedCluster e2e-clusters-ql9sl/proxy-bx5wf to have valid conditions in 0s
TestCreateClusterProxy/ValidateHostedCluster/EnsureGuestWebhooksValidated
0s
TestCreateClusterProxy/ValidateHostedCluster/EnsureNoCrashingPods
0s
util.go:284: Successfully waited for kubeconfig to be published for HostedCluster e2e-clusters-ql9sl/proxy-bx5wf in 0s
util.go:301: Successfully waited for kubeconfig secret to have data in 0s
TestCreateClusterProxy/ValidateHostedCluster/EnsureNodeCommunication
0s
util.go:284: Successfully waited for kubeconfig to be published for HostedCluster e2e-clusters-ql9sl/proxy-bx5wf in 0s
util.go:301: Successfully waited for kubeconfig secret to have data in 0s
TestCreateClusterProxy/ValidateHostedCluster/EnsureNodeCountMatchesNodePoolReplicas
0s
util.go:565: Successfully waited for 2 nodes to become ready for NodePool e2e-clusters-ql9sl/proxy-bx5wf-us-east-1a in 25ms
TestCreateClusterProxy/ValidateHostedCluster/EnsureOAPIMountsTrustBundle
0s
TestCreateClusterProxy/ValidateHostedCluster/ValidateConfigurationStatus
0s
util.go:4095: Successfully validated configuration authentication status consistency across HCP, HC, and guest cluster
TestCreateClusterRequestServingIsolation
0s
requestserving.go:105: Created request serving nodepool clusters/53cd58f0c19c38590ee5-mgmt-reqserving-9pw4g
requestserving.go:105: Created request serving nodepool clusters/53cd58f0c19c38590ee5-mgmt-reqserving-f9s7n
requestserving.go:113: Created non request serving nodepool clusters/53cd58f0c19c38590ee5-mgmt-non-reqserving-jsxzj
requestserving.go:113: Created non request serving nodepool clusters/53cd58f0c19c38590ee5-mgmt-non-reqserving-gmrk6
requestserving.go:113: Created non request serving nodepool clusters/53cd58f0c19c38590ee5-mgmt-non-reqserving-bz5mk
util.go:565: Successfully waited for 1 nodes to become ready for NodePool clusters/53cd58f0c19c38590ee5-mgmt-reqserving-9pw4g in 3m48s
util.go:565: Successfully waited for 1 nodes to become ready for NodePool clusters/53cd58f0c19c38590ee5-mgmt-reqserving-f9s7n in 27s
util.go:565: Successfully waited for 1 nodes to become ready for NodePool clusters/53cd58f0c19c38590ee5-mgmt-non-reqserving-jsxzj in 0s
util.go:565: Successfully waited for 1 nodes to become ready for NodePool clusters/53cd58f0c19c38590ee5-mgmt-non-reqserving-gmrk6 in 27s
util.go:565: Successfully waited for 1 nodes to become ready for NodePool clusters/53cd58f0c19c38590ee5-mgmt-non-reqserving-bz5mk in 39s
create_cluster_test.go:2670: Sufficient zones available for InfrastructureAvailabilityPolicy HighlyAvailable
hypershift_framework.go:430: Successfully created hostedcluster e2e-clusters-5lhjx/request-serving-isolation-fvclb in 24s
journals.go:245: Successfully copied machine journals to /logs/artifacts/TestCreateClusterRequestServingIsolation/machine-journals
util.go:284: Successfully waited for kubeconfig to be published for HostedCluster e2e-clusters-5lhjx/request-serving-isolation-fvclb in 1m42s
util.go:301: Successfully waited for kubeconfig secret to have data in 0s
eventually.go:104: Failed to get *v1.SelfSubjectReview: Post "https://api-request-serving-isolation-fvclb.service.ci.hypershift.devcluster.openshift.com:443/apis/authentication.k8s.io/v1/selfsubjectreviews": dial tcp: lookup api-request-serving-isolation-fvclb.service.ci.hypershift.devcluster.openshift.com on 172.30.0.10:53: no such host
eventually.go:104: Failed to get *v1.SelfSubjectReview: Post "https://api-request-serving-isolation-fvclb.service.ci.hypershift.devcluster.openshift.com:443/apis/authentication.k8s.io/v1/selfsubjectreviews": dial tcp 98.90.160.42:443: i/o timeout
util.go:363: Successfully waited for a successful connection to the guest API server in 1m7.1s
util.go:565: Successfully waited for 3 nodes to become ready in 8m27s
util.go:598: Successfully waited for HostedCluster e2e-clusters-5lhjx/request-serving-isolation-fvclb to rollout in 3m51s
util.go:2949: Successfully waited for HostedCluster e2e-clusters-5lhjx/request-serving-isolation-fvclb to have valid conditions in 0s
util.go:565: Successfully waited for 1 nodes to become ready for NodePool e2e-clusters-5lhjx/request-serving-isolation-fvclb-us-east-1a in 25ms
util.go:565: Successfully waited for 1 nodes to become ready for NodePool e2e-clusters-5lhjx/request-serving-isolation-fvclb-us-east-1b in 0s
util.go:565: Successfully waited for 1 nodes to become ready for NodePool e2e-clusters-5lhjx/request-serving-isolation-fvclb-us-east-1c in 0s
util.go:284: Successfully waited for kubeconfig to be published for HostedCluster e2e-clusters-5lhjx/request-serving-isolation-fvclb in 0s
util.go:301: Successfully waited for kubeconfig secret to have data in 0s
util.go:284: Successfully waited for kubeconfig to be published for HostedCluster e2e-clusters-5lhjx/request-serving-isolation-fvclb in 0s
util.go:301: Successfully waited for kubeconfig secret to have data in 0s
util.go:4095: Successfully validated configuration authentication status consistency across HCP, HC, and guest cluster
util.go:284: Successfully waited for kubeconfig to be published for HostedCluster e2e-clusters-5lhjx/request-serving-isolation-fvclb in 0s
util.go:301: Successfully waited for kubeconfig secret to have data in 0s
util.go:363: Successfully waited for a successful connection to the guest API server in 0s
util.go:284: Successfully waited for kubeconfig to be published for HostedCluster e2e-clusters-5lhjx/request-serving-isolation-fvclb in 0s
util.go:301: Successfully waited for kubeconfig secret to have data in 0s
util.go:363: Successfully waited for a successful connection to the guest API server in 0s
util.go:3224: Successfully waited for HostedCluster e2e-clusters-5lhjx/request-serving-isolation-fvclb to have valid Status.Payload in 0s
util.go:822: error: non-fatal, observed FailedScheduling or Preempted event: 0/8 nodes are available: 2 node(s) didn't match pod anti-affinity rules, 6 node(s) didn't match Pod's node affinity/selector. preemption: not eligible due to preemptionPolicy=Never.
util.go:822: error: non-fatal, observed FailedScheduling or Preempted event: 0/8 nodes are available: 2 node(s) didn't match pod anti-affinity rules, 6 node(s) didn't match Pod's node affinity/selector. preemption: not eligible due to preemptionPolicy=Never.
util.go:822: error: non-fatal, observed FailedScheduling or Preempted event: 0/8 nodes are available: 2 node(s) didn't match pod anti-affinity rules, 6 node(s) didn't match Pod's node affinity/selector. preemption: not eligible due to preemptionPolicy=Never.
util.go:822: error: non-fatal, observed FailedScheduling or Preempted event: 0/8 nodes are available: 2 node(s) had untolerated taint {hypershift.openshift.io/request-serving-component: true}, 3 node(s) didn't match Pod's node affinity/selector, 3 node(s) didn't match pod anti-affinity rules. preemption: not eligible due to preemptionPolicy=Never.
util.go:822: error: non-fatal, observed FailedScheduling or Preempted event: 0/8 nodes are available: 2 node(s) had untolerated taint {hypershift.openshift.io/request-serving-component: true}, 3 node(s) didn't match Pod's node affinity/selector, 3 node(s) didn't match pod anti-affinity rules. preemption: not eligible due to preemptionPolicy=Never.
util.go:822: error: non-fatal, observed FailedScheduling or Preempted event: 0/8 nodes are available: 2 node(s) didn't match pod anti-affinity rules, 6 node(s) didn't match Pod's node affinity/selector. preemption: not eligible due to preemptionPolicy=Never.
util.go:822: error: non-fatal, observed FailedScheduling or Preempted event: 0/8 nodes are available: 2 node(s) didn't match pod anti-affinity rules, 6 node(s) didn't match Pod's node affinity/selector. preemption: not eligible due to preemptionPolicy=Never.
util.go:822: error: non-fatal, observed FailedScheduling or Preempted event: 0/8 nodes are available: 2 node(s) didn't match pod anti-affinity rules, 6 node(s) didn't match Pod's node affinity/selector. preemption: not eligible due to preemptionPolicy=Never.
util.go:822: error: non-fatal, observed FailedScheduling or Preempted event: 0/8 nodes are available: 2 node(s) had untolerated taint {hypershift.openshift.io/request-serving-component: true}, 3 node(s) didn't match Pod's node affinity/selector, 3 node(s) didn't match pod anti-affinity rules. preemption: not eligible due to preemptionPolicy=Never.
util.go:822: error: non-fatal, observed FailedScheduling or Preempted event: 0/8 nodes are available: 2 node(s) had untolerated taint {hypershift.openshift.io/request-serving-component: true}, 3 node(s) didn't match Pod's node affinity/selector, 3 node(s) didn't match pod anti-affinity rules. preemption: not eligible due to preemptionPolicy=Never.
util.go:822: error: non-fatal, observed FailedScheduling or Preempted event: 0/8 nodes are available: 2 node(s) had untolerated taint {hypershift.openshift.io/request-serving-component: true}, 3 node(s) didn't match Pod's node affinity/selector, 3 node(s) didn't match pod anti-affinity rules. preemption: not eligible due to preemptionPolicy=Never.
util.go:1181: Connecting to kubernetes endpoint on: https://172.20.0.1:6443
util.go:2527: Checking that all ValidatingAdmissionPolicies are present
util.go:2553: Checking Denied KAS Requests for ValidatingAdmissionPolicies
util.go:2569: Checking ClusterOperator status modifications are allowed
TestCreateClusterRequestServingIsolation/EnsureHostedCluster
0s
util.go:284: Successfully waited for kubeconfig to be published for HostedCluster e2e-clusters-5lhjx/request-serving-isolation-fvclb in 0s
util.go:301: Successfully waited for kubeconfig secret to have data in 0s
util.go:363: Successfully waited for a successful connection to the guest API server in 0s
TestCreateClusterRequestServingIsolation/EnsureHostedCluster/EnsureAllContainersHavePullPolicyIfNotPresent
0s
TestCreateClusterRequestServingIsolation/EnsureHostedCluster/EnsureAllContainersHaveTerminationMessagePolicyFallbackToLogsOnError
0s
TestCreateClusterRequestServingIsolation/EnsureHostedCluster/EnsureAllRoutesUseHCPRouter
0s
TestCreateClusterRequestServingIsolation/EnsureHostedCluster/EnsureHCPContainersHaveResourceRequests
0s
TestCreateClusterRequestServingIsolation/EnsureHostedCluster/EnsureHCPPodsAffinitiesAndTolerations
0s
TestCreateClusterRequestServingIsolation/EnsureHostedCluster/EnsureNetworkPolicies
0s
TestCreateClusterRequestServingIsolation/EnsureHostedCluster/EnsureNetworkPolicies/EnsureComponentsHaveNeedManagementKASAccessLabel
0s
TestCreateClusterRequestServingIsolation/EnsureHostedCluster/EnsureNetworkPolicies/EnsureLimitedEgressTrafficToManagementKAS
0s
util.go:1181: Connecting to kubernetes endpoint on: https://172.20.0.1:6443
TestCreateClusterRequestServingIsolation/EnsureHostedCluster/EnsureNoPodsWithTooHighPriority
0s
TestCreateClusterRequestServingIsolation/EnsureHostedCluster/EnsureNoRapidDeploymentRollouts
0s
TestCreateClusterRequestServingIsolation/EnsureHostedCluster/EnsurePayloadArchSetCorrectly
0s
util.go:3224: Successfully waited for HostedCluster e2e-clusters-5lhjx/request-serving-isolation-fvclb to have valid Status.Payload in 0s
TestCreateClusterRequestServingIsolation/EnsureHostedCluster/EnsurePodsWithEmptyDirPVsHaveSafeToEvictAnnotations
0s
TestCreateClusterRequestServingIsolation/EnsureHostedCluster/EnsureReadOnlyRootFilesystem
0s
TestCreateClusterRequestServingIsolation/EnsureHostedCluster/EnsureReadOnlyRootFilesystemTmpDirMount
0s
TestCreateClusterRequestServingIsolation/EnsureHostedCluster/EnsureSATokenNotMountedUnlessNecessary
0s
TestCreateClusterRequestServingIsolation/EnsureHostedCluster/EnsureValidatingAdmissionPoliciesCheckDeniedRequests
0s
util.go:2553: Checking Denied KAS Requests for ValidatingAdmissionPolicies
TestCreateClusterRequestServingIsolation/EnsureHostedCluster/EnsureValidatingAdmissionPoliciesDontBlockStatusModifications
0s
util.go:2569: Checking ClusterOperator status modifications are allowed
TestCreateClusterRequestServingIsolation/EnsureHostedCluster/EnsureValidatingAdmissionPoliciesExists
0s
util.go:2527: Checking that all ValidatingAdmissionPolicies are present
TestCreateClusterRequestServingIsolation/EnsureHostedCluster/NoticePreemptionOrFailedScheduling
0s
util.go:822: error: non-fatal, observed FailedScheduling or Preempted event: 0/8 nodes are available: 2 node(s) didn't match pod anti-affinity rules, 6 node(s) didn't match Pod's node affinity/selector. preemption: not eligible due to preemptionPolicy=Never.
util.go:822: error: non-fatal, observed FailedScheduling or Preempted event: 0/8 nodes are available: 2 node(s) didn't match pod anti-affinity rules, 6 node(s) didn't match Pod's node affinity/selector. preemption: not eligible due to preemptionPolicy=Never.
util.go:822: error: non-fatal, observed FailedScheduling or Preempted event: 0/8 nodes are available: 2 node(s) didn't match pod anti-affinity rules, 6 node(s) didn't match Pod's node affinity/selector. preemption: not eligible due to preemptionPolicy=Never.
util.go:822: error: non-fatal, observed FailedScheduling or Preempted event: 0/8 nodes are available: 2 node(s) had untolerated taint {hypershift.openshift.io/request-serving-component: true}, 3 node(s) didn't match Pod's node affinity/selector, 3 node(s) didn't match pod anti-affinity rules. preemption: not eligible due to preemptionPolicy=Never.
util.go:822: error: non-fatal, observed FailedScheduling or Preempted event: 0/8 nodes are available: 2 node(s) had untolerated taint {hypershift.openshift.io/request-serving-component: true}, 3 node(s) didn't match Pod's node affinity/selector, 3 node(s) didn't match pod anti-affinity rules. preemption: not eligible due to preemptionPolicy=Never.
util.go:822: error: non-fatal, observed FailedScheduling or Preempted event: 0/8 nodes are available: 2 node(s) didn't match pod anti-affinity rules, 6 node(s) didn't match Pod's node affinity/selector. preemption: not eligible due to preemptionPolicy=Never.
util.go:822: error: non-fatal, observed FailedScheduling or Preempted event: 0/8 nodes are available: 2 node(s) didn't match pod anti-affinity rules, 6 node(s) didn't match Pod's node affinity/selector. preemption: not eligible due to preemptionPolicy=Never.
util.go:822: error: non-fatal, observed FailedScheduling or Preempted event: 0/8 nodes are available: 2 node(s) didn't match pod anti-affinity rules, 6 node(s) didn't match Pod's node affinity/selector. preemption: not eligible due to preemptionPolicy=Never.
util.go:822: error: non-fatal, observed FailedScheduling or Preempted event: 0/8 nodes are available: 2 node(s) had untolerated taint {hypershift.openshift.io/request-serving-component: true}, 3 node(s) didn't match Pod's node affinity/selector, 3 node(s) didn't match pod anti-affinity rules. preemption: not eligible due to preemptionPolicy=Never.
util.go:822: error: non-fatal, observed FailedScheduling or Preempted event: 0/8 nodes are available: 2 node(s) had untolerated taint {hypershift.openshift.io/request-serving-component: true}, 3 node(s) didn't match Pod's node affinity/selector, 3 node(s) didn't match pod anti-affinity rules. preemption: not eligible due to preemptionPolicy=Never.
util.go:822: error: non-fatal, observed FailedScheduling or Preempted event: 0/8 nodes are available: 2 node(s) had untolerated taint {hypershift.openshift.io/request-serving-component: true}, 3 node(s) didn't match Pod's node affinity/selector, 3 node(s) didn't match pod anti-affinity rules. preemption: not eligible due to preemptionPolicy=Never.
TestCreateClusterRequestServingIsolation/EnsureHostedCluster/ValidateMetricsAreExposed
0s
TestCreateClusterRequestServingIsolation/Main
0s
util.go:284: Successfully waited for kubeconfig to be published for HostedCluster e2e-clusters-5lhjx/request-serving-isolation-fvclb in 0s
util.go:301: Successfully waited for kubeconfig secret to have data in 0s
util.go:363: Successfully waited for a successful connection to the guest API server in 0s
TestCreateClusterRequestServingIsolation/Main/EnsurePSANotPrivileged
0s
TestCreateClusterRequestServingIsolation/ValidateHostedCluster
0s
util.go:284: Successfully waited for kubeconfig to be published for HostedCluster e2e-clusters-5lhjx/request-serving-isolation-fvclb in 1m42s
util.go:301: Successfully waited for kubeconfig secret to have data in 0s
eventually.go:104: Failed to get *v1.SelfSubjectReview: Post "https://api-request-serving-isolation-fvclb.service.ci.hypershift.devcluster.openshift.com:443/apis/authentication.k8s.io/v1/selfsubjectreviews": dial tcp: lookup api-request-serving-isolation-fvclb.service.ci.hypershift.devcluster.openshift.com on 172.30.0.10:53: no such host
eventually.go:104: Failed to get *v1.SelfSubjectReview: Post "https://api-request-serving-isolation-fvclb.service.ci.hypershift.devcluster.openshift.com:443/apis/authentication.k8s.io/v1/selfsubjectreviews": dial tcp 98.90.160.42:443: i/o timeout
util.go:363: Successfully waited for a successful connection to the guest API server in 1m7.1s
util.go:565: Successfully waited for 3 nodes to become ready in 8m27s
util.go:598: Successfully waited for HostedCluster e2e-clusters-5lhjx/request-serving-isolation-fvclb to rollout in 3m51s
util.go:2949: Successfully waited for HostedCluster e2e-clusters-5lhjx/request-serving-isolation-fvclb to have valid conditions in 0s
TestCreateClusterRequestServingIsolation/ValidateHostedCluster/EnsureGuestWebhooksValidated
0s
TestCreateClusterRequestServingIsolation/ValidateHostedCluster/EnsureNoCrashingPods
0s
util.go:284: Successfully waited for kubeconfig to be published for HostedCluster e2e-clusters-5lhjx/request-serving-isolation-fvclb in 0s
util.go:301: Successfully waited for kubeconfig secret to have data in 0s
TestCreateClusterRequestServingIsolation/ValidateHostedCluster/EnsureNodeCommunication
0s
util.go:284: Successfully waited for kubeconfig to be published for HostedCluster e2e-clusters-5lhjx/request-serving-isolation-fvclb in 0s
util.go:301: Successfully waited for kubeconfig secret to have data in 0s
TestCreateClusterRequestServingIsolation/ValidateHostedCluster/EnsureNodeCountMatchesNodePoolReplicas
0s
util.go:565: Successfully waited for 1 nodes to become ready for NodePool e2e-clusters-5lhjx/request-serving-isolation-fvclb-us-east-1a in 25ms
util.go:565: Successfully waited for 1 nodes to become ready for NodePool e2e-clusters-5lhjx/request-serving-isolation-fvclb-us-east-1b in 0s
util.go:565: Successfully waited for 1 nodes to become ready for NodePool e2e-clusters-5lhjx/request-serving-isolation-fvclb-us-east-1c in 0s
TestCreateClusterRequestServingIsolation/ValidateHostedCluster/EnsureOAPIMountsTrustBundle
0s
TestCreateClusterRequestServingIsolation/ValidateHostedCluster/ValidateConfigurationStatus
0s
util.go:4095: Successfully validated configuration authentication status consistency across HCP, HC, and guest cluster
TestNodePool
0s
hypershift_framework.go:430: Successfully created hostedcluster e2e-clusters-7wk5x/node-pool-k6mb4 in 48s
nodepool_test.go:150: tests only supported on platform KubeVirt
hypershift_framework.go:430: Successfully created hostedcluster e2e-clusters-dfvmn/node-pool-hfcbz in 42s
util.go:284: Successfully waited for kubeconfig to be published for HostedCluster e2e-clusters-dfvmn/node-pool-hfcbz in 1m27s
util.go:301: Successfully waited for kubeconfig secret to have data in 0s
eventually.go:104: Failed to get *v1.SelfSubjectReview: Post "https://api-node-pool-hfcbz.service.ci.hypershift.devcluster.openshift.com:443/apis/authentication.k8s.io/v1/selfsubjectreviews": dial tcp: lookup api-node-pool-hfcbz.service.ci.hypershift.devcluster.openshift.com on 172.30.0.10:53: no such host
eventually.go:104: Failed to get *v1.SelfSubjectReview: Post "https://api-node-pool-hfcbz.service.ci.hypershift.devcluster.openshift.com:443/apis/authentication.k8s.io/v1/selfsubjectreviews": dial tcp 44.197.87.33:443: i/o timeout
util.go:363: Successfully waited for a successful connection to the guest API server in 1m38.075s
util.go:565: Successfully waited for 0 nodes to become ready in 0s
util.go:2949: Successfully waited for HostedCluster e2e-clusters-dfvmn/node-pool-hfcbz to have valid conditions in 2m30s
util.go:284: Successfully waited for kubeconfig to be published for HostedCluster e2e-clusters-7wk5x/node-pool-k6mb4 in 1m45s
util.go:301: Successfully waited for kubeconfig secret to have data in 0s
eventually.go:104: Failed to get *v1.SelfSubjectReview: Post "https://api-node-pool-k6mb4.service.ci.hypershift.devcluster.openshift.com:443/apis/authentication.k8s.io/v1/selfsubjectreviews": dial tcp: lookup api-node-pool-k6mb4.service.ci.hypershift.devcluster.openshift.com on 172.30.0.10:53: no such host
eventually.go:104: Failed to get *v1.SelfSubjectReview: Post "https://api-node-pool-k6mb4.service.ci.hypershift.devcluster.openshift.com:443/apis/authentication.k8s.io/v1/selfsubjectreviews": dial tcp 23.21.191.197:443: i/o timeout
util.go:363: Successfully waited for a successful connection to the guest API server in 1m25.025s
util.go:565: Successfully waited for 0 nodes to become ready in 25ms
util.go:2949: Successfully waited for HostedCluster e2e-clusters-7wk5x/node-pool-k6mb4 to have valid conditions in 2m27s
util.go:565: Successfully waited for 0 nodes to become ready for NodePool e2e-clusters-dfvmn/node-pool-hfcbz-us-east-1c in 25ms
util.go:284: Successfully waited for kubeconfig to be published for HostedCluster e2e-clusters-dfvmn/node-pool-hfcbz in 0s
util.go:301: Successfully waited for kubeconfig secret to have data in 0s
util.go:4095: Successfully validated configuration authentication status consistency across HCP, HC, and guest cluster
util.go:284: Successfully waited for kubeconfig to be published for HostedCluster e2e-clusters-dfvmn/node-pool-hfcbz in 0s
util.go:301: Successfully waited for kubeconfig secret to have data in 0s
util.go:363: Successfully waited for a successful connection to the guest API server in 0s
nodepool_additionalTrustBundlePropagation_test.go:38: Starting AdditionalTrustBundlePropagationTest.
util.go:565: Successfully waited for 1 nodes to become ready for NodePool e2e-clusters-dfvmn/node-pool-hfcbz-test-additional-trust-bundle-propagation in 5m36s
nodepool_test.go:395: Successfully waited for NodePool e2e-clusters-dfvmn/node-pool-hfcbz-test-additional-trust-bundle-propagation to have correct status in 0s
util.go:565: Successfully waited for 0 nodes to become ready for NodePool e2e-clusters-7wk5x/node-pool-k6mb4-us-east-1b in 25ms
util.go:284: Successfully waited for kubeconfig to be published for HostedCluster e2e-clusters-7wk5x/node-pool-k6mb4 in 0s
util.go:301: Successfully waited for kubeconfig secret to have data in 0s
util.go:4095: Successfully validated configuration authentication status consistency across HCP, HC, and guest cluster
util.go:284: Successfully waited for kubeconfig to be published for HostedCluster e2e-clusters-7wk5x/node-pool-k6mb4 in 0s
util.go:301: Successfully waited for kubeconfig secret to have data in 0s
util.go:363: Successfully waited for a successful connection to the guest API server in 0s
nodepool_kms_root_volume_test.go:42: Starting test KMSRootVolumeTest
nodepool_kms_root_volume_test.go:54: retrieved KMS ARN: arn:aws:kms:us-east-1:820196288204:key/d3cdd9e0-3fd1-47a4-a559-72ae3672c5a6
util.go:565: Successfully waited for 1 nodes to become ready for NodePool e2e-clusters-7wk5x/node-pool-k6mb4-test-kms-root-volume in 14m48.075s
nodepool_test.go:395: Successfully waited for NodePool e2e-clusters-7wk5x/node-pool-k6mb4-test-kms-root-volume to have correct status in 0s
nodepool_kms_root_volume_test.go:85: instanceID: i-015a38bfa73336817
nodepool_test.go:395: Successfully waited for NodePool e2e-clusters-7wk5x/node-pool-k6mb4-test-kms-root-volume to have correct status in 0s
nodepool_machineconfig_test.go:54: Starting test NodePoolMachineconfigRolloutTest
nodepool_nto_machineconfig_test.go:67: Starting test NTOMachineConfigRolloutTest
nodepool_nto_machineconfig_test.go:67: Starting test NTOMachineConfigRolloutTest
nodepool_upgrade_test.go:99: starting test NodePoolUpgradeTest
nodepool_upgrade_test.go:99: starting test NodePoolUpgradeTest
nodepool_kv_cache_image_test.go:42: test only supported on platform KubeVirt
nodepool_kv_qos_guaranteed_test.go:43: test only supported on platform KubeVirt
nodepool_kv_jsonpatch_test.go:42: test only supported on platform KubeVirt
nodepool_kv_nodeselector_test.go:48: test only supported on platform KubeVirt
nodepool_kv_multinet_test.go:36: test only supported on platform KubeVirt
nodepool_osp_advanced_test.go:53: Starting test OpenStackAdvancedTest
nodepool_osp_advanced_test.go:56: test only supported on platform OpenStack
nodepool_nto_performanceprofile_test.go:59: Starting test NTOPerformanceProfileTest
nodepool_prev_release_test.go:33: Starting NodePoolPrevReleaseCreateTest.
util.go:565: Successfully waited for 1 nodes to become ready for NodePool e2e-clusters-7wk5x/node-pool-k6mb4-dwrgk in 10m27s
nodepool_test.go:395: Successfully waited for NodePool e2e-clusters-7wk5x/node-pool-k6mb4-dwrgk to have correct status in 3s
nodepool_prev_release_test.go:57: NodePoolPrevReleaseCreateTest tests the creation of a NodePool with previous OCP release.
nodepool_prev_release_test.go:59: Validating all Nodes have the synced labels and taints
nodepool_test.go:395: Successfully waited for NodePool e2e-clusters-7wk5x/node-pool-k6mb4-dwrgk to have correct status in 0s
nodepool_prev_release_test.go:33: Starting NodePoolPrevReleaseCreateTest.
util.go:565: Successfully waited for 1 nodes to become ready for NodePool e2e-clusters-7wk5x/node-pool-k6mb4-k54wm in 18m30s
nodepool_test.go:395: Successfully waited for NodePool e2e-clusters-7wk5x/node-pool-k6mb4-k54wm to have correct status in 0s
nodepool_prev_release_test.go:57: NodePoolPrevReleaseCreateTest tests the creation of a NodePool with previous OCP release.
nodepool_prev_release_test.go:59: Validating all Nodes have the synced labels and taints
nodepool_test.go:395: Successfully waited for NodePool e2e-clusters-7wk5x/node-pool-k6mb4-k54wm to have correct status in 0s
nodepool_prev_release_test.go:33: Starting NodePoolPrevReleaseCreateTest.
util.go:565: Successfully waited for 1 nodes to become ready for NodePool e2e-clusters-7wk5x/node-pool-k6mb4-bzxdf in 14m42.1s
nodepool_test.go:395: Successfully waited for NodePool e2e-clusters-7wk5x/node-pool-k6mb4-bzxdf to have correct status in 0s
nodepool_prev_release_test.go:57: NodePoolPrevReleaseCreateTest tests the creation of a NodePool with previous OCP release.
nodepool_prev_release_test.go:59: Validating all Nodes have the synced labels and taints
nodepool_test.go:395: Successfully waited for NodePool e2e-clusters-7wk5x/node-pool-k6mb4-bzxdf to have correct status in 0s
nodepool_prev_release_test.go:33: Starting NodePoolPrevReleaseCreateTest.
nodepool_test.go:348: NodePool version is outside supported skew, validating condition only (skipping node readiness check)
nodepool_test.go:395: Successfully waited for NodePool e2e-clusters-7wk5x/node-pool-k6mb4-l7mp4 to have correct status in 3s
nodepool_mirrorconfigs_test.go:60: Starting test MirrorConfigsTest
util.go:565: Successfully waited for 1 nodes to become ready for NodePool e2e-clusters-7wk5x/node-pool-k6mb4-test-mirrorconfigs in 14m42s
nodepool_test.go:395: Successfully waited for NodePool e2e-clusters-7wk5x/node-pool-k6mb4-test-mirrorconfigs to have correct status in 0s
nodepool_mirrorconfigs_test.go:81: Entering MirrorConfigs test
nodepool_mirrorconfigs_test.go:111: Hosted control plane namespace is e2e-clusters-7wk5x-node-pool-k6mb4
nodepool_mirrorconfigs_test.go:113: Successfully waited for kubeletConfig should be mirrored and present in the hosted cluster in 3s
nodepool_mirrorconfigs_test.go:157: Deleting KubeletConfig configmap reference from nodepool ...
nodepool_mirrorconfigs_test.go:163: Successfully waited for KubeletConfig configmap to be deleted in 3s
nodepool_mirrorconfigs_test.go:101: Exiting MirrorConfigs test: OK
nodepool_test.go:395: Successfully waited for NodePool e2e-clusters-7wk5x/node-pool-k6mb4-test-mirrorconfigs to have correct status in 30s
nodepool_imagetype_test.go:50: Starting test NodePoolImageTypeTest
util.go:565: Successfully waited for 1 nodes to become ready for NodePool e2e-clusters-7wk5x/node-pool-k6mb4-test-imagetype in 18m27s
nodepool_test.go:395: Successfully waited for NodePool e2e-clusters-7wk5x/node-pool-k6mb4-test-imagetype to have correct status in 0s
nodepool_imagetype_test.go:73: Successfully waited for wait for nodepool e2e-clusters-7wk5x/node-pool-k6mb4-test-imagetype to have a populated PlatformImage status condition in 0s
nodepool_imagetype_test.go:102: Windows ImageType test passed - Windows AMI found and validated
nodepool_test.go:395: Successfully waited for NodePool e2e-clusters-7wk5x/node-pool-k6mb4-test-imagetype to have correct status in 0s
nodepool_additionalTrustBundlePropagation_test.go:72: Updating hosted cluster with additional trust bundle. Bundle: additional-trust-bundle
nodepool_additionalTrustBundlePropagation_test.go:80: Successfully waited for Waiting for NodePool e2e-clusters-dfvmn/node-pool-hfcbz-test-additional-trust-bundle-propagation to begin updating in 10s
nodepool_additionalTrustBundlePropagation_test.go:94: Successfully waited for Waiting for NodePool e2e-clusters-dfvmn/node-pool-hfcbz-test-additional-trust-bundle-propagation to stop updating in 10m20s
nodepool_additionalTrustBundlePropagation_test.go:112: Updating hosted cluster by removing additional trust bundle.
nodepool_additionalTrustBundlePropagation_test.go:126: Successfully waited for Waiting for control plane operator deployment to be updated in 0s
nodepool_additionalTrustBundlePropagation_test.go:147: Successfully waited for Waiting for NodePool e2e-clusters-dfvmn/node-pool-hfcbz-test-additional-trust-bundle-propagation to begin updating in 10s
TestNodePool/HostedCluster0
0s
hypershift_framework.go:430: Successfully created hostedcluster e2e-clusters-7wk5x/node-pool-k6mb4 in 48s
TestNodePool/HostedCluster0/Main
0s
util.go:284: Successfully waited for kubeconfig to be published for HostedCluster e2e-clusters-7wk5x/node-pool-k6mb4 in 0s
util.go:301: Successfully waited for kubeconfig secret to have data in 0s
util.go:363: Successfully waited for a successful connection to the guest API server in 0s
TestNodePool/HostedCluster0/Main/KubeVirtCacheTest
0s
nodepool_kv_cache_image_test.go:42: test only supported on platform KubeVirt
TestNodePool/HostedCluster0/Main/KubeVirtJsonPatchTest
0s
nodepool_kv_jsonpatch_test.go:42: test only supported on platform KubeVirt
TestNodePool/HostedCluster0/Main/KubeVirtNodeMultinetTest
0s
nodepool_kv_multinet_test.go:36: test only supported on platform KubeVirt
TestNodePool/HostedCluster0/Main/KubeVirtNodeSelectorTest
0s
nodepool_kv_nodeselector_test.go:48: test only supported on platform KubeVirt
TestNodePool/HostedCluster0/Main/KubeVirtQoSClassGuaranteedTest
0s
nodepool_kv_qos_guaranteed_test.go:43: test only supported on platform KubeVirt
TestNodePool/HostedCluster0/Main/OpenStackAdvancedTest
0s
nodepool_osp_advanced_test.go:53: Starting test OpenStackAdvancedTest
nodepool_osp_advanced_test.go:56: test only supported on platform OpenStack
TestNodePool/HostedCluster0/Main/TestImageTypes
0s
nodepool_imagetype_test.go:50: Starting test NodePoolImageTypeTest
util.go:565: Successfully waited for 1 nodes to become ready for NodePool e2e-clusters-7wk5x/node-pool-k6mb4-test-imagetype in 18m27s
nodepool_test.go:395: Successfully waited for NodePool e2e-clusters-7wk5x/node-pool-k6mb4-test-imagetype to have correct status in 0s
nodepool_imagetype_test.go:73: Successfully waited for wait for nodepool e2e-clusters-7wk5x/node-pool-k6mb4-test-imagetype to have a populated PlatformImage status condition in 0s
nodepool_imagetype_test.go:102: Windows ImageType test passed - Windows AMI found and validated
nodepool_test.go:395: Successfully waited for NodePool e2e-clusters-7wk5x/node-pool-k6mb4-test-imagetype to have correct status in 0s
TestNodePool/HostedCluster0/Main/TestKMSRootVolumeEncryption
0s
nodepool_kms_root_volume_test.go:42: Starting test KMSRootVolumeTest
nodepool_kms_root_volume_test.go:54: retrieved KMS ARN: arn:aws:kms:us-east-1:820196288204:key/d3cdd9e0-3fd1-47a4-a559-72ae3672c5a6
util.go:565: Successfully waited for 1 nodes to become ready for NodePool e2e-clusters-7wk5x/node-pool-k6mb4-test-kms-root-volume in 14m48.075s
nodepool_test.go:395: Successfully waited for NodePool e2e-clusters-7wk5x/node-pool-k6mb4-test-kms-root-volume to have correct status in 0s
nodepool_kms_root_volume_test.go:85: instanceID: i-015a38bfa73336817
nodepool_test.go:395: Successfully waited for NodePool e2e-clusters-7wk5x/node-pool-k6mb4-test-kms-root-volume to have correct status in 0s
TestNodePool/HostedCluster0/Main/TestMirrorConfigs
0s
nodepool_mirrorconfigs_test.go:60: Starting test MirrorConfigsTest
util.go:565: Successfully waited for 1 nodes to become ready for NodePool e2e-clusters-7wk5x/node-pool-k6mb4-test-mirrorconfigs in 14m42s
nodepool_test.go:395: Successfully waited for NodePool e2e-clusters-7wk5x/node-pool-k6mb4-test-mirrorconfigs to have correct status in 0s
nodepool_mirrorconfigs_test.go:81: Entering MirrorConfigs test
nodepool_mirrorconfigs_test.go:111: Hosted control plane namespace is e2e-clusters-7wk5x-node-pool-k6mb4
nodepool_mirrorconfigs_test.go:113: Successfully waited for kubeletConfig should be mirrored and present in the hosted cluster in 3s
nodepool_mirrorconfigs_test.go:157: Deleting KubeletConfig configmap reference from nodepool ...
nodepool_mirrorconfigs_test.go:163: Successfully waited for KubeletConfig configmap to be deleted in 3s
nodepool_mirrorconfigs_test.go:101: Exiting MirrorConfigs test: OK
nodepool_test.go:395: Successfully waited for NodePool e2e-clusters-7wk5x/node-pool-k6mb4-test-mirrorconfigs to have correct status in 30s
TestNodePool/HostedCluster0/Main/TestNTOMachineConfigAppliedInPlace
0s
nodepool_nto_machineconfig_test.go:67: Starting test NTOMachineConfigRolloutTest
TestNodePool/HostedCluster0/Main/TestNTOMachineConfigGetsRolledOut
0s
nodepool_nto_machineconfig_test.go:67: Starting test NTOMachineConfigRolloutTest
TestNodePool/HostedCluster0/Main/TestNTOPerformanceProfile
0s
nodepool_nto_performanceprofile_test.go:59: Starting test NTOPerformanceProfileTest
TestNodePool/HostedCluster0/Main/TestNodePoolAutoRepair
0s
TestNodePool/HostedCluster0/Main/TestNodePoolDay2Tags
0s
TestNodePool/HostedCluster0/Main/TestNodePoolInPlaceUpgrade
0s
nodepool_upgrade_test.go:99: starting test NodePoolUpgradeTest
TestNodePool/HostedCluster0/Main/TestNodePoolPrevReleaseN1
0s
nodepool_prev_release_test.go:33: Starting NodePoolPrevReleaseCreateTest.
util.go:565: Successfully waited for 1 nodes to become ready for NodePool e2e-clusters-7wk5x/node-pool-k6mb4-dwrgk in 10m27s
nodepool_test.go:395: Successfully waited for NodePool e2e-clusters-7wk5x/node-pool-k6mb4-dwrgk to have correct status in 3s
nodepool_prev_release_test.go:57: NodePoolPrevReleaseCreateTest tests the creation of a NodePool with previous OCP release.
nodepool_prev_release_test.go:59: Validating all Nodes have the synced labels and taints
nodepool_test.go:395: Successfully waited for NodePool e2e-clusters-7wk5x/node-pool-k6mb4-dwrgk to have correct status in 0s
TestNodePool/HostedCluster0/Main/TestNodePoolPrevReleaseN2
0s
nodepool_prev_release_test.go:33: Starting NodePoolPrevReleaseCreateTest.
util.go:565: Successfully waited for 1 nodes to become ready for NodePool e2e-clusters-7wk5x/node-pool-k6mb4-k54wm in 18m30s
nodepool_test.go:395: Successfully waited for NodePool e2e-clusters-7wk5x/node-pool-k6mb4-k54wm to have correct status in 0s
nodepool_prev_release_test.go:57: NodePoolPrevReleaseCreateTest tests the creation of a NodePool with previous OCP release.
nodepool_prev_release_test.go:59: Validating all Nodes have the synced labels and taints
nodepool_test.go:395: Successfully waited for NodePool e2e-clusters-7wk5x/node-pool-k6mb4-k54wm to have correct status in 0s
TestNodePool/HostedCluster0/Main/TestNodePoolPrevReleaseN3
0s
nodepool_prev_release_test.go:33: Starting NodePoolPrevReleaseCreateTest.
util.go:565: Successfully waited for 1 nodes to become ready for NodePool e2e-clusters-7wk5x/node-pool-k6mb4-bzxdf in 14m42.1s
nodepool_test.go:395: Successfully waited for NodePool e2e-clusters-7wk5x/node-pool-k6mb4-bzxdf to have correct status in 0s
nodepool_prev_release_test.go:57: NodePoolPrevReleaseCreateTest tests the creation of a NodePool with previous OCP release.
nodepool_prev_release_test.go:59: Validating all Nodes have the synced labels and taints
nodepool_test.go:395: Successfully waited for NodePool e2e-clusters-7wk5x/node-pool-k6mb4-bzxdf to have correct status in 0s
TestNodePool/HostedCluster0/Main/TestNodePoolPrevReleaseN4
0s
nodepool_prev_release_test.go:33: Starting NodePoolPrevReleaseCreateTest.
nodepool_test.go:348: NodePool version is outside supported skew, validating condition only (skipping node readiness check)
nodepool_test.go:395: Successfully waited for NodePool e2e-clusters-7wk5x/node-pool-k6mb4-l7mp4 to have correct status in 3s
TestNodePool/HostedCluster0/Main/TestNodePoolReplaceUpgrade
0s
nodepool_upgrade_test.go:99: starting test NodePoolUpgradeTest
TestNodePool/HostedCluster0/Main/TestNodepoolMachineconfigGetsRolledout
0s
nodepool_machineconfig_test.go:54: Starting test NodePoolMachineconfigRolloutTest
TestNodePool/HostedCluster0/Main/TestRollingUpgrade
0s
TestNodePool/HostedCluster0/ValidateHostedCluster
0s
util.go:284: Successfully waited for kubeconfig to be published for HostedCluster e2e-clusters-7wk5x/node-pool-k6mb4 in 1m45s
util.go:301: Successfully waited for kubeconfig secret to have data in 0s
eventually.go:104: Failed to get *v1.SelfSubjectReview: Post "https://api-node-pool-k6mb4.service.ci.hypershift.devcluster.openshift.com:443/apis/authentication.k8s.io/v1/selfsubjectreviews": dial tcp: lookup api-node-pool-k6mb4.service.ci.hypershift.devcluster.openshift.com on 172.30.0.10:53: no such host
eventually.go:104: Failed to get *v1.SelfSubjectReview: Post "https://api-node-pool-k6mb4.service.ci.hypershift.devcluster.openshift.com:443/apis/authentication.k8s.io/v1/selfsubjectreviews": dial tcp 23.21.191.197:443: i/o timeout
util.go:363: Successfully waited for a successful connection to the guest API server in 1m25.025s
util.go:565: Successfully waited for 0 nodes to become ready in 25ms
util.go:2949: Successfully waited for HostedCluster e2e-clusters-7wk5x/node-pool-k6mb4 to have valid conditions in 2m27s
TestNodePool/HostedCluster0/ValidateHostedCluster/EnsureGuestWebhooksValidated
0s
TestNodePool/HostedCluster0/ValidateHostedCluster/EnsureNoCrashingPods
0s
util.go:284: Successfully waited for kubeconfig to be published for HostedCluster e2e-clusters-7wk5x/node-pool-k6mb4 in 0s
util.go:301: Successfully waited for kubeconfig secret to have data in 0s
TestNodePool/HostedCluster0/ValidateHostedCluster/EnsureNodeCountMatchesNodePoolReplicas
0s
util.go:565: Successfully waited for 0 nodes to become ready for NodePool e2e-clusters-7wk5x/node-pool-k6mb4-us-east-1b in 25ms
TestNodePool/HostedCluster0/ValidateHostedCluster/EnsureOAPIMountsTrustBundle
0s
TestNodePool/HostedCluster0/ValidateHostedCluster/ValidateConfigurationStatus
0s
util.go:4095: Successfully validated configuration authentication status consistency across HCP, HC, and guest cluster
TestNodePool/HostedCluster1
0s
nodepool_test.go:150: tests only supported on platform KubeVirt
TestNodePool/HostedCluster2
0s
hypershift_framework.go:430: Successfully created hostedcluster e2e-clusters-dfvmn/node-pool-hfcbz in 42s
TestNodePool/HostedCluster2/Main
0s
util.go:284: Successfully waited for kubeconfig to be published for HostedCluster e2e-clusters-dfvmn/node-pool-hfcbz in 0s
util.go:301: Successfully waited for kubeconfig secret to have data in 0s
util.go:363: Successfully waited for a successful connection to the guest API server in 0s
TestNodePool/HostedCluster2/Main/TestAdditionalTrustBundlePropagation
0s
nodepool_additionalTrustBundlePropagation_test.go:38: Starting AdditionalTrustBundlePropagationTest.
util.go:565: Successfully waited for 1 nodes to become ready for NodePool e2e-clusters-dfvmn/node-pool-hfcbz-test-additional-trust-bundle-propagation in 5m36s
nodepool_test.go:395: Successfully waited for NodePool e2e-clusters-dfvmn/node-pool-hfcbz-test-additional-trust-bundle-propagation to have correct status in 0s
TestNodePool/HostedCluster2/Main/TestAdditionalTrustBundlePropagation/AdditionalTrustBundlePropagationTest
0s
nodepool_additionalTrustBundlePropagation_test.go:72: Updating hosted cluster with additional trust bundle. Bundle: additional-trust-bundle
nodepool_additionalTrustBundlePropagation_test.go:80: Successfully waited for Waiting for NodePool e2e-clusters-dfvmn/node-pool-hfcbz-test-additional-trust-bundle-propagation to begin updating in 10s
nodepool_additionalTrustBundlePropagation_test.go:94: Successfully waited for Waiting for NodePool e2e-clusters-dfvmn/node-pool-hfcbz-test-additional-trust-bundle-propagation to stop updating in 10m20s
nodepool_additionalTrustBundlePropagation_test.go:112: Updating hosted cluster by removing additional trust bundle.
nodepool_additionalTrustBundlePropagation_test.go:126: Successfully waited for Waiting for control plane operator deployment to be updated in 0s
nodepool_additionalTrustBundlePropagation_test.go:147: Successfully waited for Waiting for NodePool e2e-clusters-dfvmn/node-pool-hfcbz-test-additional-trust-bundle-propagation to begin updating in 10s
TestNodePool/HostedCluster2/ValidateHostedCluster
0s
util.go:284: Successfully waited for kubeconfig to be published for HostedCluster e2e-clusters-dfvmn/node-pool-hfcbz in 1m27s
util.go:301: Successfully waited for kubeconfig secret to have data in 0s
eventually.go:104: Failed to get *v1.SelfSubjectReview: Post "https://api-node-pool-hfcbz.service.ci.hypershift.devcluster.openshift.com:443/apis/authentication.k8s.io/v1/selfsubjectreviews": dial tcp: lookup api-node-pool-hfcbz.service.ci.hypershift.devcluster.openshift.com on 172.30.0.10:53: no such host
eventually.go:104: Failed to get *v1.SelfSubjectReview: Post "https://api-node-pool-hfcbz.service.ci.hypershift.devcluster.openshift.com:443/apis/authentication.k8s.io/v1/selfsubjectreviews": dial tcp 44.197.87.33:443: i/o timeout
util.go:363: Successfully waited for a successful connection to the guest API server in 1m38.075s
util.go:565: Successfully waited for 0 nodes to become ready in 0s
util.go:2949: Successfully waited for HostedCluster e2e-clusters-dfvmn/node-pool-hfcbz to have valid conditions in 2m30s
TestNodePool/HostedCluster2/ValidateHostedCluster/EnsureGuestWebhooksValidated
0s
TestNodePool/HostedCluster2/ValidateHostedCluster/EnsureNoCrashingPods
0s
util.go:284: Successfully waited for kubeconfig to be published for HostedCluster e2e-clusters-dfvmn/node-pool-hfcbz in 0s
util.go:301: Successfully waited for kubeconfig secret to have data in 0s
TestNodePool/HostedCluster2/ValidateHostedCluster/EnsureNodeCountMatchesNodePoolReplicas
0s
util.go:565: Successfully waited for 0 nodes to become ready for NodePool e2e-clusters-dfvmn/node-pool-hfcbz-us-east-1c in 25ms
TestNodePool/HostedCluster2/ValidateHostedCluster/EnsureOAPIMountsTrustBundle
0s
TestNodePool/HostedCluster2/ValidateHostedCluster/ValidateConfigurationStatus
0s
util.go:4095: Successfully validated configuration authentication status consistency across HCP, HC, and guest cluster
TestUpgradeControlPlane
0s
control_plane_upgrade_test.go:25: Starting control plane upgrade test. FromImage: registry.build01.ci.openshift.org/ci-op-p0lv2gxr/release@sha256:b61cb6cdac8ceb723c4c2c0974b20d114626e7ae4bd277be2bd5398b3a2886ec, toImage: registry.build01.ci.openshift.org/ci-op-p0lv2gxr/release@sha256:03e81c806230ad0b09e484ef1bcbd0c6e9243a1b040e172d08bf314be0d79380
hypershift_framework.go:430: Successfully created hostedcluster e2e-clusters-pwd2p/control-plane-upgrade-48rrw in 47s
util.go:284: Successfully waited for kubeconfig to be published for HostedCluster e2e-clusters-pwd2p/control-plane-upgrade-48rrw in 2m21s
util.go:301: Successfully waited for kubeconfig secret to have data in 0s
eventually.go:104: Failed to get *v1.SelfSubjectReview: Post "https://api-control-plane-upgrade-48rrw.service.ci.hypershift.devcluster.openshift.com:443/apis/authentication.k8s.io/v1/selfsubjectreviews": dial tcp: lookup api-control-plane-upgrade-48rrw.service.ci.hypershift.devcluster.openshift.com on 172.30.0.10:53: no such host
eventually.go:104: Failed to get *v1.SelfSubjectReview: Post "https://api-control-plane-upgrade-48rrw.service.ci.hypershift.devcluster.openshift.com:443/apis/authentication.k8s.io/v1/selfsubjectreviews": dial tcp 98.90.29.231:443: i/o timeout
eventually.go:104: Failed to get *v1.SelfSubjectReview: Post "https://api-control-plane-upgrade-48rrw.service.ci.hypershift.devcluster.openshift.com:443/apis/authentication.k8s.io/v1/selfsubjectreviews": dial tcp 54.147.96.35:443: i/o timeout
eventually.go:104: Failed to get *v1.SelfSubjectReview: Post "https://api-control-plane-upgrade-48rrw.service.ci.hypershift.devcluster.openshift.com:443/apis/authentication.k8s.io/v1/selfsubjectreviews": dial tcp 98.90.29.231:443: i/o timeout
eventually.go:104: Failed to get *v1.SelfSubjectReview: Post "https://api-control-plane-upgrade-48rrw.service.ci.hypershift.devcluster.openshift.com:443/apis/authentication.k8s.io/v1/selfsubjectreviews": dial tcp 98.90.29.231:443: connect: connection refused
eventually.go:104: Failed to get *v1.SelfSubjectReview: Post "https://api-control-plane-upgrade-48rrw.service.ci.hypershift.devcluster.openshift.com:443/apis/authentication.k8s.io/v1/selfsubjectreviews": dial tcp 54.147.96.35:443: connect: connection refused
eventually.go:104: Failed to get *v1.SelfSubjectReview: Post "https://api-control-plane-upgrade-48rrw.service.ci.hypershift.devcluster.openshift.com:443/apis/authentication.k8s.io/v1/selfsubjectreviews": dial tcp 98.90.29.231:443: connect: connection refused
util.go:363: Successfully waited for a successful connection to the guest API server in 1m30.025s
util.go:565: Successfully waited for 2 nodes to become ready in 8m6s
util.go:598: Successfully waited for HostedCluster e2e-clusters-pwd2p/control-plane-upgrade-48rrw to rollout in 3m21s
util.go:2949: Successfully waited for HostedCluster e2e-clusters-pwd2p/control-plane-upgrade-48rrw to have valid conditions in 1m18s
util.go:565: Successfully waited for 2 nodes to become ready for NodePool e2e-clusters-pwd2p/control-plane-upgrade-48rrw-us-east-1a in 25ms
util.go:284: Successfully waited for kubeconfig to be published for HostedCluster e2e-clusters-pwd2p/control-plane-upgrade-48rrw in 0s
util.go:301: Successfully waited for kubeconfig secret to have data in 0s
util.go:284: Successfully waited for kubeconfig to be published for HostedCluster e2e-clusters-pwd2p/control-plane-upgrade-48rrw in 0s
util.go:301: Successfully waited for kubeconfig secret to have data in 0s
util.go:4095: Successfully validated configuration authentication status consistency across HCP, HC, and guest cluster
util.go:284: Successfully waited for kubeconfig to be published for HostedCluster e2e-clusters-pwd2p/control-plane-upgrade-48rrw in 0s
util.go:301: Successfully waited for kubeconfig secret to have data in 0s
util.go:363: Successfully waited for a successful connection to the guest API server in 0s
control_plane_upgrade_test.go:52: Updating cluster image. Image: registry.build01.ci.openshift.org/ci-op-p0lv2gxr/release@sha256:03e81c806230ad0b09e484ef1bcbd0c6e9243a1b040e172d08bf314be0d79380
TestUpgradeControlPlane/Main
0s
util.go:284: Successfully waited for kubeconfig to be published for HostedCluster e2e-clusters-pwd2p/control-plane-upgrade-48rrw in 0s
util.go:301: Successfully waited for kubeconfig secret to have data in 0s
util.go:363: Successfully waited for a successful connection to the guest API server in 0s
control_plane_upgrade_test.go:52: Updating cluster image. Image: registry.build01.ci.openshift.org/ci-op-p0lv2gxr/release@sha256:03e81c806230ad0b09e484ef1bcbd0c6e9243a1b040e172d08bf314be0d79380
TestUpgradeControlPlane/Main/Wait_for_control_plane_components_to_complete_rollout
0s
TestUpgradeControlPlane/ValidateHostedCluster
0s
util.go:284: Successfully waited for kubeconfig to be published for HostedCluster e2e-clusters-pwd2p/control-plane-upgrade-48rrw in 2m21s
util.go:301: Successfully waited for kubeconfig secret to have data in 0s
eventually.go:104: Failed to get *v1.SelfSubjectReview: Post "https://api-control-plane-upgrade-48rrw.service.ci.hypershift.devcluster.openshift.com:443/apis/authentication.k8s.io/v1/selfsubjectreviews": dial tcp: lookup api-control-plane-upgrade-48rrw.service.ci.hypershift.devcluster.openshift.com on 172.30.0.10:53: no such host
eventually.go:104: Failed to get *v1.SelfSubjectReview: Post "https://api-control-plane-upgrade-48rrw.service.ci.hypershift.devcluster.openshift.com:443/apis/authentication.k8s.io/v1/selfsubjectreviews": dial tcp 98.90.29.231:443: i/o timeout
eventually.go:104: Failed to get *v1.SelfSubjectReview: Post "https://api-control-plane-upgrade-48rrw.service.ci.hypershift.devcluster.openshift.com:443/apis/authentication.k8s.io/v1/selfsubjectreviews": dial tcp 54.147.96.35:443: i/o timeout
eventually.go:104: Failed to get *v1.SelfSubjectReview: Post "https://api-control-plane-upgrade-48rrw.service.ci.hypershift.devcluster.openshift.com:443/apis/authentication.k8s.io/v1/selfsubjectreviews": dial tcp 98.90.29.231:443: i/o timeout
eventually.go:104: Failed to get *v1.SelfSubjectReview: Post "https://api-control-plane-upgrade-48rrw.service.ci.hypershift.devcluster.openshift.com:443/apis/authentication.k8s.io/v1/selfsubjectreviews": dial tcp 98.90.29.231:443: connect: connection refused
eventually.go:104: Failed to get *v1.SelfSubjectReview: Post "https://api-control-plane-upgrade-48rrw.service.ci.hypershift.devcluster.openshift.com:443/apis/authentication.k8s.io/v1/selfsubjectreviews": dial tcp 54.147.96.35:443: connect: connection refused
eventually.go:104: Failed to get *v1.SelfSubjectReview: Post "https://api-control-plane-upgrade-48rrw.service.ci.hypershift.devcluster.openshift.com:443/apis/authentication.k8s.io/v1/selfsubjectreviews": dial tcp 98.90.29.231:443: connect: connection refused
util.go:363: Successfully waited for a successful connection to the guest API server in 1m30.025s
util.go:565: Successfully waited for 2 nodes to become ready in 8m6s
util.go:598: Successfully waited for HostedCluster e2e-clusters-pwd2p/control-plane-upgrade-48rrw to rollout in 3m21s
util.go:2949: Successfully waited for HostedCluster e2e-clusters-pwd2p/control-plane-upgrade-48rrw to have valid conditions in 1m18s
TestUpgradeControlPlane/ValidateHostedCluster/EnsureGuestWebhooksValidated
0s
TestUpgradeControlPlane/ValidateHostedCluster/EnsureNoCrashingPods
0s
util.go:284: Successfully waited for kubeconfig to be published for HostedCluster e2e-clusters-pwd2p/control-plane-upgrade-48rrw in 0s
util.go:301: Successfully waited for kubeconfig secret to have data in 0s
TestUpgradeControlPlane/ValidateHostedCluster/EnsureNodeCommunication
0s
util.go:284: Successfully waited for kubeconfig to be published for HostedCluster e2e-clusters-pwd2p/control-plane-upgrade-48rrw in 0s
util.go:301: Successfully waited for kubeconfig secret to have data in 0s
TestUpgradeControlPlane/ValidateHostedCluster/EnsureNodeCountMatchesNodePoolReplicas
0s
util.go:565: Successfully waited for 2 nodes to become ready for NodePool e2e-clusters-pwd2p/control-plane-upgrade-48rrw-us-east-1a in 25ms
TestUpgradeControlPlane/ValidateHostedCluster/EnsureOAPIMountsTrustBundle
0s
TestUpgradeControlPlane/ValidateHostedCluster/ValidateConfigurationStatus
0s
util.go:4095: Successfully validated configuration authentication status consistency across HCP, HC, and guest cluster