Failed Tests
hypershift_framework.go:430: Successfully created hostedcluster e2e-clusters-xjnk4/autoscaling-cwk44 in 27s
util.go:284: Successfully waited for kubeconfig to be published for HostedCluster e2e-clusters-xjnk4/autoscaling-cwk44 in 1m45s
util.go:301: Successfully waited for kubeconfig secret to have data in 0s
eventually.go:104: Failed to get *v1.SelfSubjectReview: Post "https://api-autoscaling-cwk44.service.ci.hypershift.devcluster.openshift.com:443/apis/authentication.k8s.io/v1/selfsubjectreviews": dial tcp: lookup api-autoscaling-cwk44.service.ci.hypershift.devcluster.openshift.com on 172.30.0.10:53: no such host
eventually.go:104: Failed to get *v1.SelfSubjectReview: Post "https://api-autoscaling-cwk44.service.ci.hypershift.devcluster.openshift.com:443/apis/authentication.k8s.io/v1/selfsubjectreviews": dial tcp 52.54.210.186:443: i/o timeout
eventually.go:104: Failed to get *v1.SelfSubjectReview: Post "https://api-autoscaling-cwk44.service.ci.hypershift.devcluster.openshift.com:443/apis/authentication.k8s.io/v1/selfsubjectreviews": dial tcp 52.54.210.186:443: connect: connection refused
util.go:363: Successfully waited for a successful connection to the guest API server in 1m16.45s
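The SelfSubjectReview failures above are the expected transient states while the guest API server's DNS record and load balancer converge; the harness keeps issuing the request until one succeeds. A minimal sketch of that readiness probe with client-go (the actual helper in eventually.go/util.go is not shown in this log, and the poll interval and timeout below are placeholders):

```go
package e2esketch

import (
	"context"
	"fmt"
	"time"

	authenticationv1 "k8s.io/api/authentication/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitForGuestAPI retries a SelfSubjectReview against the guest kubeconfig until
// DNS resolves and the endpoint accepts connections, mirroring the probe logged above.
func waitForGuestAPI(ctx context.Context, kubeconfig []byte) error {
	cfg, err := clientcmd.RESTConfigFromKubeConfig(kubeconfig)
	if err != nil {
		return err
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		return err
	}
	// Poll every 5s for up to 10m; the real helper's interval and timeout are not in the log.
	return wait.PollUntilContextTimeout(ctx, 5*time.Second, 10*time.Minute, true, func(ctx context.Context) (bool, error) {
		_, err := client.AuthenticationV1().SelfSubjectReviews().Create(ctx, &authenticationv1.SelfSubjectReview{}, metav1.CreateOptions{})
		if err != nil {
			fmt.Printf("Failed to get *v1.SelfSubjectReview: %v\n", err)
			return false, nil // transient DNS/TCP errors are retried, not fatal
		}
		return true, nil
	})
}
```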
util.go:565: Successfully waited for 1 nodes to become ready in 7m39s
util.go:598: Successfully waited for HostedCluster e2e-clusters-xjnk4/autoscaling-cwk44 to rollout in 3m45s
util.go:2949: Successfully waited for HostedCluster e2e-clusters-xjnk4/autoscaling-cwk44 to have valid conditions in 0s
util.go:565: Successfully waited for 1 nodes to become ready for NodePool e2e-clusters-xjnk4/autoscaling-cwk44-us-east-1c in 25ms
util.go:284: Successfully waited for kubeconfig to be published for HostedCluster e2e-clusters-xjnk4/autoscaling-cwk44 in 0s
util.go:301: Successfully waited for kubeconfig secret to have data in 0s
util.go:284: Successfully waited for kubeconfig to be published for HostedCluster e2e-clusters-xjnk4/autoscaling-cwk44 in 0s
util.go:301: Successfully waited for kubeconfig secret to have data in 0s
util.go:4095: Successfully validated configuration authentication status consistency across HCP, HC, and guest cluster
util.go:284: Successfully waited for kubeconfig to be published for HostedCluster e2e-clusters-xjnk4/autoscaling-cwk44 in 0s
util.go:301: Successfully waited for kubeconfig secret to have data in 0s
util.go:363: Successfully waited for a successful connection to the guest API server in 0s
util.go:565: Successfully waited for 1 nodes to become ready in 0s
autoscaling_test.go:118: Enabled autoscaling. Namespace: e2e-clusters-xjnk4, name: autoscaling-cwk44-us-east-1c, min: 1, max: 3
autoscaling_test.go:137: Created workload. Node: ip-10-0-6-247.ec2.internal, memcapacity: 14746804Ki
util.go:565: Successfully waited for 3 nodes to become ready in 5m30s
autoscaling_test.go:157: Deleted workload
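The autoscaling exercise flips the NodePool from a fixed replica count to min/max autoscaling and then creates a workload whose memory requests exceed what the single ready node (memcapacity above) can hold, so the cluster autoscaler has to grow the pool toward the max of 3. A rough sketch of those two steps, assuming the HyperShift v1beta1 NodePool API (import path and AutoScaling field names) and an arbitrary pressure Deployment; neither is copied from autoscaling_test.go:

```go
package e2esketch

import (
	"context"

	hyperv1 "github.com/openshift/hypershift/api/hypershift/v1beta1"
	appsv1 "k8s.io/api/apps/v1"
	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	crclient "sigs.k8s.io/controller-runtime/pkg/client"
)

// enableAutoScaling switches a NodePool from a fixed replica count to autoscaling
// between min and max, as logged by autoscaling_test.go:118.
func enableAutoScaling(ctx context.Context, c crclient.Client, np *hyperv1.NodePool, min, max int32) error {
	original := np.DeepCopy()
	np.Spec.Replicas = nil
	// Min/Max field names follow the HyperShift v1beta1 API (assumed here).
	np.Spec.AutoScaling = &hyperv1.NodePoolAutoScaling{Min: min, Max: max}
	return c.Patch(ctx, np, crclient.MergeFrom(original))
}

// memoryPressureWorkload builds a Deployment whose per-replica memory request is sized
// from the ready node's reported capacity, forcing the autoscaler to add nodes.
func memoryPressureWorkload(namespace string, memRequest resource.Quantity, replicas int32) *appsv1.Deployment {
	labels := map[string]string{"app": "autoscaling-pressure"}
	return &appsv1.Deployment{
		ObjectMeta: metav1.ObjectMeta{Name: "autoscaling-pressure", Namespace: namespace},
		Spec: appsv1.DeploymentSpec{
			Replicas: &replicas,
			Selector: &metav1.LabelSelector{MatchLabels: labels},
			Template: corev1.PodTemplateSpec{
				ObjectMeta: metav1.ObjectMeta{Labels: labels},
				Spec: corev1.PodSpec{
					Containers: []corev1.Container{{
						Name:    "sleep",
						Image:   "registry.access.redhat.com/ubi9/ubi-minimal",
						Command: []string{"sleep", "infinity"},
						Resources: corev1.ResourceRequirements{
							Requests: corev1.ResourceList{corev1.ResourceMemory: memRequest},
						},
					}},
				},
			},
		},
	}
}
```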
create_cluster_test.go:2492: Sufficient zones available for InfrastructureAvailabilityPolicy HighlyAvailable
hypershift_framework.go:430: Successfully created hostedcluster e2e-clusters-5pggm/create-cluster-6nfwh in 45s
util.go:284: Successfully waited for kubeconfig to be published for HostedCluster e2e-clusters-5pggm/create-cluster-6nfwh in 2m0s
util.go:301: Successfully waited for kubeconfig secret to have data in 0s
eventually.go:104: Failed to get *v1.SelfSubjectReview: Post "https://api-create-cluster-6nfwh.service.ci.hypershift.devcluster.openshift.com:443/apis/authentication.k8s.io/v1/selfsubjectreviews": dial tcp: lookup api-create-cluster-6nfwh.service.ci.hypershift.devcluster.openshift.com on 172.30.0.10:53: no such host
eventually.go:104: Failed to get *v1.SelfSubjectReview: Post "https://api-create-cluster-6nfwh.service.ci.hypershift.devcluster.openshift.com:443/apis/authentication.k8s.io/v1/selfsubjectreviews": dial tcp 18.205.202.159:443: i/o timeout
eventually.go:104: Failed to get *v1.SelfSubjectReview: Post "https://api-create-cluster-6nfwh.service.ci.hypershift.devcluster.openshift.com:443/apis/authentication.k8s.io/v1/selfsubjectreviews": dial tcp 52.201.71.7:443: connect: connection refused
eventually.go:104: Failed to get *v1.SelfSubjectReview: Post "https://api-create-cluster-6nfwh.service.ci.hypershift.devcluster.openshift.com:443/apis/authentication.k8s.io/v1/selfsubjectreviews": dial tcp 18.205.202.159:443: connect: connection refused
eventually.go:104: Failed to get *v1.SelfSubjectReview: Post "https://api-create-cluster-6nfwh.service.ci.hypershift.devcluster.openshift.com:443/apis/authentication.k8s.io/v1/selfsubjectreviews": dial tcp 52.201.71.7:443: connect: connection refused
eventually.go:104: Failed to get *v1.SelfSubjectReview: Post "https://api-create-cluster-6nfwh.service.ci.hypershift.devcluster.openshift.com:443/apis/authentication.k8s.io/v1/selfsubjectreviews": dial tcp 18.205.202.159:443: connect: connection refused
eventually.go:104: Failed to get *v1.SelfSubjectReview: Post "https://api-create-cluster-6nfwh.service.ci.hypershift.devcluster.openshift.com:443/apis/authentication.k8s.io/v1/selfsubjectreviews": dial tcp 52.201.71.7:443: connect: connection refused
util.go:363: Successfully waited for a successful connection to the guest API server in 2m0.025s
util.go:565: Successfully waited for 3 nodes to become ready in 8m27s
util.go:598: Successfully waited for HostedCluster e2e-clusters-5pggm/create-cluster-6nfwh to rollout in 7m57s
util.go:2949: Successfully waited for HostedCluster e2e-clusters-5pggm/create-cluster-6nfwh to have valid conditions in 0s
util.go:565: Successfully waited for 1 nodes to become ready for NodePool e2e-clusters-5pggm/create-cluster-6nfwh-us-east-1a in 25ms
util.go:565: Successfully waited for 1 nodes to become ready for NodePool e2e-clusters-5pggm/create-cluster-6nfwh-us-east-1b in 0s
util.go:565: Successfully waited for 1 nodes to become ready for NodePool e2e-clusters-5pggm/create-cluster-6nfwh-us-east-1c in 0s
util.go:284: Successfully waited for kubeconfig to be published for HostedCluster e2e-clusters-5pggm/create-cluster-6nfwh in 0s
util.go:301: Successfully waited for kubeconfig secret to have data in 0s
util.go:284: Successfully waited for kubeconfig to be published for HostedCluster e2e-clusters-5pggm/create-cluster-6nfwh in 0s
util.go:301: Successfully waited for kubeconfig secret to have data in 0s
util.go:4095: Successfully validated configuration authentication status consistency across HCP, HC, and guest cluster
util.go:284: Successfully waited for kubeconfig to be published for HostedCluster e2e-clusters-5pggm/create-cluster-6nfwh in 0s
util.go:301: Successfully waited for kubeconfig secret to have data in 0s
util.go:363: Successfully waited for a successful connection to the guest API server in 0s
create_cluster_test.go:2532: fetching mgmt kubeconfig
util.go:284: Successfully waited for kubeconfig to be published for HostedCluster e2e-clusters-5pggm/create-cluster-6nfwh in 0s
util.go:301: Successfully waited for kubeconfig secret to have data in 0s
control_plane_pki_operator.go:95: generating new break-glass credentials for more than one signer
pki.go:76: loading certificate/key pair from disk for signer customer-break-glass, use $REGENERATE_PKI to generate new ones
control_plane_pki_operator.go:204: creating CSR "o6y0dlkopgf1r7bwalhot019h9fliz6wgsk9l0nq018" for signer "customer-break-glass", requesting client auth usages
control_plane_pki_operator.go:214: creating CSRA e2e-clusters-5pggm-create-cluster-6nfwh/o6y0dlkopgf1r7bwalhot019h9fliz6wgsk9l0nq018 to trigger automatic approval of the CSR
control_plane_pki_operator.go:221: Successfully waited for CSR "o6y0dlkopgf1r7bwalhot019h9fliz6wgsk9l0nq018" to be approved and signed in 3s
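The break-glass flow generates a key pair locally, submits a Kubernetes CertificateSigningRequest naming the break-glass signer with client-auth usage, and then creates a HyperShift CertificateSigningRequestApproval (CSRA) with the same name in the hosted control plane namespace so the control-plane PKI operator approves and signs it automatically. A sketch of those two objects; the signerName value and the CSRA group/version/fields are assumptions inferred from the log, not taken from the test source:

```go
package e2esketch

import (
	"context"
	"crypto/ecdsa"
	"crypto/elliptic"
	"crypto/rand"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"

	certificatesv1 "k8s.io/api/certificates/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/apis/meta/v1/unstructured"
	"k8s.io/apimachinery/pkg/runtime/schema"
	"k8s.io/client-go/dynamic"
	"k8s.io/client-go/kubernetes"
)

// requestBreakGlassCert submits a CSR for the given signer requesting client-auth usage,
// then creates a CSRA of the same name so the PKI operator approves it automatically.
// The signerName value and the CSRA GVK below are assumptions based on the names in the log.
func requestBreakGlassCert(ctx context.Context, kube kubernetes.Interface, dyn dynamic.Interface, hcpNamespace, csrName, signerName, commonName string) error {
	key, err := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
	if err != nil {
		return err
	}
	der, err := x509.CreateCertificateRequest(rand.Reader, &x509.CertificateRequest{
		Subject: pkix.Name{CommonName: commonName},
	}, key)
	if err != nil {
		return err
	}
	csrPEM := pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE REQUEST", Bytes: der})

	csr := &certificatesv1.CertificateSigningRequest{
		ObjectMeta: metav1.ObjectMeta{Name: csrName},
		Spec: certificatesv1.CertificateSigningRequestSpec{
			Request:    csrPEM,
			SignerName: signerName, // e.g. the customer-break-glass signer for this hosted cluster
			Usages:     []certificatesv1.KeyUsage{certificatesv1.UsageClientAuth},
		},
	}
	if _, err := kube.CertificatesV1().CertificateSigningRequests().Create(ctx, csr, metav1.CreateOptions{}); err != nil {
		return err
	}

	// CSRA: an (assumed) HyperShift resource whose presence triggers auto-approval of the CSR.
	csra := &unstructured.Unstructured{Object: map[string]interface{}{
		"apiVersion": "certificates.hypershift.openshift.io/v1alpha1",
		"kind":       "CertificateSigningRequestApproval",
		"metadata":   map[string]interface{}{"name": csrName, "namespace": hcpNamespace},
	}}
	gvr := schema.GroupVersionResource{Group: "certificates.hypershift.openshift.io", Version: "v1alpha1", Resource: "certificatesigningrequestapprovals"}
	_, err = dyn.Resource(gvr).Namespace(hcpNamespace).Create(ctx, csra, metav1.CreateOptions{})
	return err
}
```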
control_plane_pki_operator.go:133: validating that the client certificate provides the appropriate access
control_plane_pki_operator.go:119: amending the existing kubeconfig to use break-glass client certificate credentials
control_plane_pki_operator.go:136: issuing SSR to identify the subject we are given using the client certificate
control_plane_pki_operator.go:156: ensuring that the SSR identifies the client certificate as having system:masters power and correct username
pki.go:76: loading certificate/key pair from disk for signer sre-break-glass, use $REGENERATE_PKI to generate new ones
control_plane_pki_operator.go:204: creating CSR "tk4rmhri4i9xmmq14jydrfgg6mdqcxfo5xfztwlw2kg" for signer "sre-break-glass", requesting client auth usages
control_plane_pki_operator.go:214: creating CSRA e2e-clusters-5pggm-create-cluster-6nfwh/tk4rmhri4i9xmmq14jydrfgg6mdqcxfo5xfztwlw2kg to trigger automatic approval of the CSR
control_plane_pki_operator.go:221: Successfully waited for CSR "tk4rmhri4i9xmmq14jydrfgg6mdqcxfo5xfztwlw2kg" to be approved and signed in 3s
control_plane_pki_operator.go:133: validating that the client certificate provides the appropriate access
control_plane_pki_operator.go:119: amending the existing kubeconfig to use break-glass client certificate credentials
control_plane_pki_operator.go:136: issuing SSR to identify the subject we are given using the client certificate
control_plane_pki_operator.go:156: ensuring that the SSR identifies the client certificate as having system:masters power and correct username
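Once signed, the test swaps the certificate into a copy of the guest kubeconfig and issues a SelfSubjectReview to confirm the API server maps it to the expected user with system:masters power. A small sketch of that check (the expected username format for break-glass certificates is an assumption):

```go
package e2esketch

import (
	"context"
	"fmt"

	authenticationv1 "k8s.io/api/authentication/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/sets"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
)

// whoAmIWithCert builds a client from the break-glass certificate and confirms the
// SelfSubjectReview reports the expected user with system:masters power.
func whoAmIWithCert(ctx context.Context, base *rest.Config, certPEM, keyPEM []byte, expectedUser string) error {
	cfg := rest.CopyConfig(base)
	cfg.TLSClientConfig.CertData = certPEM
	cfg.TLSClientConfig.KeyData = keyPEM
	// Client certificates and bearer tokens are mutually exclusive in a rest.Config.
	cfg.BearerToken = ""
	cfg.BearerTokenFile = ""

	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		return err
	}
	ssr, err := client.AuthenticationV1().SelfSubjectReviews().Create(ctx, &authenticationv1.SelfSubjectReview{}, metav1.CreateOptions{})
	if err != nil {
		return err
	}
	ui := ssr.Status.UserInfo
	if ui.Username != expectedUser {
		return fmt.Errorf("expected username %q, got %q", expectedUser, ui.Username)
	}
	if !sets.New(ui.Groups...).Has("system:masters") {
		return fmt.Errorf("user %q is not in system:masters: %v", ui.Username, ui.Groups)
	}
	return nil
}
```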
control_plane_pki_operator.go:99: revoking the "customer-break-glass" signer
pki.go:76: loading certificate/key pair from disk for signer customer-break-glass, use $REGENERATE_PKI to generate new ones
control_plane_pki_operator.go:256: creating CRR e2e-clusters-5pggm-create-cluster-6nfwh/o6y0dlkopgf1r7bwalhot019h9fliz6wgsk9l0nq018 to trigger signer certificate revocation
control_plane_pki_operator.go:263: Successfully waited for CRR e2e-clusters-5pggm-create-cluster-6nfwh/o6y0dlkopgf1r7bwalhot019h9fliz6wgsk9l0nq018 to complete in 2m48s
control_plane_pki_operator.go:276: creating a client using a certificate from the revoked signer
control_plane_pki_operator.go:119: amending the existing kubeconfig to use break-glass client certificate credentials
control_plane_pki_operator.go:279: issuing SSR to confirm that we're not authorized to contact the server
control_plane_pki_operator.go:102: ensuring the break-glass credentials from "sre-break-glass" signer still work
pki.go:76: loading certificate/key pair from disk for signer sre-break-glass, use $REGENERATE_PKI to generate new ones
control_plane_pki_operator.go:133: validating that the client certificate provides the appropriate access
control_plane_pki_operator.go:119: amending the existing kubeconfig to use break-glass client certificate credentials
control_plane_pki_operator.go:136: issuing SSR to identify the subject we are given using the client certificate
control_plane_pki_operator.go:156: ensuring that the SSR identifies the client certificate as having system:masters power and correct username
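Revocation follows the same pattern: a CertificateRevocationRequest (CRR) is created in the hosted control plane namespace, the test waits for it to complete, and then the previously issued customer-break-glass certificate must be rejected while the sre-break-glass one keeps working. A sketch of the revoke-and-verify steps; the CRR group/version and its signerClass field are assumptions inferred from the log:

```go
package e2esketch

import (
	"context"
	"fmt"

	authenticationv1 "k8s.io/api/authentication/v1"
	apierrors "k8s.io/apimachinery/pkg/api/errors"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/apis/meta/v1/unstructured"
	"k8s.io/apimachinery/pkg/runtime/schema"
	"k8s.io/client-go/dynamic"
	"k8s.io/client-go/kubernetes"
)

// revokeSigner creates a CRR asking for every certificate from the given signer class to
// be revoked. The GVR and spec field below are assumptions; only the names come from the log.
func revokeSigner(ctx context.Context, dyn dynamic.Interface, hcpNamespace, name, signerClass string) error {
	crr := &unstructured.Unstructured{Object: map[string]interface{}{
		"apiVersion": "certificates.hypershift.openshift.io/v1alpha1",
		"kind":       "CertificateRevocationRequest",
		"metadata":   map[string]interface{}{"name": name, "namespace": hcpNamespace},
		"spec":       map[string]interface{}{"signerClass": signerClass},
	}}
	gvr := schema.GroupVersionResource{Group: "certificates.hypershift.openshift.io", Version: "v1alpha1", Resource: "certificaterevocationrequests"}
	_, err := dyn.Resource(gvr).Namespace(hcpNamespace).Create(ctx, crr, metav1.CreateOptions{})
	return err
}

// expectRevoked issues an SSR with a client built from the revoked certificate and
// treats anything other than an authentication failure as a test error.
func expectRevoked(ctx context.Context, revokedClient kubernetes.Interface) error {
	_, err := revokedClient.AuthenticationV1().SelfSubjectReviews().Create(ctx, &authenticationv1.SelfSubjectReview{}, metav1.CreateOptions{})
	if apierrors.IsUnauthorized(err) {
		return nil // the revoked signer can no longer authenticate, as expected
	}
	return fmt.Errorf("expected Unauthorized from revoked certificate, got: %v", err)
}
```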
hypershift_framework.go:430: Successfully created hostedcluster e2e-clusters-dpjqv/custom-config-bnbbq in 38s
journals.go:245: Successfully copied machine journals to /logs/artifacts/TestCreateClusterCustomConfig/machine-journals
util.go:284: Successfully waited for kubeconfig to be published for HostedCluster e2e-clusters-dpjqv/custom-config-bnbbq in 1m30.025s
util.go:301: Successfully waited for kubeconfig secret to have data in 0s
eventually.go:104: Failed to get *v1.SelfSubjectReview: Post "https://api-custom-config-bnbbq.service.ci.hypershift.devcluster.openshift.com:443/apis/authentication.k8s.io/v1/selfsubjectreviews": dial tcp: lookup api-custom-config-bnbbq.service.ci.hypershift.devcluster.openshift.com on 172.30.0.10:53: no such host
eventually.go:104: Failed to get *v1.SelfSubjectReview: Post "https://api-custom-config-bnbbq.service.ci.hypershift.devcluster.openshift.com:443/apis/authentication.k8s.io/v1/selfsubjectreviews": dial tcp 100.50.110.237:443: i/o timeout
util.go:363: Successfully waited for a successful connection to the guest API server in 2m23.025s
util.go:565: Successfully waited for 2 nodes to become ready in 7m12s
util.go:598: Successfully waited for HostedCluster e2e-clusters-dpjqv/custom-config-bnbbq to rollout in 8m45s
util.go:2949: Successfully waited for HostedCluster e2e-clusters-dpjqv/custom-config-bnbbq to have valid conditions in 0s
util.go:565: Successfully waited for 2 nodes to become ready for NodePool e2e-clusters-dpjqv/custom-config-bnbbq-us-east-1a in 25ms
util.go:284: Successfully waited for kubeconfig to be published for HostedCluster e2e-clusters-dpjqv/custom-config-bnbbq in 0s
util.go:301: Successfully waited for kubeconfig secret to have data in 0s
util.go:284: Successfully waited for kubeconfig to be published for HostedCluster e2e-clusters-dpjqv/custom-config-bnbbq in 0s
util.go:301: Successfully waited for kubeconfig secret to have data in 0s
util.go:4095: Successfully validated configuration authentication status consistency across HCP, HC, and guest cluster
util.go:284: Successfully waited for kubeconfig to be published for HostedCluster e2e-clusters-dpjqv/custom-config-bnbbq in 0s
util.go:301: Successfully waited for kubeconfig secret to have data in 0s
util.go:363: Successfully waited for a successful connection to the guest API server in 0s
util.go:284: Successfully waited for kubeconfig to be published for HostedCluster e2e-clusters-dpjqv/custom-config-bnbbq in 0s
util.go:301: Successfully waited for kubeconfig secret to have data in 0s
util.go:284: Successfully waited for kubeconfig to be published for HostedCluster e2e-clusters-dpjqv/custom-config-bnbbq in 0s
util.go:301: Successfully waited for kubeconfig secret to have data in 0s
oauth.go:170: Found OAuth route oauth-custom-config-bnbbq.service.ci.hypershift.devcluster.openshift.com
oauth.go:192: Observed OAuth route oauth-custom-config-bnbbq.service.ci.hypershift.devcluster.openshift.com to be healthy
oauth.go:151: OAuth token retrieved successfully for user kubeadmin
util.go:284: Successfully waited for kubeconfig to be published for HostedCluster e2e-clusters-dpjqv/custom-config-bnbbq in 0s
util.go:301: Successfully waited for kubeconfig secret to have data in 0s
oauth.go:170: Found OAuth route oauth-custom-config-bnbbq.service.ci.hypershift.devcluster.openshift.com
oauth.go:192: Observed OAuth route oauth-custom-config-bnbbq.service.ci.hypershift.devcluster.openshift.com to be healthy
oauth.go:151: OAuth token retrieved successfully for user testuser
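oauth.go resolves the hosted cluster's OAuth route, waits for it to report healthy, and then obtains tokens for kubeadmin and testuser. OpenShift hands tokens to password users through the openshift-challenging-client implicit flow: a basic-auth GET to /oauth/authorize whose redirect carries the access token in the URL fragment. A hedged sketch of that flow (the helper in oauth.go may differ; the host and credentials are placeholders, and certificate verification is skipped only because this is a throwaway e2e route):

```go
package e2esketch

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"net/url"
)

// requestOAuthToken performs the openshift-challenging-client implicit grant against the
// hosted cluster's OAuth route and returns the access token from the redirect fragment.
func requestOAuthToken(oauthHost, user, password string) (string, error) {
	client := &http.Client{
		// The token is carried on a redirect; don't follow it, just read the Location header.
		CheckRedirect: func(req *http.Request, via []*http.Request) error { return http.ErrUseLastResponse },
		Transport:     &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}}, // e2e-only shortcut
	}
	authorizeURL := fmt.Sprintf("https://%s/oauth/authorize?response_type=token&client_id=openshift-challenging-client", oauthHost)
	req, err := http.NewRequest(http.MethodGet, authorizeURL, nil)
	if err != nil {
		return "", err
	}
	req.SetBasicAuth(user, password)
	req.Header.Set("X-CSRF-Token", "1") // the challenging flow requires a CSRF header

	resp, err := client.Do(req)
	if err != nil {
		return "", err
	}
	defer resp.Body.Close()

	location, err := url.Parse(resp.Header.Get("Location"))
	if err != nil {
		return "", err
	}
	// The implicit flow returns access_token in the fragment, e.g. #access_token=...&expires_in=...
	fragment, err := url.ParseQuery(location.Fragment)
	if err != nil {
		return "", err
	}
	token := fragment.Get("access_token")
	if token == "" {
		return "", fmt.Errorf("no access_token in OAuth redirect (status %d)", resp.StatusCode)
	}
	return token, nil
}
```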
util.go:3459: Successfully waited for Waiting for service account default/default to be provisioned... in 0s
eventually.go:104: Failed to get *v1.ServiceAccount: serviceaccounts "default" not found
util.go:3482: Successfully waited for Waiting for service account default/test-namespace to be provisioned... in 10s
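The intervening 'serviceaccounts "default" not found' line is the normal transient state while the controller manager provisions the namespace's default ServiceAccount; the helper simply polls the Get until it succeeds. A minimal equivalent (interval and timeout are placeholders):

```go
package e2esketch

import (
	"context"
	"time"

	apierrors "k8s.io/apimachinery/pkg/api/errors"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

// waitForDefaultServiceAccount polls until the "default" ServiceAccount exists in the
// namespace, tolerating NotFound while the service-account controller catches up.
func waitForDefaultServiceAccount(ctx context.Context, client kubernetes.Interface, namespace string) error {
	return wait.PollUntilContextTimeout(ctx, 2*time.Second, 2*time.Minute, true, func(ctx context.Context) (bool, error) {
		_, err := client.CoreV1().ServiceAccounts(namespace).Get(ctx, "default", metav1.GetOptions{})
		if apierrors.IsNotFound(err) {
			return false, nil // not provisioned yet, keep waiting
		}
		return err == nil, err
	})
}
```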
util.go:284: Successfully waited for kubeconfig to be published for HostedCluster e2e-clusters-dpjqv/custom-config-bnbbq in 0s
util.go:301: Successfully waited for kubeconfig secret to have data in 0s
util.go:363: Successfully waited for a successful connection to the guest API server in 0s
util.go:3597: Checking that Tuned resource type does not exist in guest cluster
util.go:3610: Checking that Profile resource type does not exist in guest cluster
util.go:3622: Checking that no tuned DaemonSet exists in guest cluster
util.go:3631: Checking that no tuned-related ConfigMaps exist in guest cluster
util.go:3656: NodeTuning capability disabled validation completed successfully
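With the NodeTuning capability disabled, the guest cluster should not serve the tuned.openshift.io API group at all, and no tuned DaemonSet or ConfigMaps should exist. A sketch of the discovery-based part of that validation; the DaemonSet and ConfigMap names util.go checks are not visible in this log, so only the API-group check is reproduced:

```go
package e2esketch

import (
	"fmt"

	apierrors "k8s.io/apimachinery/pkg/api/errors"
	"k8s.io/client-go/discovery"
)

// verifyTunedAPIAbsent asserts that the guest cluster does not serve the Node Tuning
// Operator's tuned.openshift.io/v1 group, i.e. Tuned and Profile resource types do not exist.
func verifyTunedAPIAbsent(dc discovery.DiscoveryInterface) error {
	resources, err := dc.ServerResourcesForGroupVersion("tuned.openshift.io/v1")
	if apierrors.IsNotFound(err) {
		return nil // the group is not served at all: NodeTuning is disabled as expected
	}
	if err != nil {
		return err
	}
	for _, r := range resources.APIResources {
		if r.Kind == "Tuned" || r.Kind == "Profile" {
			return fmt.Errorf("unexpected %s resource served by guest cluster", r.Kind)
		}
	}
	return nil
}
```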
util.go:3937: Updating HostedCluster e2e-clusters-dpjqv/custom-config-bnbbq with custom OVN internal subnets
util.go:3956: Validating CNO conditions on HostedControlPlane
util.go:3958: Successfully waited for HostedControlPlane e2e-clusters-dpjqv-custom-config-bnbbq/custom-config-bnbbq to have healthy CNO conditions in 0s
util.go:2949: Successfully waited for HostedCluster e2e-clusters-dpjqv/custom-config-bnbbq to have valid conditions in 0s
util.go:3985: Successfully waited for Network.operator.openshift.io/cluster in guest cluster to reflect the custom subnet changes in 3s
util.go:4015: Successfully waited for Network.config.openshift.io/cluster in guest cluster to be available in 0s
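The OVN step patches the HostedCluster's network operator configuration and then waits for Network.operator.openshift.io/cluster in the guest to reflect the custom internal subnets. A sketch of the guest-side read with a dynamic client; the exact field path under ovnKubernetesConfig (for example ipv4.internalJoinSubnet) varies by OpenShift version and is an assumption here:

```go
package e2esketch

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/apis/meta/v1/unstructured"
	"k8s.io/apimachinery/pkg/runtime/schema"
	"k8s.io/client-go/dynamic"
)

// guestOVNJoinSubnet reads Network.operator.openshift.io/cluster in the guest cluster and
// returns the configured internal join subnet, if present at the assumed field path.
func guestOVNJoinSubnet(ctx context.Context, dyn dynamic.Interface) (string, error) {
	gvr := schema.GroupVersionResource{Group: "operator.openshift.io", Version: "v1", Resource: "networks"}
	network, err := dyn.Resource(gvr).Get(ctx, "cluster", metav1.GetOptions{})
	if err != nil {
		return "", err
	}
	// Assumed path: spec.defaultNetwork.ovnKubernetesConfig.ipv4.internalJoinSubnet
	subnet, found, err := unstructured.NestedString(network.Object, "spec", "defaultNetwork", "ovnKubernetesConfig", "ipv4", "internalJoinSubnet")
	if err != nil || !found {
		return "", fmt.Errorf("internal join subnet not set (found=%v): %v", found, err)
	}
	return subnet, nil
}
```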
util.go:284: Successfully waited for kubeconfig to be published for HostedCluster e2e-clusters-dpjqv/custom-config-bnbbq in 0s
util.go:301: Successfully waited for kubeconfig secret to have data in 0s
util.go:363: Successfully waited for a successful connection to the guest API server in 0s
util.go:3224: Successfully waited for HostedCluster e2e-clusters-dpjqv/custom-config-bnbbq to have valid Status.Payload in 0s
util.go:1181: Connecting to kubernetes endpoint on: https://172.20.0.1:6443
util.go:2527: Checking that all ValidatingAdmissionPolicies are present
util.go:2553: Checking Denied KAS Requests for ValidatingAdmissionPolicies
util.go:2569: Checking ClusterOperator status modifications are allowed
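The ValidatingAdmissionPolicy checks confirm the policies HyperShift deploys into the guest are present, that requests those policies should deny are denied, and that ClusterOperator status updates remain allowed. A sketch of the presence check using the admissionregistration/v1 client (available in client-go releases where ValidatingAdmissionPolicy is GA); the expected policy names are placeholders rather than the list util.go uses:

```go
package e2esketch

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/sets"
	"k8s.io/client-go/kubernetes"
)

// verifyPoliciesPresent lists ValidatingAdmissionPolicies in the guest cluster and fails
// if any of the expected policy names is missing.
func verifyPoliciesPresent(ctx context.Context, client kubernetes.Interface, expected []string) error {
	policies, err := client.AdmissionregistrationV1().ValidatingAdmissionPolicies().List(ctx, metav1.ListOptions{})
	if err != nil {
		return err
	}
	found := sets.New[string]()
	for _, p := range policies.Items {
		found.Insert(p.Name)
	}
	if missing := sets.New(expected...).Difference(found); missing.Len() > 0 {
		return fmt.Errorf("missing ValidatingAdmissionPolicies: %v", sets.List(missing))
	}
	return nil
}
```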
hypershift_framework.go:430: Successfully created hostedcluster e2e-clusters-xk2md/private-x9czt in 50s
journals.go:245: Successfully copied machine journals to /logs/artifacts/TestCreateClusterPrivate/machine-journals
fixture.go:341: SUCCESS: found no remaining guest resources
hypershift_framework.go:491: Destroyed cluster. Namespace: e2e-clusters-xk2md, name: private-x9czt
hypershift_framework.go:446: archiving /logs/artifacts/TestCreateClusterPrivate/hostedcluster-private-x9czt to /logs/artifacts/TestCreateClusterPrivate/hostedcluster.tar.gz
util.go:284: Successfully waited for kubeconfig to be published for HostedCluster e2e-clusters-xk2md/private-x9czt in 2m48s
util.go:301: Successfully waited for kubeconfig secret to have data in 25ms
util.go:695: Successfully waited for NodePools for HostedCluster e2e-clusters-xk2md/private-x9czt to have all of their desired nodes in 9m0s
util.go:598: Successfully waited for HostedCluster e2e-clusters-xk2md/private-x9czt to rollout in 4m3s
util.go:2949: Successfully waited for HostedCluster e2e-clusters-xk2md/private-x9czt to have valid conditions in 0s
hypershift_framework.go:430: Successfully created hostedcluster e2e-clusters-z4x9x/private-vdmfl in 49s
journals.go:245: Successfully copied machine journals to /logs/artifacts/TestCreateClusterPrivateWithRouteKAS/machine-journals
util.go:284: Successfully waited for kubeconfig to be published for HostedCluster e2e-clusters-z4x9x/private-vdmfl in 1m54s
util.go:301: Successfully waited for kubeconfig secret to have data in 0s
util.go:695: Successfully waited for NodePools for HostedCluster e2e-clusters-z4x9x/private-vdmfl to have all of their desired nodes in 9m36s
util.go:598: Successfully waited for HostedCluster e2e-clusters-z4x9x/private-vdmfl to rollout in 4m0s
util.go:2949: Successfully waited for HostedCluster e2e-clusters-z4x9x/private-vdmfl to have valid conditions in 0s
util.go:284: Successfully waited for kubeconfig to be published for HostedCluster e2e-clusters-z4x9x/private-vdmfl in 0s
util.go:301: Successfully waited for kubeconfig secret to have data in 0s
util.go:284: Successfully waited for kubeconfig to be published for HostedCluster e2e-clusters-z4x9x/private-vdmfl in 0s
util.go:301: Successfully waited for kubeconfig secret to have data in 0s
create_cluster_test.go:2909: Found guest kubeconfig host before switching endpoint access: https://api-private-vdmfl.service.ci.hypershift.devcluster.openshift.com:443
util.go:420: Waiting for guest kubeconfig host to resolve to public address
util.go:426: failed to resolve guest kubeconfig host: lookup api-private-vdmfl.service.ci.hypershift.devcluster.openshift.com on 172.30.0.10:53: no such host
util.go:426: failed to resolve guest kubeconfig host: lookup api-private-vdmfl.service.ci.hypershift.devcluster.openshift.com on 172.30.0.10:53: no such host
util.go:426: failed to resolve guest kubeconfig host: lookup api-private-vdmfl.service.ci.hypershift.devcluster.openshift.com on 172.30.0.10:53: no such host
util.go:426: failed to resolve guest kubeconfig host: lookup api-private-vdmfl.service.ci.hypershift.devcluster.openshift.com on 172.30.0.10:53: no such host
util.go:426: failed to resolve guest kubeconfig host: lookup api-private-vdmfl.service.ci.hypershift.devcluster.openshift.com on 172.30.0.10:53: no such host
util.go:426: failed to resolve guest kubeconfig host: lookup api-private-vdmfl.service.ci.hypershift.devcluster.openshift.com on 172.30.0.10:53: no such host
util.go:426: failed to resolve guest kubeconfig host: lookup api-private-vdmfl.service.ci.hypershift.devcluster.openshift.com on 172.30.0.10:53: no such host
util.go:426: failed to resolve guest kubeconfig host: lookup api-private-vdmfl.service.ci.hypershift.devcluster.openshift.com on 172.30.0.10:53: no such host
util.go:437: kubeconfig host now resolves to public address
util.go:284: Successfully waited for kubeconfig to be published for HostedCluster e2e-clusters-z4x9x/private-vdmfl in 0s
util.go:301: Successfully waited for kubeconfig secret to have data in 0s
create_cluster_test.go:2909: Found guest kubeconfig host before switching endpoint access: https://api-private-vdmfl.service.ci.hypershift.devcluster.openshift.com:443
util.go:420: Waiting for guest kubeconfig host to resolve to private address
util.go:426: failed to resolve guest kubeconfig host: lookup api-private-vdmfl.service.ci.hypershift.devcluster.openshift.com on 172.30.0.10:53: no such host
util.go:426: failed to resolve guest kubeconfig host: lookup api-private-vdmfl.service.ci.hypershift.devcluster.openshift.com on 172.30.0.10:53: no such host
util.go:432: kubeconfig host now resolves to private address
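When the test flips the cluster's endpoint access between Public and Private, util.go:420 waits for the kubeconfig host to resolve to the right kind of address; the 'no such host' lines are simply retries while the DNS record is replaced. A minimal version of that wait using the standard resolver (interval and timeout are placeholders):

```go
package e2esketch

import (
	"context"
	"fmt"
	"net"
	"time"

	"k8s.io/apimachinery/pkg/util/wait"
)

// waitForHostResolution polls DNS for the guest kubeconfig host until every returned
// address matches the expected kind (private RFC1918 vs. public).
func waitForHostResolution(ctx context.Context, host string, wantPrivate bool) error {
	return wait.PollUntilContextTimeout(ctx, 10*time.Second, 10*time.Minute, true, func(ctx context.Context) (bool, error) {
		addrs, err := net.DefaultResolver.LookupHost(ctx, host)
		if err != nil {
			fmt.Printf("failed to resolve guest kubeconfig host: %v\n", err)
			return false, nil // NXDOMAIN while the record is being swapped; keep polling
		}
		for _, a := range addrs {
			ip := net.ParseIP(a)
			if ip == nil || ip.IsPrivate() != wantPrivate {
				return false, nil
			}
		}
		return true, nil
	})
}
```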
util.go:3224: Successfully waited for HostedCluster e2e-clusters-z4x9x/private-vdmfl to have valid Status.Payload in 0s
util.go:1181: Connecting to kubernetes endpoint on: https://172.20.0.1:6443
util.go:1181: Connecting to kubernetes endpoint on: https://172.20.0.1:6443
util.go:3224: Successfully waited for HostedCluster e2e-clusters-z4x9x/private-vdmfl to have valid Status.Payload in 0s
hypershift_framework.go:430: Successfully created hostedcluster e2e-clusters-24pbp/proxy-k6t52 in 36s
journals.go:234: Error copying machine journals to artifacts directory: exit status 1
util.go:284: Successfully waited for kubeconfig to be published for HostedCluster e2e-clusters-24pbp/proxy-k6t52 in 1m45s
util.go:301: Successfully waited for kubeconfig secret to have data in 0s
eventually.go:104: Failed to get *v1.SelfSubjectReview: Post "https://api-proxy-k6t52.service.ci.hypershift.devcluster.openshift.com:443/apis/authentication.k8s.io/v1/selfsubjectreviews": dial tcp: lookup api-proxy-k6t52.service.ci.hypershift.devcluster.openshift.com on 172.30.0.10:53: no such host
eventually.go:104: Failed to get *v1.SelfSubjectReview: Post "https://api-proxy-k6t52.service.ci.hypershift.devcluster.openshift.com:443/apis/authentication.k8s.io/v1/selfsubjectreviews": dial tcp 3.221.29.103:443: i/o timeout
eventually.go:104: Failed to get *v1.SelfSubjectReview: Post "https://api-proxy-k6t52.service.ci.hypershift.devcluster.openshift.com:443/apis/authentication.k8s.io/v1/selfsubjectreviews": dial tcp 3.212.58.123:443: i/o timeout
util.go:363: Successfully waited for a successful connection to the guest API server in 1m33.025s
util.go:565: Successfully waited for 2 nodes to become ready in 8m9s
util.go:598: Successfully waited for HostedCluster e2e-clusters-24pbp/proxy-k6t52 to rollout in 6m36s
util.go:2949: Successfully waited for HostedCluster e2e-clusters-24pbp/proxy-k6t52 to have valid conditions in 0s
util.go:565: Successfully waited for 2 nodes to become ready for NodePool e2e-clusters-24pbp/proxy-k6t52-us-east-1a in 25ms
util.go:284: Successfully waited for kubeconfig to be published for HostedCluster e2e-clusters-24pbp/proxy-k6t52 in 0s
util.go:301: Successfully waited for kubeconfig secret to have data in 0s
util.go:284: Successfully waited for kubeconfig to be published for HostedCluster e2e-clusters-24pbp/proxy-k6t52 in 0s
util.go:301: Successfully waited for kubeconfig secret to have data in 0s
util.go:4095: Successfully validated configuration authentication status consistency across HCP, HC, and guest cluster
util.go:284: Successfully waited for kubeconfig to be published for HostedCluster e2e-clusters-24pbp/proxy-k6t52 in 0s
util.go:301: Successfully waited for kubeconfig secret to have data in 0s
util.go:363: Successfully waited for a successful connection to the guest API server in 0s
util.go:3224: Successfully waited for HostedCluster e2e-clusters-24pbp/proxy-k6t52 to have valid Status.Payload in 0s
util.go:1181: Connecting to kubernetes endpoint on: https://172.20.0.1:6443
util.go:2527: Checking that all ValidatingAdmissionPolicies are present
util.go:2553: Checking Denied KAS Requests for ValidatingAdmissionPolicies
util.go:2569: Checking ClusterOperator status modifications are allowed
requestserving.go:105: Created request serving nodepool clusters/4f3d44b3c98c7229cf50-mgmt-reqserving-gbt5q
requestserving.go:105: Created request serving nodepool clusters/4f3d44b3c98c7229cf50-mgmt-reqserving-xs5f8
requestserving.go:113: Created non request serving nodepool clusters/4f3d44b3c98c7229cf50-mgmt-non-reqserving-rjhs9
requestserving.go:113: Created non request serving nodepool clusters/4f3d44b3c98c7229cf50-mgmt-non-reqserving-9rk28
requestserving.go:113: Created non request serving nodepool clusters/4f3d44b3c98c7229cf50-mgmt-non-reqserving-wjfwz
util.go:565: Successfully waited for 1 nodes to become ready for NodePool clusters/4f3d44b3c98c7229cf50-mgmt-reqserving-gbt5q in 3m33s
util.go:565: Successfully waited for 1 nodes to become ready for NodePool clusters/4f3d44b3c98c7229cf50-mgmt-reqserving-xs5f8 in 57s
util.go:565: Successfully waited for 1 nodes to become ready for NodePool clusters/4f3d44b3c98c7229cf50-mgmt-non-reqserving-rjhs9 in 0s
util.go:565: Successfully waited for 1 nodes to become ready for NodePool clusters/4f3d44b3c98c7229cf50-mgmt-non-reqserving-9rk28 in 100ms
util.go:565: Successfully waited for 1 nodes to become ready for NodePool clusters/4f3d44b3c98c7229cf50-mgmt-non-reqserving-wjfwz in 42s
create_cluster_test.go:2670: Sufficient zones available for InfrastructureAvailabilityPolicy HighlyAvailable
hypershift_framework.go:430: Successfully created hostedcluster e2e-clusters-29smm/request-serving-isolation-pfcwc in 27s
util.go:284: Successfully waited for kubeconfig to be published for HostedCluster e2e-clusters-29smm/request-serving-isolation-pfcwc in 2m3s
util.go:301: Successfully waited for kubeconfig secret to have data in 0s
eventually.go:104: Failed to get *v1.SelfSubjectReview: Post "https://api-request-serving-isolation-pfcwc.service.ci.hypershift.devcluster.openshift.com:443/apis/authentication.k8s.io/v1/selfsubjectreviews": dial tcp: lookup api-request-serving-isolation-pfcwc.service.ci.hypershift.devcluster.openshift.com on 172.30.0.10:53: no such host
eventually.go:104: Failed to get *v1.SelfSubjectReview: Post "https://api-request-serving-isolation-pfcwc.service.ci.hypershift.devcluster.openshift.com:443/apis/authentication.k8s.io/v1/selfsubjectreviews": dial tcp 52.44.42.22:443: i/o timeout
util.go:363: Successfully waited for a successful connection to the guest API server in 1m6.15s
util.go:565: Successfully waited for 3 nodes to become ready in 8m18s
util.go:598: Successfully waited for HostedCluster e2e-clusters-29smm/request-serving-isolation-pfcwc to rollout in 6m3s
util.go:2949: Successfully waited for HostedCluster e2e-clusters-29smm/request-serving-isolation-pfcwc to have valid conditions in 30s
util.go:565: Successfully waited for 1 nodes to become ready for NodePool e2e-clusters-29smm/request-serving-isolation-pfcwc-us-east-1a in 25ms
util.go:565: Successfully waited for 1 nodes to become ready for NodePool e2e-clusters-29smm/request-serving-isolation-pfcwc-us-east-1b in 0s
util.go:565: Successfully waited for 1 nodes to become ready for NodePool e2e-clusters-29smm/request-serving-isolation-pfcwc-us-east-1c in 0s
util.go:284: Successfully waited for kubeconfig to be published for HostedCluster e2e-clusters-29smm/request-serving-isolation-pfcwc in 0s
util.go:301: Successfully waited for kubeconfig secret to have data in 0s
util.go:284: Successfully waited for kubeconfig to be published for HostedCluster e2e-clusters-29smm/request-serving-isolation-pfcwc in 0s
util.go:301: Successfully waited for kubeconfig secret to have data in 0s
util.go:4095: Successfully validated configuration authentication status consistency across HCP, HC, and guest cluster
util.go:284: Successfully waited for kubeconfig to be published for HostedCluster e2e-clusters-29smm/request-serving-isolation-pfcwc in 0s
util.go:301: Successfully waited for kubeconfig secret to have data in 0s
util.go:363: Successfully waited for a successful connection to the guest API server in 0s
util.go:284: Successfully waited for kubeconfig to be published for HostedCluster e2e-clusters-29smm/request-serving-isolation-pfcwc in 0s
util.go:301: Successfully waited for kubeconfig secret to have data in 0s
util.go:363: Successfully waited for a successful connection to the guest API server in 0s
util.go:3224: Successfully waited for HostedCluster e2e-clusters-29smm/request-serving-isolation-pfcwc to have valid Status.Payload in 0s
util.go:822: error: non-fatal, observed FailedScheduling or Preempted event: 0/8 nodes are available: 2 node(s) didn't match pod anti-affinity rules, 6 node(s) didn't match Pod's node affinity/selector. preemption: not eligible due to preemptionPolicy=Never.
util.go:822: error: non-fatal, observed FailedScheduling or Preempted event: 0/8 nodes are available: 2 node(s) didn't match pod anti-affinity rules, 6 node(s) didn't match Pod's node affinity/selector. preemption: not eligible due to preemptionPolicy=Never.
util.go:822: error: non-fatal, observed FailedScheduling or Preempted event: 0/8 nodes are available: 2 node(s) didn't match pod anti-affinity rules, 6 node(s) didn't match Pod's node affinity/selector. preemption: not eligible due to preemptionPolicy=Never.
util.go:822: error: non-fatal, observed FailedScheduling or Preempted event: 0/8 nodes are available: 2 node(s) had untolerated taint {hypershift.openshift.io/request-serving-component: true}, 3 node(s) didn't match Pod's node affinity/selector, 3 node(s) didn't match pod anti-affinity rules. preemption: not eligible due to preemptionPolicy=Never.
util.go:822: error: non-fatal, observed FailedScheduling or Preempted event: 0/8 nodes are available: 2 node(s) had untolerated taint {hypershift.openshift.io/request-serving-component: true}, 3 node(s) didn't match Pod's node affinity/selector, 3 node(s) didn't match pod anti-affinity rules. preemption: not eligible due to preemptionPolicy=Never.
util.go:822: error: non-fatal, observed FailedScheduling or Preempted event: 0/8 nodes are available: 2 node(s) didn't match pod anti-affinity rules, 6 node(s) didn't match Pod's node affinity/selector. preemption: not eligible due to preemptionPolicy=Never.
util.go:822: error: non-fatal, observed FailedScheduling or Preempted event: 0/8 nodes are available: 2 node(s) didn't match pod anti-affinity rules, 6 node(s) didn't match Pod's node affinity/selector. preemption: not eligible due to preemptionPolicy=Never.
util.go:822: error: non-fatal, observed FailedScheduling or Preempted event: 0/8 nodes are available: 2 node(s) didn't match pod anti-affinity rules, 6 node(s) didn't match Pod's node affinity/selector. preemption: not eligible due to preemptionPolicy=Never.
util.go:822: error: non-fatal, observed FailedScheduling or Preempted event: 0/8 nodes are available: 2 node(s) had untolerated taint {hypershift.openshift.io/request-serving-component: true}, 3 node(s) didn't match Pod's node affinity/selector, 3 node(s) didn't match pod anti-affinity rules. preemption: not eligible due to preemptionPolicy=Never.
util.go:822: error: non-fatal, observed FailedScheduling or Preempted event: 0/8 nodes are available: 2 node(s) had untolerated taint {hypershift.openshift.io/request-serving-component: true}, 3 node(s) didn't match Pod's node affinity/selector, 3 node(s) didn't match pod anti-affinity rules. preemption: not eligible due to preemptionPolicy=Never.
util.go:822: error: non-fatal, observed FailedScheduling or Preempted event: 0/8 nodes are available: 2 node(s) had untolerated taint {hypershift.openshift.io/request-serving-component: true}, 3 node(s) didn't match Pod's node affinity/selector, 3 node(s) didn't match pod anti-affinity rules. preemption: not eligible due to preemptionPolicy=Never.
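The non-fatal FailedScheduling messages come from a watcher that surfaces scheduling events for control plane pods while request-serving placement settles; the untolerated taint in those events, hypershift.openshift.io/request-serving-component, is what keeps non-serving workloads off the dedicated request-serving nodes. A sketch of such an event scan (namespace handling and field selectors simplified relative to util.go):

```go
package e2esketch

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// logSchedulingEvents reports FailedScheduling/Preempted events in the hosted control
// plane namespace as non-fatal diagnostics, mirroring the util.go:822 lines above.
func logSchedulingEvents(ctx context.Context, client kubernetes.Interface, namespace string) error {
	for _, reason := range []string{"FailedScheduling", "Preempted"} {
		events, err := client.CoreV1().Events(namespace).List(ctx, metav1.ListOptions{
			FieldSelector: "reason=" + reason,
		})
		if err != nil {
			return err
		}
		for _, ev := range events.Items {
			fmt.Printf("error: non-fatal, observed %s event: %s\n", reason, ev.Message)
		}
	}
	return nil
}
```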
util.go:1181: Connecting to kubernetes endpoint on: https://172.20.0.1:6443
util.go:2527: Checking that all ValidatingAdmissionPolicies are present
util.go:2553: Checking Denied KAS Requests for ValidatingAdmissionPolicies
util.go:2569: Checking ClusterOperator status modifications are allowed
util.go:284: Successfully waited for kubeconfig to be published for HostedCluster e2e-clusters-29smm/request-serving-isolation-pfcwc in 0s
util.go:301: Successfully waited for kubeconfig secret to have data in 0s
util.go:363: Successfully waited for a successful connection to the guest API server in 0s
util.go:1181: Connecting to kubernetes endpoint on: https://172.20.0.1:6443
util.go:3224: Successfully waited for HostedCluster e2e-clusters-29smm/request-serving-isolation-pfcwc to have valid Status.Payload in 0s
util.go:2553: Checking Denied KAS Requests for ValidatingAdmissionPolicies
util.go:2569: Checking ClusterOperator status modifications are allowed
util.go:2527: Checking that all ValidatingAdmissionPolicies are present
util.go:822: error: non-fatal, observed FailedScheduling or Preempted event: 0/8 nodes are available: 2 node(s) didn't match pod anti-affinity rules, 6 node(s) didn't match Pod's node affinity/selector. preemption: not eligible due to preemptionPolicy=Never.
util.go:822: error: non-fatal, observed FailedScheduling or Preempted event: 0/8 nodes are available: 2 node(s) didn't match pod anti-affinity rules, 6 node(s) didn't match Pod's node affinity/selector. preemption: not eligible due to preemptionPolicy=Never.
util.go:822: error: non-fatal, observed FailedScheduling or Preempted event: 0/8 nodes are available: 2 node(s) didn't match pod anti-affinity rules, 6 node(s) didn't match Pod's node affinity/selector. preemption: not eligible due to preemptionPolicy=Never.
util.go:822: error: non-fatal, observed FailedScheduling or Preempted event: 0/8 nodes are available: 2 node(s) had untolerated taint {hypershift.openshift.io/request-serving-component: true}, 3 node(s) didn't match Pod's node affinity/selector, 3 node(s) didn't match pod anti-affinity rules. preemption: not eligible due to preemptionPolicy=Never.
util.go:822: error: non-fatal, observed FailedScheduling or Preempted event: 0/8 nodes are available: 2 node(s) had untolerated taint {hypershift.openshift.io/request-serving-component: true}, 3 node(s) didn't match Pod's node affinity/selector, 3 node(s) didn't match pod anti-affinity rules. preemption: not eligible due to preemptionPolicy=Never.
util.go:822: error: non-fatal, observed FailedScheduling or Preempted event: 0/8 nodes are available: 2 node(s) didn't match pod anti-affinity rules, 6 node(s) didn't match Pod's node affinity/selector. preemption: not eligible due to preemptionPolicy=Never.
util.go:822: error: non-fatal, observed FailedScheduling or Preempted event: 0/8 nodes are available: 2 node(s) didn't match pod anti-affinity rules, 6 node(s) didn't match Pod's node affinity/selector. preemption: not eligible due to preemptionPolicy=Never.
util.go:822: error: non-fatal, observed FailedScheduling or Preempted event: 0/8 nodes are available: 2 node(s) didn't match pod anti-affinity rules, 6 node(s) didn't match Pod's node affinity/selector. preemption: not eligible due to preemptionPolicy=Never.
util.go:822: error: non-fatal, observed FailedScheduling or Preempted event: 0/8 nodes are available: 2 node(s) had untolerated taint {hypershift.openshift.io/request-serving-component: true}, 3 node(s) didn't match Pod's node affinity/selector, 3 node(s) didn't match pod anti-affinity rules. preemption: not eligible due to preemptionPolicy=Never.
util.go:822: error: non-fatal, observed FailedScheduling or Preempted event: 0/8 nodes are available: 2 node(s) had untolerated taint {hypershift.openshift.io/request-serving-component: true}, 3 node(s) didn't match Pod's node affinity/selector, 3 node(s) didn't match pod anti-affinity rules. preemption: not eligible due to preemptionPolicy=Never.
util.go:822: error: non-fatal, observed FailedScheduling or Preempted event: 0/8 nodes are available: 2 node(s) had untolerated taint {hypershift.openshift.io/request-serving-component: true}, 3 node(s) didn't match Pod's node affinity/selector, 3 node(s) didn't match pod anti-affinity rules. preemption: not eligible due to preemptionPolicy=Never.
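Note: the FailedScheduling events above are non-fatal. They record pods that either lack a toleration for the hypershift.openshift.io/request-serving-component taint on the two request-serving nodes, or are kept off the remaining nodes by node affinity and pod anti-affinity. As an illustrative sketch only (the taint key and value are taken from the events; the NoSchedule effect is an assumption, since the effect is not shown in the log), a pod intended for those nodes would carry a toleration along these lines:

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	// Illustrative only: key/value copied from the scheduler events above;
	// the NoSchedule effect is an assumption, not taken from the log.
	tol := corev1.Toleration{
		Key:      "hypershift.openshift.io/request-serving-component",
		Operator: corev1.TolerationOpEqual,
		Value:    "true",
		Effect:   corev1.TaintEffectNoSchedule,
	}
	fmt.Printf("toleration: %+v\n", tol)
}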
util.go:284: Successfully waited for kubeconfig to be published for HostedCluster e2e-clusters-29smm/request-serving-isolation-pfcwc in 0s
util.go:301: Successfully waited for kubeconfig secret to have data in 0s
util.go:363: Successfully waited for a successful connection to the guest API server in 0s
util.go:284: Successfully waited for kubeconfig to be published for HostedCluster e2e-clusters-29smm/request-serving-isolation-pfcwc in 2m3s
util.go:301: Successfully waited for kubeconfig secret to have data in 0s
eventually.go:104: Failed to get *v1.SelfSubjectReview: Post "https://api-request-serving-isolation-pfcwc.service.ci.hypershift.devcluster.openshift.com:443/apis/authentication.k8s.io/v1/selfsubjectreviews": dial tcp: lookup api-request-serving-isolation-pfcwc.service.ci.hypershift.devcluster.openshift.com on 172.30.0.10:53: no such host
eventually.go:104: Failed to get *v1.SelfSubjectReview: Post "https://api-request-serving-isolation-pfcwc.service.ci.hypershift.devcluster.openshift.com:443/apis/authentication.k8s.io/v1/selfsubjectreviews": dial tcp 52.44.42.22:443: i/o timeout
util.go:363: Successfully waited for a successful connection to the guest API server in 1m6.15s
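Note: the eventually.go retries above ("no such host", "i/o timeout", "connection refused") are expected while the guest API endpoint's DNS record and load balancer come up; util.go:363 simply keeps issuing a SelfSubjectReview until one succeeds. A minimal sketch of such a readiness poll is shown below. It is not the repo's actual helper; the kubeconfig path is hypothetical, and the SelfSubjectReviews v1 client assumes a recent client-go (v0.28+).

package main

import (
	"context"
	"fmt"
	"time"

	authenticationv1 "k8s.io/api/authentication/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Hypothetical path; in the e2e suite the guest kubeconfig comes from the
	// secret referenced in the "kubeconfig secret to have data" log lines.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/tmp/guest.kubeconfig")
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	// Keep polling until the guest API server answers; DNS and connection
	// errors are swallowed so the poll continues until the timeout.
	err = wait.PollUntilContextTimeout(context.Background(), 5*time.Second, 10*time.Minute, true,
		func(ctx context.Context) (bool, error) {
			_, err := client.AuthenticationV1().SelfSubjectReviews().Create(
				ctx, &authenticationv1.SelfSubjectReview{}, metav1.CreateOptions{})
			if err != nil {
				fmt.Printf("not ready yet: %v\n", err)
				return false, nil
			}
			return true, nil
		})
	if err != nil {
		panic(err)
	}
	fmt.Println("guest API server is reachable")
}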
util.go:565: Successfully waited for 3 nodes to become ready in 8m18s
util.go:598: Successfully waited for HostedCluster e2e-clusters-29smm/request-serving-isolation-pfcwc to rollout in 6m3s
util.go:2949: Successfully waited for HostedCluster e2e-clusters-29smm/request-serving-isolation-pfcwc to have valid conditions in 30s
util.go:284: Successfully waited for kubeconfig to be published for HostedCluster e2e-clusters-29smm/request-serving-isolation-pfcwc in 0s
util.go:301: Successfully waited for kubeconfig secret to have data in 0s
util.go:284: Successfully waited for kubeconfig to be published for HostedCluster e2e-clusters-29smm/request-serving-isolation-pfcwc in 0s
util.go:301: Successfully waited for kubeconfig secret to have data in 0s
util.go:565: Successfully waited for 1 nodes to become ready for NodePool e2e-clusters-29smm/request-serving-isolation-pfcwc-us-east-1a in 25ms
util.go:565: Successfully waited for 1 nodes to become ready for NodePool e2e-clusters-29smm/request-serving-isolation-pfcwc-us-east-1b in 0s
util.go:565: Successfully waited for 1 nodes to become ready for NodePool e2e-clusters-29smm/request-serving-isolation-pfcwc-us-east-1c in 0s
util.go:4095: Successfully validated configuration authentication status consistency across HCP, HC, and guest cluster
hypershift_framework.go:430: Successfully created hostedcluster e2e-clusters-6s5sx/node-pool-lwnv4 in 26s
nodepool_test.go:150: tests only supported on platform KubeVirt
hypershift_framework.go:430: Successfully created hostedcluster e2e-clusters-5zhpc/node-pool-jxk2c in 28s
util.go:284: Successfully waited for kubeconfig to be published for HostedCluster e2e-clusters-6s5sx/node-pool-lwnv4 in 1m36.025s
util.go:301: Successfully waited for kubeconfig secret to have data in 0s
eventually.go:104: Failed to get *v1.SelfSubjectReview: Post "https://api-node-pool-lwnv4.service.ci.hypershift.devcluster.openshift.com:443/apis/authentication.k8s.io/v1/selfsubjectreviews": dial tcp: lookup api-node-pool-lwnv4.service.ci.hypershift.devcluster.openshift.com on 172.30.0.10:53: no such host
eventually.go:104: Failed to get *v1.SelfSubjectReview: Post "https://api-node-pool-lwnv4.service.ci.hypershift.devcluster.openshift.com:443/apis/authentication.k8s.io/v1/selfsubjectreviews": dial tcp 98.95.159.19:443: i/o timeout
eventually.go:104: Failed to get *v1.SelfSubjectReview: Post "https://api-node-pool-lwnv4.service.ci.hypershift.devcluster.openshift.com:443/apis/authentication.k8s.io/v1/selfsubjectreviews": dial tcp 44.198.239.251:443: i/o timeout
util.go:363: Successfully waited for a successful connection to the guest API server in 1m30.025s
util.go:565: Successfully waited for 0 nodes to become ready in 0s
util.go:2949: Successfully waited for HostedCluster e2e-clusters-6s5sx/node-pool-lwnv4 to have valid conditions in 2m30s
util.go:284: Successfully waited for kubeconfig to be published for HostedCluster e2e-clusters-5zhpc/node-pool-jxk2c in 1m42s
util.go:301: Successfully waited for kubeconfig secret to have data in 0s
eventually.go:104: Failed to get *v1.SelfSubjectReview: Post "https://api-node-pool-jxk2c.service.ci.hypershift.devcluster.openshift.com:443/apis/authentication.k8s.io/v1/selfsubjectreviews": dial tcp: lookup api-node-pool-jxk2c.service.ci.hypershift.devcluster.openshift.com on 172.30.0.10:53: no such host
eventually.go:104: Failed to get *v1.SelfSubjectReview: Post "https://api-node-pool-jxk2c.service.ci.hypershift.devcluster.openshift.com:443/apis/authentication.k8s.io/v1/selfsubjectreviews": dial tcp 3.218.79.172:443: i/o timeout
eventually.go:104: Failed to get *v1.SelfSubjectReview: Post "https://api-node-pool-jxk2c.service.ci.hypershift.devcluster.openshift.com:443/apis/authentication.k8s.io/v1/selfsubjectreviews": dial tcp 54.225.156.83:443: i/o timeout
eventually.go:104: Failed to get *v1.SelfSubjectReview: Post "https://api-node-pool-jxk2c.service.ci.hypershift.devcluster.openshift.com:443/apis/authentication.k8s.io/v1/selfsubjectreviews": dial tcp 54.225.156.83:443: connect: connection refused
eventually.go:104: Failed to get *v1.SelfSubjectReview: Post "https://api-node-pool-jxk2c.service.ci.hypershift.devcluster.openshift.com:443/apis/authentication.k8s.io/v1/selfsubjectreviews": dial tcp 3.218.79.172:443: connect: connection refused
eventually.go:104: Failed to get *v1.SelfSubjectReview: Post "https://api-node-pool-jxk2c.service.ci.hypershift.devcluster.openshift.com:443/apis/authentication.k8s.io/v1/selfsubjectreviews": dial tcp 54.225.156.83:443: connect: connection refused
eventually.go:104: Failed to get *v1.SelfSubjectReview: Post "https://api-node-pool-jxk2c.service.ci.hypershift.devcluster.openshift.com:443/apis/authentication.k8s.io/v1/selfsubjectreviews": dial tcp 18.205.217.52:443: connect: connection refused
eventually.go:104: Failed to get *v1.SelfSubjectReview: Post "https://api-node-pool-jxk2c.service.ci.hypershift.devcluster.openshift.com:443/apis/authentication.k8s.io/v1/selfsubjectreviews": dial tcp 3.218.79.172:443: connect: connection refused
util.go:363: Successfully waited for a successful connection to the guest API server in 2m9.025s
util.go:565: Successfully waited for 0 nodes to become ready in 25ms
util.go:2949: Successfully waited for HostedCluster e2e-clusters-5zhpc/node-pool-jxk2c to have valid conditions in 2m15s
util.go:565: Successfully waited for 0 nodes to become ready for NodePool e2e-clusters-6s5sx/node-pool-lwnv4-us-east-1b in 25ms
util.go:284: Successfully waited for kubeconfig to be published for HostedCluster e2e-clusters-6s5sx/node-pool-lwnv4 in 0s
util.go:301: Successfully waited for kubeconfig secret to have data in 0s
util.go:4095: Successfully validated configuration authentication status consistency across HCP, HC, and guest cluster
util.go:284: Successfully waited for kubeconfig to be published for HostedCluster e2e-clusters-6s5sx/node-pool-lwnv4 in 0s
util.go:301: Successfully waited for kubeconfig secret to have data in 0s
util.go:363: Successfully waited for a successful connection to the guest API server in 0s
nodepool_kms_root_volume_test.go:42: Starting test KMSRootVolumeTest
nodepool_kms_root_volume_test.go:54: retrieved KMS ARN: arn:aws:kms:us-east-1:820196288204:key/d3cdd9e0-3fd1-47a4-a559-72ae3672c5a6
util.go:565: Successfully waited for 1 nodes to become ready for NodePool e2e-clusters-6s5sx/node-pool-lwnv4-test-autorepair in 9m54s
nodepool_test.go:395: Successfully waited for NodePool e2e-clusters-6s5sx/node-pool-lwnv4-test-autorepair to have correct status in 0s
nodepool_autorepair_test.go:65: Terminating AWS Instance with an autorepair NodePool
nodepool_autorepair_test.go:70: Terminating AWS instance: i-053093dc280d96f8f
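Note: the autorepair test deletes the backing EC2 instance and then relies on the NodePool's machine health checking to replace the node. A minimal sketch of that termination with the AWS SDK for Go v2 is below; it is not the test's actual helper, the instance ID is copied from the log line above, and credentials/region are taken from the default configuration chain.

package main

import (
	"context"

	"github.com/aws/aws-sdk-go-v2/config"
	"github.com/aws/aws-sdk-go-v2/service/ec2"
)

func main() {
	ctx := context.Background()
	cfg, err := config.LoadDefaultConfig(ctx)
	if err != nil {
		panic(err)
	}
	client := ec2.NewFromConfig(cfg)
	// Instance ID copied from the log above; the replacement node is expected
	// to be created by the NodePool autorepair machinery, not by this call.
	_, err = client.TerminateInstances(ctx, &ec2.TerminateInstancesInput{
		InstanceIds: []string{"i-053093dc280d96f8f"},
	})
	if err != nil {
		panic(err)
	}
}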
nodepool_machineconfig_test.go:54: Starting test NodePoolMachineconfigRolloutTest
util.go:565: Successfully waited for 1 nodes to become ready for NodePool e2e-clusters-6s5sx/node-pool-lwnv4-test-machineconfig in 9m57s
nodepool_test.go:395: Successfully waited for NodePool e2e-clusters-6s5sx/node-pool-lwnv4-test-machineconfig to have correct status in 0s
util.go:474: Successfully waited for NodePool e2e-clusters-6s5sx/node-pool-lwnv4-test-machineconfig to start config update in 15s
nodepool_nto_machineconfig_test.go:67: Starting test NTOMachineConfigRolloutTest
nodepool_nto_machineconfig_test.go:67: Starting test NTOMachineConfigRolloutTest
nodepool_upgrade_test.go:99: starting test NodePoolUpgradeTest
nodepool_upgrade_test.go:99: starting test NodePoolUpgradeTest
nodepool_kv_cache_image_test.go:42: test only supported on platform KubeVirt
util.go:565: Successfully waited for 2 nodes to become ready for NodePool e2e-clusters-6s5sx/node-pool-lwnv4-test-rolling-upgrade in 6m42s
nodepool_test.go:395: Successfully waited for NodePool e2e-clusters-6s5sx/node-pool-lwnv4-test-rolling-upgrade to have correct status in 0s
nodepool_rolling_upgrade_test.go:106: Successfully waited for NodePool e2e-clusters-6s5sx/node-pool-lwnv4-test-rolling-upgrade to start the rolling upgrade in 3s
nodepool_kv_qos_guaranteed_test.go:43: test only supported on platform KubeVirt
nodepool_kv_jsonpatch_test.go:42: test only supported on platform KubeVirt
nodepool_kv_nodeselector_test.go:48: test only supported on platform KubeVirt
nodepool_kv_multinet_test.go:36: test only supported on platform KubeVirt
nodepool_osp_advanced_test.go:53: Starting test OpenStackAdvancedTest
nodepool_osp_advanced_test.go:56: test only supported on platform OpenStack
nodepool_nto_performanceprofile_test.go:59: Starting test NTOPerformanceProfileTest
util.go:565: Successfully waited for 1 nodes to become ready for NodePool e2e-clusters-6s5sx/node-pool-lwnv4-test-ntoperformanceprofile in 6m45s
nodepool_test.go:395: Successfully waited for NodePool e2e-clusters-6s5sx/node-pool-lwnv4-test-ntoperformanceprofile to have correct status in 0s
nodepool_nto_performanceprofile_test.go:80: Entering NTO PerformanceProfile test
nodepool_nto_performanceprofile_test.go:110: Hosted control plane namespace is e2e-clusters-6s5sx-node-pool-lwnv4
nodepool_nto_performanceprofile_test.go:112: Successfully waited for performance profile ConfigMap to exist with correct name labels and annotations in 3s
nodepool_nto_performanceprofile_test.go:159: Successfully waited for performance profile status ConfigMap to exist in 0s
nodepool_nto_performanceprofile_test.go:201: Successfully waited for performance profile status to be reflected under the NodePool status in 0s
nodepool_nto_performanceprofile_test.go:254: Deleting configmap reference from nodepool ...
nodepool_nto_performanceprofile_test.go:261: Successfully waited for performance profile ConfigMap to be deleted in 3s
nodepool_nto_performanceprofile_test.go:280: Ending NTO PerformanceProfile test: OK
nodepool_test.go:395: Successfully waited for NodePool e2e-clusters-6s5sx/node-pool-lwnv4-test-ntoperformanceprofile to have correct status in 30s
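Note: the NTO PerformanceProfile test attaches a ConfigMap carrying a PerformanceProfile to the NodePool and later drops the reference again (the "Deleting configmap reference from nodepool" step above), then waits for the generated ConfigMaps to disappear. Assuming the NodePool spec.tuningConfig field and the hypershift.openshift.io/v1beta1 API group, the removal could be expressed roughly as the merge patch below; this is a sketch with a dynamic client, not the test's implementation, and the kubeconfig path is hypothetical.

package main

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/runtime/schema"
	"k8s.io/apimachinery/pkg/types"
	"k8s.io/client-go/dynamic"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Hypothetical management-cluster kubeconfig path.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/tmp/mgmt.kubeconfig")
	if err != nil {
		panic(err)
	}
	client, err := dynamic.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	nodePools := schema.GroupVersionResource{
		Group: "hypershift.openshift.io", Version: "v1beta1", Resource: "nodepools",
	}
	// Assumed field name: spec.tuningConfig holds ConfigMap references; an
	// empty list in a merge patch drops the PerformanceProfile reference.
	patch := []byte(`{"spec":{"tuningConfig":[]}}`)
	_, err = client.Resource(nodePools).Namespace("e2e-clusters-6s5sx").Patch(
		context.Background(), "node-pool-lwnv4-test-ntoperformanceprofile",
		types.MergePatchType, patch, metav1.PatchOptions{})
	if err != nil {
		panic(err)
	}
}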
nodepool_prev_release_test.go:33: Starting NodePoolPrevReleaseCreateTest.
nodepool_prev_release_test.go:33: Starting NodePoolPrevReleaseCreateTest.
util.go:565: Successfully waited for 1 nodes to become ready for NodePool e2e-clusters-6s5sx/node-pool-lwnv4-wvn7j in 12m48s
nodepool_test.go:395: Successfully waited for NodePool e2e-clusters-6s5sx/node-pool-lwnv4-wvn7j to have correct status in 0s
nodepool_prev_release_test.go:57: NodePoolPrevReleaseCreateTest tests the creation of a NodePool with previous OCP release.
nodepool_prev_release_test.go:59: Validating all Nodes have the synced labels and taints
nodepool_test.go:395: Successfully waited for NodePool e2e-clusters-6s5sx/node-pool-lwnv4-wvn7j to have correct status in 0s
nodepool_prev_release_test.go:33: Starting NodePoolPrevReleaseCreateTest.
nodepool_prev_release_test.go:33: Starting NodePoolPrevReleaseCreateTest.
nodepool_test.go:348: NodePool version is outside supported skew, validating condition only (skipping node readiness check)
nodepool_test.go:395: Successfully waited for NodePool e2e-clusters-6s5sx/node-pool-lwnv4-x5ctv to have correct status in 3s
nodepool_mirrorconfigs_test.go:60: Starting test MirrorConfigsTest
nodepool_imagetype_test.go:50: Starting test NodePoolImageTypeTest
util.go:565: Successfully waited for 0 nodes to become ready for NodePool e2e-clusters-5zhpc/node-pool-jxk2c-us-east-1c in 25ms
util.go:284: Successfully waited for kubeconfig to be published for HostedCluster e2e-clusters-5zhpc/node-pool-jxk2c in 0s
util.go:301: Successfully waited for kubeconfig secret to have data in 0s
util.go:4095: Successfully validated configuration authentication status consistency across HCP, HC, and guest cluster
util.go:284: Successfully waited for kubeconfig to be published for HostedCluster e2e-clusters-5zhpc/node-pool-jxk2c in 0s
util.go:301: Successfully waited for kubeconfig secret to have data in 0s
util.go:363: Successfully waited for a successful connection to the guest API server in 0s
control_plane_upgrade_test.go:25: Starting control plane upgrade test. FromImage: registry.build01.ci.openshift.org/ci-op-gll2w6iq/release@sha256:1791cec1bd6882825904d2d2c135d668576192bfe610f267741116db9795d984, toImage: registry.build01.ci.openshift.org/ci-op-gll2w6iq/release@sha256:45b9a6649d7f4418c1b97767dc4cd2853b7d412de2db90a974eb319999aa510e
hypershift_framework.go:430: Successfully created hostedcluster e2e-clusters-czjrc/control-plane-upgrade-frbgn in 22s
util.go:284: Successfully waited for kubeconfig to be published for HostedCluster e2e-clusters-czjrc/control-plane-upgrade-frbgn in 2m0.025s
util.go:301: Successfully waited for kubeconfig secret to have data in 0s
eventually.go:104: Failed to get *v1.SelfSubjectReview: Post "https://api-control-plane-upgrade-frbgn.service.ci.hypershift.devcluster.openshift.com:443/apis/authentication.k8s.io/v1/selfsubjectreviews": dial tcp: lookup api-control-plane-upgrade-frbgn.service.ci.hypershift.devcluster.openshift.com on 172.30.0.10:53: no such host
eventually.go:104: Failed to get *v1.SelfSubjectReview: Post "https://api-control-plane-upgrade-frbgn.service.ci.hypershift.devcluster.openshift.com:443/apis/authentication.k8s.io/v1/selfsubjectreviews": dial tcp 52.6.53.165:443: i/o timeout
eventually.go:104: Failed to get *v1.SelfSubjectReview: Post "https://api-control-plane-upgrade-frbgn.service.ci.hypershift.devcluster.openshift.com:443/apis/authentication.k8s.io/v1/selfsubjectreviews": dial tcp 35.168.172.37:443: i/o timeout
eventually.go:104: Failed to get *v1.SelfSubjectReview: Post "https://api-control-plane-upgrade-frbgn.service.ci.hypershift.devcluster.openshift.com:443/apis/authentication.k8s.io/v1/selfsubjectreviews": dial tcp 34.239.60.34:443: connect: connection refused
eventually.go:104: Failed to get *v1.SelfSubjectReview: Post "https://api-control-plane-upgrade-frbgn.service.ci.hypershift.devcluster.openshift.com:443/apis/authentication.k8s.io/v1/selfsubjectreviews": dial tcp 52.6.53.165:443: i/o timeout
eventually.go:104: Failed to get *v1.SelfSubjectReview: Post "https://api-control-plane-upgrade-frbgn.service.ci.hypershift.devcluster.openshift.com:443/apis/authentication.k8s.io/v1/selfsubjectreviews": dial tcp 52.6.53.165:443: connect: connection refused
eventually.go:104: Failed to get *v1.SelfSubjectReview: Post "https://api-control-plane-upgrade-frbgn.service.ci.hypershift.devcluster.openshift.com:443/apis/authentication.k8s.io/v1/selfsubjectreviews": dial tcp 34.239.60.34:443: connect: connection refused
eventually.go:104: Failed to get *v1.SelfSubjectReview: Post "https://api-control-plane-upgrade-frbgn.service.ci.hypershift.devcluster.openshift.com:443/apis/authentication.k8s.io/v1/selfsubjectreviews": dial tcp 52.6.53.165:443: connect: connection refused
eventually.go:104: Failed to get *v1.SelfSubjectReview: Post "https://api-control-plane-upgrade-frbgn.service.ci.hypershift.devcluster.openshift.com:443/apis/authentication.k8s.io/v1/selfsubjectreviews": dial tcp 35.168.172.37:443: connect: connection refused
eventually.go:104: Failed to get *v1.SelfSubjectReview: Post "https://api-control-plane-upgrade-frbgn.service.ci.hypershift.devcluster.openshift.com:443/apis/authentication.k8s.io/v1/selfsubjectreviews": dial tcp 52.6.53.165:443: connect: connection refused
eventually.go:104: Failed to get *v1.SelfSubjectReview: Post "https://api-control-plane-upgrade-frbgn.service.ci.hypershift.devcluster.openshift.com:443/apis/authentication.k8s.io/v1/selfsubjectreviews": dial tcp 35.168.172.37:443: connect: connection refused
eventually.go:104: Failed to get *v1.SelfSubjectReview: Post "https://api-control-plane-upgrade-frbgn.service.ci.hypershift.devcluster.openshift.com:443/apis/authentication.k8s.io/v1/selfsubjectreviews": dial tcp 52.6.53.165:443: connect: connection refused
eventually.go:104: Failed to get *v1.SelfSubjectReview: Post "https://api-control-plane-upgrade-frbgn.service.ci.hypershift.devcluster.openshift.com:443/apis/authentication.k8s.io/v1/selfsubjectreviews": dial tcp 34.239.60.34:443: connect: connection refused
eventually.go:104: Failed to get *v1.SelfSubjectReview: Post "https://api-control-plane-upgrade-frbgn.service.ci.hypershift.devcluster.openshift.com:443/apis/authentication.k8s.io/v1/selfsubjectreviews": dial tcp 52.6.53.165:443: connect: connection refused
eventually.go:104: Failed to get *v1.SelfSubjectReview: Post "https://api-control-plane-upgrade-frbgn.service.ci.hypershift.devcluster.openshift.com:443/apis/authentication.k8s.io/v1/selfsubjectreviews": dial tcp 35.168.172.37:443: connect: connection refused
eventually.go:104: Failed to get *v1.SelfSubjectReview: Post "https://api-control-plane-upgrade-frbgn.service.ci.hypershift.devcluster.openshift.com:443/apis/authentication.k8s.io/v1/selfsubjectreviews": dial tcp 52.6.53.165:443: connect: connection refused
eventually.go:104: Failed to get *v1.SelfSubjectReview: Post "https://api-control-plane-upgrade-frbgn.service.ci.hypershift.devcluster.openshift.com:443/apis/authentication.k8s.io/v1/selfsubjectreviews": dial tcp 35.168.172.37:443: connect: connection refused
eventually.go:104: Failed to get *v1.SelfSubjectReview: Post "https://api-control-plane-upgrade-frbgn.service.ci.hypershift.devcluster.openshift.com:443/apis/authentication.k8s.io/v1/selfsubjectreviews": dial tcp 52.6.53.165:443: connect: connection refused
eventually.go:104: Failed to get *v1.SelfSubjectReview: Post "https://api-control-plane-upgrade-frbgn.service.ci.hypershift.devcluster.openshift.com:443/apis/authentication.k8s.io/v1/selfsubjectreviews": dial tcp 34.239.60.34:443: connect: connection refused
eventually.go:104: Failed to get *v1.SelfSubjectReview: Post "https://api-control-plane-upgrade-frbgn.service.ci.hypershift.devcluster.openshift.com:443/apis/authentication.k8s.io/v1/selfsubjectreviews": dial tcp 35.168.172.37:443: connect: connection refused
util.go:363: Successfully waited for a successful connection to the guest API server in 3m15.025s
util.go:565: Successfully waited for 2 nodes to become ready in 7m21s
util.go:598: Successfully waited for HostedCluster e2e-clusters-czjrc/control-plane-upgrade-frbgn to rollout in 3m42s
util.go:2949: Successfully waited for HostedCluster e2e-clusters-czjrc/control-plane-upgrade-frbgn to have valid conditions in 0s
util.go:565: Successfully waited for 2 nodes to become ready for NodePool e2e-clusters-czjrc/control-plane-upgrade-frbgn-us-east-1b in 25ms
util.go:284: Successfully waited for kubeconfig to be published for HostedCluster e2e-clusters-czjrc/control-plane-upgrade-frbgn in 0s
util.go:301: Successfully waited for kubeconfig secret to have data in 0s
util.go:284: Successfully waited for kubeconfig to be published for HostedCluster e2e-clusters-czjrc/control-plane-upgrade-frbgn in 0s
util.go:301: Successfully waited for kubeconfig secret to have data in 0s
util.go:4095: Successfully validated configuration authentication status consistency across HCP, HC, and guest cluster
util.go:284: Successfully waited for kubeconfig to be published for HostedCluster e2e-clusters-czjrc/control-plane-upgrade-frbgn in 0s
util.go:301: Successfully waited for kubeconfig secret to have data in 0s
util.go:363: Successfully waited for a successful connection to the guest API server in 0s
control_plane_upgrade_test.go:52: Updating cluster image. Image: registry.build01.ci.openshift.org/ci-op-gll2w6iq/release@sha256:45b9a6649d7f4418c1b97767dc4cd2853b7d412de2db90a974eb319999aa510e
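Note: "Updating cluster image" corresponds to bumping the HostedCluster's release image so the hosted control plane rolls out the new payload, after which the rollout and condition waits above repeat against the new version. A sketch of that update as a merge patch on spec.release.image follows; it uses a dynamic client with the hypershift.openshift.io/v1beta1 group, the image digest is the toImage from the log, and the kubeconfig path is hypothetical rather than the test's wiring.

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/runtime/schema"
	"k8s.io/apimachinery/pkg/types"
	"k8s.io/client-go/dynamic"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Hypothetical management-cluster kubeconfig path.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/tmp/mgmt.kubeconfig")
	if err != nil {
		panic(err)
	}
	client, err := dynamic.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	hostedClusters := schema.GroupVersionResource{
		Group: "hypershift.openshift.io", Version: "v1beta1", Resource: "hostedclusters",
	}
	// Target release image copied from the control_plane_upgrade_test.go log line above.
	image := "registry.build01.ci.openshift.org/ci-op-gll2w6iq/release@sha256:45b9a6649d7f4418c1b97767dc4cd2853b7d412de2db90a974eb319999aa510e"
	patch := []byte(fmt.Sprintf(`{"spec":{"release":{"image":%q}}}`, image))
	_, err = client.Resource(hostedClusters).Namespace("e2e-clusters-czjrc").Patch(
		context.Background(), "control-plane-upgrade-frbgn",
		types.MergePatchType, patch, metav1.PatchOptions{})
	if err != nil {
		panic(err)
	}
}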