Failed Tests
hypershift_framework.go:430: Successfully created hostedcluster e2e-clusters-g62pw/autoscaling-ztpwj in 39s
util.go:284: Successfully waited for kubeconfig to be published for HostedCluster e2e-clusters-g62pw/autoscaling-ztpwj in 1m48s
util.go:301: Successfully waited for kubeconfig secret to have data in 0s
eventually.go:104: Failed to get *v1.SelfSubjectReview: Post "https://api-autoscaling-ztpwj.service.ci.hypershift.devcluster.openshift.com:443/apis/authentication.k8s.io/v1/selfsubjectreviews": dial tcp: lookup api-autoscaling-ztpwj.service.ci.hypershift.devcluster.openshift.com on 172.30.0.10:53: no such host
eventually.go:104: Failed to get *v1.SelfSubjectReview: Post "https://api-autoscaling-ztpwj.service.ci.hypershift.devcluster.openshift.com:443/apis/authentication.k8s.io/v1/selfsubjectreviews": dial tcp 44.205.155.97:443: i/o timeout
eventually.go:104: Failed to get *v1.SelfSubjectReview: Post "https://api-autoscaling-ztpwj.service.ci.hypershift.devcluster.openshift.com:443/apis/authentication.k8s.io/v1/selfsubjectreviews": dial tcp 100.51.145.47:443: i/o timeout
eventually.go:104: Failed to get *v1.SelfSubjectReview: Post "https://api-autoscaling-ztpwj.service.ci.hypershift.devcluster.openshift.com:443/apis/authentication.k8s.io/v1/selfsubjectreviews": dial tcp 44.205.155.97:443: connect: connection refused
util.go:363: Successfully waited for a successful connection to the guest API server in 1m36.05s
util.go:565: Successfully waited for 1 nodes to become ready in 6m39s
util.go:598: Successfully waited for HostedCluster e2e-clusters-g62pw/autoscaling-ztpwj to rollout in 3m36s
util.go:2949: Successfully waited for HostedCluster e2e-clusters-g62pw/autoscaling-ztpwj to have valid conditions in 0s
util.go:565: Successfully waited for 1 nodes to become ready for NodePool e2e-clusters-g62pw/autoscaling-ztpwj-us-east-1a in 25ms
util.go:284: Successfully waited for kubeconfig to be published for HostedCluster e2e-clusters-g62pw/autoscaling-ztpwj in 0s
util.go:301: Successfully waited for kubeconfig secret to have data in 0s
util.go:284: Successfully waited for kubeconfig to be published for HostedCluster e2e-clusters-g62pw/autoscaling-ztpwj in 0s
util.go:301: Successfully waited for kubeconfig secret to have data in 0s
util.go:4095: Successfully validated configuration authentication status consistency across HCP, HC, and guest cluster
util.go:284: Successfully waited for kubeconfig to be published for HostedCluster e2e-clusters-g62pw/autoscaling-ztpwj in 0s
util.go:301: Successfully waited for kubeconfig secret to have data in 0s
util.go:363: Successfully waited for a successful connection to the guest API server in 0s
util.go:565: Successfully waited for 1 nodes to become ready in 0s
autoscaling_test.go:118: Enabled autoscaling. Namespace: e2e-clusters-g62pw, name: autoscaling-ztpwj-us-east-1a, min: 1, max: 3
autoscaling_test.go:137: Created workload. Node: ip-10-0-13-109.ec2.internal, memcapacity: 14918832Ki
util.go:565: Successfully waited for 3 nodes to become ready in 5m9s
autoscaling_test.go:157: Deleted workload
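The workload above is sized from the node's reported memory capacity ("14918832Ki") so that its replicas cannot co-locate and the autoscaler must grow the NodePool toward max=3. A sketch of that sizing arithmetic; the 60% fraction and helper names are assumptions for illustration, not the test's actual code:

```go
// Size a scale-out workload from a node's Kubernetes memory quantity.
package main

import (
	"fmt"
	"strconv"
	"strings"
)

// parseKi converts a Kubernetes "<n>Ki" quantity to bytes.
func parseKi(q string) (int64, error) {
	n, err := strconv.ParseInt(strings.TrimSuffix(q, "Ki"), 10, 64)
	if err != nil {
		return 0, err
	}
	return n * 1024, nil
}

// perReplicaRequest sizes each replica at ~60% of node capacity so no
// two replicas fit on one node, forcing one new node per extra replica.
func perReplicaRequest(capacityBytes int64) int64 {
	return capacityBytes * 6 / 10
}

func main() {
	capBytes, err := parseKi("14918832Ki")
	if err != nil {
		panic(err)
	}
	fmt.Printf("node capacity: %d bytes, per-replica request: %d bytes\n",
		capBytes, perReplicaRequest(capBytes))
}
```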
create_cluster_test.go:2492: Sufficient zones available for InfrastructureAvailabilityPolicy HighlyAvailable
hypershift_framework.go:430: Successfully created hostedcluster e2e-clusters-pbp8c/create-cluster-sxfb5 in 44s
util.go:284: Successfully waited for kubeconfig to be published for HostedCluster e2e-clusters-pbp8c/create-cluster-sxfb5 in 1m42s
util.go:301: Successfully waited for kubeconfig secret to have data in 0s
eventually.go:104: Failed to get *v1.SelfSubjectReview: Post "https://api-create-cluster-sxfb5.service.ci.hypershift.devcluster.openshift.com:443/apis/authentication.k8s.io/v1/selfsubjectreviews": dial tcp: lookup api-create-cluster-sxfb5.service.ci.hypershift.devcluster.openshift.com on 172.30.0.10:53: no such host
eventually.go:104: Failed to get *v1.SelfSubjectReview: Post "https://api-create-cluster-sxfb5.service.ci.hypershift.devcluster.openshift.com:443/apis/authentication.k8s.io/v1/selfsubjectreviews": dial tcp 54.205.246.134:443: i/o timeout
eventually.go:104: Failed to get *v1.SelfSubjectReview: Post "https://api-create-cluster-sxfb5.service.ci.hypershift.devcluster.openshift.com:443/apis/authentication.k8s.io/v1/selfsubjectreviews": dial tcp 52.20.77.192:443: i/o timeout
eventually.go:104: Failed to get *v1.SelfSubjectReview: Post "https://api-create-cluster-sxfb5.service.ci.hypershift.devcluster.openshift.com:443/apis/authentication.k8s.io/v1/selfsubjectreviews": dial tcp 54.205.246.134:443: connect: connection refused
eventually.go:104: Failed to get *v1.SelfSubjectReview: Post "https://api-create-cluster-sxfb5.service.ci.hypershift.devcluster.openshift.com:443/apis/authentication.k8s.io/v1/selfsubjectreviews": dial tcp 52.20.77.192:443: connect: connection refused
eventually.go:104: Failed to get *v1.SelfSubjectReview: Post "https://api-create-cluster-sxfb5.service.ci.hypershift.devcluster.openshift.com:443/apis/authentication.k8s.io/v1/selfsubjectreviews": dial tcp 54.205.246.134:443: connect: connection refused
eventually.go:104: Failed to get *v1.SelfSubjectReview: Post "https://api-create-cluster-sxfb5.service.ci.hypershift.devcluster.openshift.com:443/apis/authentication.k8s.io/v1/selfsubjectreviews": dial tcp 52.22.101.233:443: connect: connection refused
eventually.go:104: Failed to get *v1.SelfSubjectReview: Post "https://api-create-cluster-sxfb5.service.ci.hypershift.devcluster.openshift.com:443/apis/authentication.k8s.io/v1/selfsubjectreviews": dial tcp 54.205.246.134:443: connect: connection refused
eventually.go:104: Failed to get *v1.SelfSubjectReview: Post "https://api-create-cluster-sxfb5.service.ci.hypershift.devcluster.openshift.com:443/apis/authentication.k8s.io/v1/selfsubjectreviews": dial tcp 52.20.77.192:443: connect: connection refused
eventually.go:104: Failed to get *v1.SelfSubjectReview: Post "https://api-create-cluster-sxfb5.service.ci.hypershift.devcluster.openshift.com:443/apis/authentication.k8s.io/v1/selfsubjectreviews": dial tcp 54.205.246.134:443: connect: connection refused
util.go:363: Successfully waited for a successful connection to the guest API server in 2m6.025s
util.go:565: Successfully waited for 3 nodes to become ready in 7m36s
util.go:598: Successfully waited for HostedCluster e2e-clusters-pbp8c/create-cluster-sxfb5 to rollout in 3m42s
util.go:2949: Successfully waited for HostedCluster e2e-clusters-pbp8c/create-cluster-sxfb5 to have valid conditions in 0s
util.go:565: Successfully waited for 1 nodes to become ready for NodePool e2e-clusters-pbp8c/create-cluster-sxfb5-us-east-1a in 25ms
util.go:565: Successfully waited for 1 nodes to become ready for NodePool e2e-clusters-pbp8c/create-cluster-sxfb5-us-east-1b in 0s
util.go:565: Successfully waited for 1 nodes to become ready for NodePool e2e-clusters-pbp8c/create-cluster-sxfb5-us-east-1c in 0s
util.go:284: Successfully waited for kubeconfig to be published for HostedCluster e2e-clusters-pbp8c/create-cluster-sxfb5 in 0s
util.go:301: Successfully waited for kubeconfig secret to have data in 0s
util.go:284: Successfully waited for kubeconfig to be published for HostedCluster e2e-clusters-pbp8c/create-cluster-sxfb5 in 0s
util.go:301: Successfully waited for kubeconfig secret to have data in 0s
util.go:4095: Successfully validated configuration authentication status consistency across HCP, HC, and guest cluster
util.go:284: Successfully waited for kubeconfig to be published for HostedCluster e2e-clusters-pbp8c/create-cluster-sxfb5 in 0s
util.go:301: Successfully waited for kubeconfig secret to have data in 0s
util.go:363: Successfully waited for a successful connection to the guest API server in 0s
create_cluster_test.go:2532: fetching mgmt kubeconfig
util.go:284: Successfully waited for kubeconfig to be published for HostedCluster e2e-clusters-pbp8c/create-cluster-sxfb5 in 0s
util.go:301: Successfully waited for kubeconfig secret to have data in 0s
util.go:284: Successfully waited for kubeconfig to be published for HostedCluster e2e-clusters-pbp8c/create-cluster-sxfb5 in 0s
util.go:301: Successfully waited for kubeconfig secret to have data in 0s
util.go:363: Successfully waited for a successful connection to the guest API server in 0s
util.go:1928: NodePool replicas: 1, Available nodes: 3
control_plane_pki_operator.go:95: generating new break-glass credentials for more than one signer
pki.go:76: loading certificate/key pair from disk for signer customer-break-glass, use $REGENERATE_PKI to generate new ones
control_plane_pki_operator.go:204: creating CSR "2fbcfhrjy2g20rxj56a6dveuixomz66dnhfs4lal5y81" for signer "customer-break-glass", requesting client auth usages
control_plane_pki_operator.go:214: creating CSRA e2e-clusters-pbp8c-create-cluster-sxfb5/2fbcfhrjy2g20rxj56a6dveuixomz66dnhfs4lal5y81 to trigger automatic approval of the CSR
control_plane_pki_operator.go:221: Successfully waited for CSR "2fbcfhrjy2g20rxj56a6dveuixomz66dnhfs4lal5y81" to be approved and signed in 3s
control_plane_pki_operator.go:133: validating that the client certificate provides the appropriate access
control_plane_pki_operator.go:119: amending the existing kubeconfig to use break-glass client certificate credentials
control_plane_pki_operator.go:136: issuing SSR to identify the subject we are given using the client certificate
control_plane_pki_operator.go:156: ensuring that the SSR identifies the client certificate as having system:masters power and correct username
pki.go:76: loading certificate/key pair from disk for signer sre-break-glass, use $REGENERATE_PKI to generate new ones
control_plane_pki_operator.go:204: creating CSR "28ikgw0tim6738yxvzu7xd7obgwqlt68giqylh4yq6f" for signer "sre-break-glass", requesting client auth usages
control_plane_pki_operator.go:214: creating CSRA e2e-clusters-pbp8c-create-cluster-sxfb5/28ikgw0tim6738yxvzu7xd7obgwqlt68giqylh4yq6f to trigger automatic approval of the CSR
control_plane_pki_operator.go:221: Successfully waited for CSR "28ikgw0tim6738yxvzu7xd7obgwqlt68giqylh4yq6f" to be approved and signed in 3s
control_plane_pki_operator.go:133: validating that the client certificate provides the appropriate access
control_plane_pki_operator.go:119: amending the existing kubeconfig to use break-glass client certificate credentials
control_plane_pki_operator.go:136: issuing SSR to identify the subject we are given using the client certificate
control_plane_pki_operator.go:156: ensuring that the SSR identifies the client certificate as having system:masters power and correct username
control_plane_pki_operator.go:99: revoking the "customer-break-glass" signer
pki.go:76: loading certificate/key pair from disk for signer customer-break-glass, use $REGENERATE_PKI to generate new ones
control_plane_pki_operator.go:256: creating CRR e2e-clusters-pbp8c-create-cluster-sxfb5/2fbcfhrjy2g20rxj56a6dveuixomz66dnhfs4lal5y81 to trigger signer certificate revocation
control_plane_pki_operator.go:263: Successfully waited for CRR e2e-clusters-pbp8c-create-cluster-sxfb5/2fbcfhrjy2g20rxj56a6dveuixomz66dnhfs4lal5y81 to complete in 2m9s
control_plane_pki_operator.go:276: creating a client using a certificate from the revoked signer
control_plane_pki_operator.go:119: amending the existing kubeconfig to use break-glass client certificate credentials
control_plane_pki_operator.go:279: issuing SSR to confirm that we're not authorized to contact the server
control_plane_pki_operator.go:102: ensuring the break-glass credentials from "sre-break-glass" signer still work
pki.go:76: loading certificate/key pair from disk for signer sre-break-glass, use $REGENERATE_PKI to generate new ones
control_plane_pki_operator.go:133: validating that the client certificate provides the appropriate access
control_plane_pki_operator.go:119: amending the existing kubeconfig to use break-glass client certificate credentials
control_plane_pki_operator.go:136: issuing SSR to identify the subject we are given using the client certificate
control_plane_pki_operator.go:156: ensuring that the SSR identifies the client certificate as having system:masters power and correct username
control_plane_pki_operator.go:66: Grabbing customer break-glass credentials from client certificate secret e2e-clusters-pbp8c-create-cluster-sxfb5/customer-system-admin-client-cert-key
control_plane_pki_operator.go:133: validating that the client certificate provides the appropriate access
control_plane_pki_operator.go:119: amending the existing kubeconfig to use break-glass client certificate credentials
control_plane_pki_operator.go:136: issuing SSR to identify the subject we are given using the client certificate
control_plane_pki_operator.go:156: ensuring that the SSR identifies the client certificate as having system:masters power and correct username
pki.go:76: loading certificate/key pair from disk for signer customer-break-glass, use $REGENERATE_PKI to generate new ones
control_plane_pki_operator.go:204: creating CSR "2pylgj4p42h2kcbmcg1s47dyqjjy8i1skom6o5pyicr1" for signer "customer-break-glass", requesting client auth usages
control_plane_pki_operator.go:214: creating CSRA e2e-clusters-pbp8c-create-cluster-sxfb5/2pylgj4p42h2kcbmcg1s47dyqjjy8i1skom6o5pyicr1 to trigger automatic approval of the CSR
control_plane_pki_operator.go:221: Successfully waited for CSR "2pylgj4p42h2kcbmcg1s47dyqjjy8i1skom6o5pyicr1" to be approved and signed in 3s
control_plane_pki_operator.go:133: validating that the client certificate provides the appropriate access
control_plane_pki_operator.go:119: amending the existing kubeconfig to use break-glass client certificate credentials
control_plane_pki_operator.go:136: issuing SSR to identify the subject we are given using the client certificate
control_plane_pki_operator.go:156: ensuring that the SSR identifies the client certificate as having system:masters power and correct username
pki.go:76: loading certificate/key pair from disk for signer customer-break-glass, use $REGENERATE_PKI to generate new ones
control_plane_pki_operator.go:168: creating invalid CSR "aw6nmivlm7164xg0ie925e48jyvn5svvc1o2uvjushx" for signer "hypershift.openshift.io/e2e-clusters-pbp8c-create-cluster-sxfb5.customer-break-glass", requesting client auth usages
control_plane_pki_operator.go:178: creating CSRA e2e-clusters-pbp8c-create-cluster-sxfb5/aw6nmivlm7164xg0ie925e48jyvn5svvc1o2uvjushx to trigger automatic approval of the CSR
control_plane_pki_operator.go:184: Successfully waited for CSR "aw6nmivlm7164xg0ie925e48jyvn5svvc1o2uvjushx" to have invalid CN exposed in status in 3s
pki.go:76: loading certificate/key pair from disk for signer customer-break-glass, use $REGENERATE_PKI to generate new ones
control_plane_pki_operator.go:256: creating CRR e2e-clusters-pbp8c-create-cluster-sxfb5/32v9tne5h02zqgsyjfii19vrayrksg2evets0wp5csp0 to trigger signer certificate revocation
control_plane_pki_operator.go:263: Successfully waited for CRR e2e-clusters-pbp8c-create-cluster-sxfb5/32v9tne5h02zqgsyjfii19vrayrksg2evets0wp5csp0 to complete in 2m6s
control_plane_pki_operator.go:276: creating a client using a certificate from the revoked signer
control_plane_pki_operator.go:119: amending the existing kubeconfig to use break-glass client certificate credentials
control_plane_pki_operator.go:279: issuing SSR to confirm that we're not authorized to contact the server
control_plane_pki_operator.go:66: Grabbing SRE break-glass credentials from client certificate secret e2e-clusters-pbp8c-create-cluster-sxfb5/sre-system-admin-client-cert-key
control_plane_pki_operator.go:133: validating that the client certificate provides the appropriate access
control_plane_pki_operator.go:119: amending the existing kubeconfig to use break-glass client certificate credentials
control_plane_pki_operator.go:136: issuing SSR to identify the subject we are given using the client certificate
control_plane_pki_operator.go:156: ensuring that the SSR identifies the client certificate as having system:masters power and correct username
pki.go:76: loading certificate/key pair from disk for signer sre-break-glass, use $REGENERATE_PKI to generate new ones
control_plane_pki_operator.go:204: creating CSR "25luhadjzhedzzhdi26yjahomhvl1g7li2icfpxoyobn" for signer "sre-break-glass", requesting client auth usages
control_plane_pki_operator.go:214: creating CSRA e2e-clusters-pbp8c-create-cluster-sxfb5/25luhadjzhedzzhdi26yjahomhvl1g7li2icfpxoyobn to trigger automatic approval of the CSR
control_plane_pki_operator.go:221: Successfully waited for CSR "25luhadjzhedzzhdi26yjahomhvl1g7li2icfpxoyobn" to be approved and signed in 3s
control_plane_pki_operator.go:133: validating that the client certificate provides the appropriate access
control_plane_pki_operator.go:119: amending the existing kubeconfig to use break-glass client certificate credentials
control_plane_pki_operator.go:136: issuing SSR to identify the subject we are given using the client certificate
control_plane_pki_operator.go:156: ensuring that the SSR identifies the client certificate as having system:masters power and correct username
pki.go:76: loading certificate/key pair from disk for signer sre-break-glass, use $REGENERATE_PKI to generate new ones
control_plane_pki_operator.go:168: creating invalid CSR "1lfxbd5t8rnmu6y9ablb4iriz9vbbkz4ohawhubibyf5" for signer "hypershift.openshift.io/e2e-clusters-pbp8c-create-cluster-sxfb5.sre-break-glass", requesting client auth usages
control_plane_pki_operator.go:178: creating CSRA e2e-clusters-pbp8c-create-cluster-sxfb5/1lfxbd5t8rnmu6y9ablb4iriz9vbbkz4ohawhubibyf5 to trigger automatic approval of the CSR
control_plane_pki_operator.go:184: Successfully waited for CSR "1lfxbd5t8rnmu6y9ablb4iriz9vbbkz4ohawhubibyf5" to have invalid CN exposed in status in 3s
pki.go:76: loading certificate/key pair from disk for signer sre-break-glass, use $REGENERATE_PKI to generate new ones
control_plane_pki_operator.go:256: creating CRR e2e-clusters-pbp8c-create-cluster-sxfb5/1hofib8t8jf0bbmel64l0ck5e7xxwlhmgbxd0v64g82i to trigger signer certificate revocation
control_plane_pki_operator.go:263: Successfully waited for CRR e2e-clusters-pbp8c-create-cluster-sxfb5/1hofib8t8jf0bbmel64l0ck5e7xxwlhmgbxd0v64g82i to complete in 2m6s
control_plane_pki_operator.go:276: creating a client using the a certificate from the revoked signer
control_plane_pki_operator.go:119: amending the existing kubeconfig to use break-glass client certificate credentials
control_plane_pki_operator.go:279: issuing SSR to confirm that we're not authorized to contact the server
util.go:169: failed to patch object create-cluster-sxfb5, will retry: HostedCluster.hypershift.openshift.io "create-cluster-sxfb5" is invalid: [spec: Invalid value: "object": Services is immutable. Changes might result in unpredictable and disruptive behavior., spec.services[0].servicePublishingStrategy: Invalid value: "object": nodePort is required when type is NodePort, and forbidden otherwise, spec.services[0].servicePublishingStrategy: Invalid value: "object": only route is allowed when type is Route, and forbidden otherwise]
util.go:169: failed to patch object create-cluster-sxfb5, will retry: HostedCluster.hypershift.openshift.io "create-cluster-sxfb5" is invalid: spec.controllerAvailabilityPolicy: Invalid value: "string": ControllerAvailabilityPolicy is immutable
util.go:169: failed to patch object create-cluster-sxfb5, will retry: HostedCluster.hypershift.openshift.io "create-cluster-sxfb5" is invalid: spec.capabilities: Invalid value: "object": Capabilities is immutable. Changes might result in unpredictable and disruptive behavior.
util.go:2193: Generating custom certificate with DNS name api-custom-cert-create-cluster-sxfb5.service.ci.hypershift.devcluster.openshift.com
util.go:2198: Creating custom certificate secret
util.go:2214: Updating hosted cluster with KubeAPIDNSName and KAS custom serving cert
util.go:2250: Getting custom kubeconfig client
util.go:247: Successfully waited for KAS custom kubeconfig to be published for HostedCluster e2e-clusters-pbp8c/create-cluster-sxfb5 in 3s
util.go:264: Successfully waited for KAS custom kubeconfig secret to have data in 0s
util.go:2255: waiting for the KubeAPIDNSName to be reconciled
util.go:247: Successfully waited for KAS custom kubeconfig to be published for HostedCluster e2e-clusters-pbp8c/create-cluster-sxfb5 in 0s
util.go:264: Successfully waited for KAS custom kubeconfig secret to have data in 0s
util.go:2267: Finding the external name destination for the KAS Service
util.go:2292: service custom DNS name not found, using the control plane endpoint
util.go:2292: service custom DNS name not found, using the control plane endpoint
util.go:2292: service custom DNS name not found, using the control plane endpoint
util.go:2304: Creating a new KAS Service to be used by the external-dns deployment in CI with the custom DNS name api-custom-cert-create-cluster-sxfb5.service.ci.hypershift.devcluster.openshift.com
util.go:2326: [2026-01-10T17:10:22Z] Waiting until the URL is resolvable: api-custom-cert-create-cluster-sxfb5.service.ci.hypershift.devcluster.openshift.com
util.go:2326: [2026-01-10T17:10:32Z] Waiting until the URL is resolvable: api-custom-cert-create-cluster-sxfb5.service.ci.hypershift.devcluster.openshift.com
util.go:2326: [2026-01-10T17:10:42Z] Waiting until the URL is resolvable: api-custom-cert-create-cluster-sxfb5.service.ci.hypershift.devcluster.openshift.com
util.go:2326: [2026-01-10T17:10:52Z] Waiting until the URL is resolvable: api-custom-cert-create-cluster-sxfb5.service.ci.hypershift.devcluster.openshift.com
util.go:2326: [2026-01-10T17:11:02Z] Waiting until the URL is resolvable: api-custom-cert-create-cluster-sxfb5.service.ci.hypershift.devcluster.openshift.com
util.go:2331: resolved the custom DNS name after 40.06131509s
util.go:2336: Waiting until the KAS Deployment is ready
util.go:2431: Successfully waited for the KAS custom kubeconfig secret to be deleted from HC Namespace in 5s
util.go:2447: Successfully waited for the KAS custom kubeconfig secret to be deleted from HCP Namespace in 0s
util.go:2477: Successfully waited for the KAS custom kubeconfig status to be removed in 0s
util.go:2510: Deleting custom certificate secret
util.go:2351: Checking CustomAdminKubeconfigStatus are present
util.go:2359: Checking CustomAdminKubeconfigs are present
util.go:2372: Checking CustomAdminKubeconfig reaches the KAS
util.go:2390: Successfully verified custom kubeconfig can reach KAS
util.go:2396: Checking CustomAdminKubeconfig Infrastructure status is updated
util.go:2397: Successfully waited for a successful connection to the custom DNS guest API server in 0s
util.go:2465: Checking CustomAdminKubeconfig are removed
util.go:2494: Checking CustomAdminKubeconfigStatus are removed
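"Generating custom certificate with DNS name ..." above means minting a serving certificate whose SAN carries the custom KAS DNS name, which is what the custom-cert secret holds. A self-signed sketch under illustrative parameters (key type, validity window, and the example hostname are assumptions):

```go
// Self-sign a serving certificate valid for a custom DNS name.
package main

import (
	"crypto/ecdsa"
	"crypto/elliptic"
	"crypto/rand"
	"crypto/x509"
	"crypto/x509/pkix"
	"fmt"
	"math/big"
	"time"
)

// newServingCert self-signs a certificate whose SAN is the given DNS name.
func newServingCert(dnsName string) (*x509.Certificate, error) {
	key, err := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
	if err != nil {
		return nil, err
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{CommonName: dnsName},
		DNSNames:     []string{dnsName},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		return nil, err
	}
	return x509.ParseCertificate(der)
}

func main() {
	cert, err := newServingCert("api-custom-cert-example.service.example.com")
	if err != nil {
		panic(err)
	}
	fmt.Println("SANs:", cert.DNSNames)
}
```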
util.go:2097: Waiting for ovnkube-node DaemonSet to be ready with 3 nodes
util.go:2139: DaemonSet ovnkube-node ready: 3/3 pods
util.go:2147: ✓ ovnkube-node DaemonSet is ready
util.go:2097: Waiting for global-pull-secret-syncer DaemonSet to be ready with 3 nodes
util.go:2135: DaemonSet global-pull-secret-syncer not ready: 2/3 pods ready
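The DaemonSet waits above ("3/3 pods" ready vs "2/3 pods ready") reduce to comparing desired against ready pod counts. A minimal sketch of that readiness predicate; the struct here is a stand-in mirroring two DaemonSetStatus fields, not the real k8s API type:

```go
// Readiness predicate for a DaemonSet: every scheduled pod must be ready.
package main

import "fmt"

type daemonSetStatus struct {
	DesiredNumberScheduled int32
	NumberReady            int32
}

// ready reports whether all desired pods are scheduled and ready.
func ready(s daemonSetStatus) bool {
	return s.DesiredNumberScheduled > 0 && s.NumberReady == s.DesiredNumberScheduled
}

func main() {
	fmt.Println(ready(daemonSetStatus{3, 3})) // 3/3 ready, like ovnkube-node
	fmt.Println(ready(daemonSetStatus{3, 2})) // 2/3 ready, like global-pull-secret-syncer
}
```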
util.go:284: Successfully waited for kubeconfig to be published for HostedCluster e2e-clusters-pbp8c/create-cluster-sxfb5 in 0s
util.go:301: Successfully waited for kubeconfig secret to have data in 0s
util.go:363: Successfully waited for a successful connection to the guest API server in 0s
create_cluster_test.go:2532: fetching mgmt kubeconfig
util.go:284: Successfully waited for kubeconfig to be published for HostedCluster e2e-clusters-pbp8c/create-cluster-sxfb5 in 0s
util.go:301: Successfully waited for kubeconfig secret to have data in 0s
util.go:284: Successfully waited for kubeconfig to be published for HostedCluster e2e-clusters-pbp8c/create-cluster-sxfb5 in 0s
util.go:301: Successfully waited for kubeconfig secret to have data in 0s
util.go:363: Successfully waited for a successful connection to the guest API server in 0s
util.go:1928: NodePool replicas: 1, Available nodes: 3
util.go:169: failed to patch object create-cluster-sxfb5, will retry: HostedCluster.hypershift.openshift.io "create-cluster-sxfb5" is invalid: spec.capabilities: Invalid value: "object": Capabilities is immutable. Changes might result in unpredictable and disruptive behavior.
util.go:169: failed to patch object create-cluster-sxfb5, will retry: HostedCluster.hypershift.openshift.io "create-cluster-sxfb5" is invalid: [spec: Invalid value: "object": Services is immutable. Changes might result in unpredictable and disruptive behavior., spec.services[0].servicePublishingStrategy: Invalid value: "object": nodePort is required when type is NodePort, and forbidden otherwise, spec.services[0].servicePublishingStrategy: Invalid value: "object": only route is allowed when type is Route, and forbidden otherwise]
util.go:169: failed to patch object create-cluster-sxfb5, will retry: HostedCluster.hypershift.openshift.io "create-cluster-sxfb5" is invalid: spec.controllerAvailabilityPolicy: Invalid value: "string": ControllerAvailabilityPolicy is immutable
util.go:2193: Generating custom certificate with DNS name api-custom-cert-create-cluster-sxfb5.service.ci.hypershift.devcluster.openshift.com
util.go:2198: Creating custom certificate secret
util.go:2214: Updating hosted cluster with KubeAPIDNSName and KAS custom serving cert
util.go:2250: Getting custom kubeconfig client
util.go:247: Successfully waited for KAS custom kubeconfig to be published for HostedCluster e2e-clusters-pbp8c/create-cluster-sxfb5 in 3s
util.go:264: Successfully waited for KAS custom kubeconfig secret to have data in 0s
util.go:2255: waiting for the KubeAPIDNSName to be reconciled
util.go:247: Successfully waited for KAS custom kubeconfig to be published for HostedCluster e2e-clusters-pbp8c/create-cluster-sxfb5 in 0s
util.go:264: Successfully waited for KAS custom kubeconfig secret to have data in 0s
util.go:2267: Finding the external name destination for the KAS Service
util.go:2292: service custom DNS name not found, using the control plane endpoint
util.go:2292: service custom DNS name not found, using the control plane endpoint
util.go:2292: service custom DNS name not found, using the control plane endpoint
util.go:2304: Creating a new KAS Service to be used by the external-dns deployment in CI with the custom DNS name api-custom-cert-create-cluster-sxfb5.service.ci.hypershift.devcluster.openshift.com
util.go:2326: [2026-01-10T17:10:22Z] Waiting until the URL is resolvable: api-custom-cert-create-cluster-sxfb5.service.ci.hypershift.devcluster.openshift.com
util.go:2326: [2026-01-10T17:10:32Z] Waiting until the URL is resolvable: api-custom-cert-create-cluster-sxfb5.service.ci.hypershift.devcluster.openshift.com
util.go:2326: [2026-01-10T17:10:42Z] Waiting until the URL is resolvable: api-custom-cert-create-cluster-sxfb5.service.ci.hypershift.devcluster.openshift.com
util.go:2326: [2026-01-10T17:10:52Z] Waiting until the URL is resolvable: api-custom-cert-create-cluster-sxfb5.service.ci.hypershift.devcluster.openshift.com
util.go:2326: [2026-01-10T17:11:02Z] Waiting until the URL is resolvable: api-custom-cert-create-cluster-sxfb5.service.ci.hypershift.devcluster.openshift.com
util.go:2331: resolved the custom DNS name after 40.06131509s
util.go:2336: Waiting until the KAS Deployment is ready
util.go:2431: Successfully waited for the KAS custom kubeconfig secret to be deleted from HC Namespace in 5s
util.go:2447: Successfully waited for the KAS custom kubeconfig secret to be deleted from HCP Namespace in 0s
util.go:2477: Successfully waited for the KAS custom kubeconfig status to be removed in 0s
util.go:2510: Deleting custom certificate secret
util.go:2359: Checking CustomAdminKubeconfigs are present
util.go:2396: Checking CustomAdminKubeconfig Infrastructure status is updated
util.go:2397: Successfully waited for a successful connection to the custom DNS guest API server in 0s
util.go:2465: Checking CustomAdminKubeconfig are removed
util.go:2372: Checking CustomAdminKubeconfig reaches the KAS
util.go:2390: Successfully verified custom kubeconfig can reach KAS
util.go:2351: Checking CustomAdminKubeconfigStatus are present
util.go:2494: Checking CustomAdminKubeconfigStatus are removed
util.go:2097: Waiting for ovnkube-node DaemonSet to be ready with 3 nodes
util.go:2139: DaemonSet ovnkube-node ready: 3/3 pods
util.go:2147: ✓ ovnkube-node DaemonSet is ready
util.go:2097: Waiting for global-pull-secret-syncer DaemonSet to be ready with 3 nodes
util.go:2135: DaemonSet global-pull-secret-syncer not ready: 2/3 pods ready
pki.go:76: loading certificate/key pair from disk for signer customer-break-glass, use $REGENERATE_PKI to generate new ones
control_plane_pki_operator.go:204: creating CSR "2pylgj4p42h2kcbmcg1s47dyqjjy8i1skom6o5pyicr1" for signer "customer-break-glass", requesting client auth usages
control_plane_pki_operator.go:214: creating CSRA e2e-clusters-pbp8c-create-cluster-sxfb5/2pylgj4p42h2kcbmcg1s47dyqjjy8i1skom6o5pyicr1 to trigger automatic approval of the CSR
control_plane_pki_operator.go:221: Successfully waited for CSR "2pylgj4p42h2kcbmcg1s47dyqjjy8i1skom6o5pyicr1" to be approved and signed in 3s
control_plane_pki_operator.go:133: validating that the client certificate provides the appropriate access
control_plane_pki_operator.go:119: amending the existing kubeconfig to use break-glass client certificate credentials
control_plane_pki_operator.go:136: issuing SSR to identify the subject we are given using the client certificate
control_plane_pki_operator.go:156: ensuring that the SSR identifies the client certificate as having system:masters power and correct username
pki.go:76: loading certificate/key pair from disk for signer customer-break-glass, use $REGENERATE_PKI to generate new ones
control_plane_pki_operator.go:168: creating invalid CSR "aw6nmivlm7164xg0ie925e48jyvn5svvc1o2uvjushx" for signer "hypershift.openshift.io/e2e-clusters-pbp8c-create-cluster-sxfb5.customer-break-glass", requesting client auth usages
control_plane_pki_operator.go:178: creating CSRA e2e-clusters-pbp8c-create-cluster-sxfb5/aw6nmivlm7164xg0ie925e48jyvn5svvc1o2uvjushx to trigger automatic approval of the CSR
control_plane_pki_operator.go:184: Successfully waited for CSR "aw6nmivlm7164xg0ie925e48jyvn5svvc1o2uvjushx" to have invalid CN exposed in status in 3s
pki.go:76: loading certificate/key pair from disk for signer customer-break-glass, use $REGENERATE_PKI to generate new ones
control_plane_pki_operator.go:256: creating CRR e2e-clusters-pbp8c-create-cluster-sxfb5/32v9tne5h02zqgsyjfii19vrayrksg2evets0wp5csp0 to trigger signer certificate revocation
control_plane_pki_operator.go:263: Successfully waited for CRR e2e-clusters-pbp8c-create-cluster-sxfb5/32v9tne5h02zqgsyjfii19vrayrksg2evets0wp5csp0 to complete in 2m6s
control_plane_pki_operator.go:276: creating a client using a certificate from the revoked signer
control_plane_pki_operator.go:119: amending the existing kubeconfig to use break-glass client certificate credentials
control_plane_pki_operator.go:279: issuing SSR to confirm that we're not authorized to contact the server
control_plane_pki_operator.go:66: Grabbing customer break-glass credentials from client certificate secret e2e-clusters-pbp8c-create-cluster-sxfb5/customer-system-admin-client-cert-key
control_plane_pki_operator.go:133: validating that the client certificate provides the appropriate access
control_plane_pki_operator.go:119: amending the existing kubeconfig to use break-glass client certificate credentials
control_plane_pki_operator.go:136: issuing SSR to identify the subject we are given using the client certificate
control_plane_pki_operator.go:156: ensuring that the SSR identifies the client certificate as having system:masters power and correct username
control_plane_pki_operator.go:95: generating new break-glass credentials for more than one signer
pki.go:76: loading certificate/key pair from disk for signer customer-break-glass, use $REGENERATE_PKI to generate new ones
control_plane_pki_operator.go:204: creating CSR "2fbcfhrjy2g20rxj56a6dveuixomz66dnhfs4lal5y81" for signer "customer-break-glass", requesting client auth usages
control_plane_pki_operator.go:214: creating CSRA e2e-clusters-pbp8c-create-cluster-sxfb5/2fbcfhrjy2g20rxj56a6dveuixomz66dnhfs4lal5y81 to trigger automatic approval of the CSR
control_plane_pki_operator.go:221: Successfully waited for CSR "2fbcfhrjy2g20rxj56a6dveuixomz66dnhfs4lal5y81" to be approved and signed in 3s
control_plane_pki_operator.go:133: validating that the client certificate provides the appropriate access
control_plane_pki_operator.go:119: amending the existing kubeconfig to use break-glass client certificate credentials
control_plane_pki_operator.go:136: issuing SSR to identify the subject we are given using the client certificate
control_plane_pki_operator.go:156: ensuring that the SSR identifies the client certificate as having system:masters power and correct username
pki.go:76: loading certificate/key pair from disk for signer sre-break-glass, use $REGENERATE_PKI to generate new ones
control_plane_pki_operator.go:204: creating CSR "28ikgw0tim6738yxvzu7xd7obgwqlt68giqylh4yq6f" for signer "sre-break-glass", requesting client auth usages
control_plane_pki_operator.go:214: creating CSRA e2e-clusters-pbp8c-create-cluster-sxfb5/28ikgw0tim6738yxvzu7xd7obgwqlt68giqylh4yq6f to trigger automatic approval of the CSR
control_plane_pki_operator.go:221: Successfully waited for CSR "28ikgw0tim6738yxvzu7xd7obgwqlt68giqylh4yq6f" to be approved and signed in 3s
control_plane_pki_operator.go:133: validating that the client certificate provides the appropriate access
control_plane_pki_operator.go:119: amending the existing kubeconfig to use break-glass client certificate credentials
control_plane_pki_operator.go:136: issuing SSR to identify the subject we are given using the client certificate
control_plane_pki_operator.go:156: ensuring that the SSR identifies the client certificate as having system:masters power and correct username
control_plane_pki_operator.go:99: revoking the "customer-break-glass" signer
pki.go:76: loading certificate/key pair from disk for signer customer-break-glass, use $REGENERATE_PKI to generate new ones
control_plane_pki_operator.go:256: creating CRR e2e-clusters-pbp8c-create-cluster-sxfb5/2fbcfhrjy2g20rxj56a6dveuixomz66dnhfs4lal5y81 to trigger signer certificate revocation
control_plane_pki_operator.go:263: Successfully waited for CRR e2e-clusters-pbp8c-create-cluster-sxfb5/2fbcfhrjy2g20rxj56a6dveuixomz66dnhfs4lal5y81 to complete in 2m9s
control_plane_pki_operator.go:276: creating a client using a certificate from the revoked signer
control_plane_pki_operator.go:119: amending the existing kubeconfig to use break-glass client certificate credentials
control_plane_pki_operator.go:279: issuing SSR to confirm that we're not authorized to contact the server
control_plane_pki_operator.go:102: ensuring the break-glass credentials from "sre-break-glass" signer still work
pki.go:76: loading certificate/key pair from disk for signer sre-break-glass, use $REGENERATE_PKI to generate new ones
control_plane_pki_operator.go:133: validating that the client certificate provides the appropriate access
control_plane_pki_operator.go:119: amending the existing kubeconfig to use break-glass client certificate credentials
control_plane_pki_operator.go:136: issuing SSR to identify the subject we are given using the client certificate
control_plane_pki_operator.go:156: ensuring that the SSR identifies the client certificate as having system:masters power and correct username
pki.go:76: loading certificate/key pair from disk for signer sre-break-glass, use $REGENERATE_PKI to generate new ones
control_plane_pki_operator.go:204: creating CSR "25luhadjzhedzzhdi26yjahomhvl1g7li2icfpxoyobn" for signer "sre-break-glass", requesting client auth usages
control_plane_pki_operator.go:214: creating CSRA e2e-clusters-pbp8c-create-cluster-sxfb5/25luhadjzhedzzhdi26yjahomhvl1g7li2icfpxoyobn to trigger automatic approval of the CSR
control_plane_pki_operator.go:221: Successfully waited for CSR "25luhadjzhedzzhdi26yjahomhvl1g7li2icfpxoyobn" to be approved and signed in 3s
control_plane_pki_operator.go:133: validating that the client certificate provides the appropriate access
control_plane_pki_operator.go:119: amending the existing kubeconfig to use break-glass client certificate credentials
control_plane_pki_operator.go:136: issuing SSR to identify the subject we are given using the client certificate
control_plane_pki_operator.go:156: ensuring that the SSR identifies the client certificate as having system:masters power and correct username
pki.go:76: loading certificate/key pair from disk for signer sre-break-glass, use $REGENERATE_PKI to generate new ones
control_plane_pki_operator.go:168: creating invalid CSR "1lfxbd5t8rnmu6y9ablb4iriz9vbbkz4ohawhubibyf5" for signer "hypershift.openshift.io/e2e-clusters-pbp8c-create-cluster-sxfb5.sre-break-glass", requesting client auth usages
control_plane_pki_operator.go:178: creating CSRA e2e-clusters-pbp8c-create-cluster-sxfb5/1lfxbd5t8rnmu6y9ablb4iriz9vbbkz4ohawhubibyf5 to trigger automatic approval of the CSR
control_plane_pki_operator.go:184: Successfully waited for CSR "1lfxbd5t8rnmu6y9ablb4iriz9vbbkz4ohawhubibyf5" to have invalid CN exposed in status in 3s
pki.go:76: loading certificate/key pair from disk for signer sre-break-glass, use $REGENERATE_PKI to generate new ones
control_plane_pki_operator.go:256: creating CRR e2e-clusters-pbp8c-create-cluster-sxfb5/1hofib8t8jf0bbmel64l0ck5e7xxwlhmgbxd0v64g82i to trigger signer certificate revocation
control_plane_pki_operator.go:263: Successfully waited for CRR e2e-clusters-pbp8c-create-cluster-sxfb5/1hofib8t8jf0bbmel64l0ck5e7xxwlhmgbxd0v64g82i to complete in 2m6s
control_plane_pki_operator.go:276: creating a client using a certificate from the revoked signer
control_plane_pki_operator.go:119: amending the existing kubeconfig to use break-glass client certificate credentials
control_plane_pki_operator.go:279: issuing SSR to confirm that we're not authorized to contact the server
control_plane_pki_operator.go:66: Grabbing SRE break-glass credentials from client certificate secret e2e-clusters-pbp8c-create-cluster-sxfb5/sre-system-admin-client-cert-key
control_plane_pki_operator.go:133: validating that the client certificate provides the appropriate access
control_plane_pki_operator.go:119: amending the existing kubeconfig to use break-glass client certificate credentials
control_plane_pki_operator.go:136: issuing SSR to identify the subject we are given using the client certificate
control_plane_pki_operator.go:156: ensuring that the SSR identifies the client certificate as having system:masters power and correct username
util.go:284: Successfully waited for kubeconfig to be published for HostedCluster e2e-clusters-pbp8c/create-cluster-sxfb5 in 1m42s
util.go:301: Successfully waited for kubeconfig secret to have data in 0s
eventually.go:104: Failed to get *v1.SelfSubjectReview: Post "https://api-create-cluster-sxfb5.service.ci.hypershift.devcluster.openshift.com:443/apis/authentication.k8s.io/v1/selfsubjectreviews": dial tcp: lookup api-create-cluster-sxfb5.service.ci.hypershift.devcluster.openshift.com on 172.30.0.10:53: no such host
eventually.go:104: Failed to get *v1.SelfSubjectReview: Post "https://api-create-cluster-sxfb5.service.ci.hypershift.devcluster.openshift.com:443/apis/authentication.k8s.io/v1/selfsubjectreviews": dial tcp 54.205.246.134:443: i/o timeout
eventually.go:104: Failed to get *v1.SelfSubjectReview: Post "https://api-create-cluster-sxfb5.service.ci.hypershift.devcluster.openshift.com:443/apis/authentication.k8s.io/v1/selfsubjectreviews": dial tcp 52.20.77.192:443: i/o timeout
eventually.go:104: Failed to get *v1.SelfSubjectReview: Post "https://api-create-cluster-sxfb5.service.ci.hypershift.devcluster.openshift.com:443/apis/authentication.k8s.io/v1/selfsubjectreviews": dial tcp 54.205.246.134:443: connect: connection refused
eventually.go:104: Failed to get *v1.SelfSubjectReview: Post "https://api-create-cluster-sxfb5.service.ci.hypershift.devcluster.openshift.com:443/apis/authentication.k8s.io/v1/selfsubjectreviews": dial tcp 52.20.77.192:443: connect: connection refused
eventually.go:104: Failed to get *v1.SelfSubjectReview: Post "https://api-create-cluster-sxfb5.service.ci.hypershift.devcluster.openshift.com:443/apis/authentication.k8s.io/v1/selfsubjectreviews": dial tcp 54.205.246.134:443: connect: connection refused
eventually.go:104: Failed to get *v1.SelfSubjectReview: Post "https://api-create-cluster-sxfb5.service.ci.hypershift.devcluster.openshift.com:443/apis/authentication.k8s.io/v1/selfsubjectreviews": dial tcp 52.22.101.233:443: connect: connection refused
eventually.go:104: Failed to get *v1.SelfSubjectReview: Post "https://api-create-cluster-sxfb5.service.ci.hypershift.devcluster.openshift.com:443/apis/authentication.k8s.io/v1/selfsubjectreviews": dial tcp 54.205.246.134:443: connect: connection refused
eventually.go:104: Failed to get *v1.SelfSubjectReview: Post "https://api-create-cluster-sxfb5.service.ci.hypershift.devcluster.openshift.com:443/apis/authentication.k8s.io/v1/selfsubjectreviews": dial tcp 52.20.77.192:443: connect: connection refused
eventually.go:104: Failed to get *v1.SelfSubjectReview: Post "https://api-create-cluster-sxfb5.service.ci.hypershift.devcluster.openshift.com:443/apis/authentication.k8s.io/v1/selfsubjectreviews": dial tcp 54.205.246.134:443: connect: connection refused
util.go:363: Successfully waited for a successful connection to the guest API server in 2m6.025s
util.go:565: Successfully waited for 3 nodes to become ready in 7m36s
util.go:598: Successfully waited for HostedCluster e2e-clusters-pbp8c/create-cluster-sxfb5 to rollout in 3m42s
util.go:2949: Successfully waited for HostedCluster e2e-clusters-pbp8c/create-cluster-sxfb5 to have valid conditions in 0s
util.go:284: Successfully waited for kubeconfig to be published for HostedCluster e2e-clusters-pbp8c/create-cluster-sxfb5 in 0s
util.go:301: Successfully waited for kubeconfig secret to have data in 0s
util.go:284: Successfully waited for kubeconfig to be published for HostedCluster e2e-clusters-pbp8c/create-cluster-sxfb5 in 0s
util.go:301: Successfully waited for kubeconfig secret to have data in 0s
util.go:565: Successfully waited for 1 nodes to become ready for NodePool e2e-clusters-pbp8c/create-cluster-sxfb5-us-east-1a in 25ms
util.go:565: Successfully waited for 1 nodes to become ready for NodePool e2e-clusters-pbp8c/create-cluster-sxfb5-us-east-1b in 0s
util.go:565: Successfully waited for 1 nodes to become ready for NodePool e2e-clusters-pbp8c/create-cluster-sxfb5-us-east-1c in 0s
util.go:4095: Successfully validated configuration authentication status consistency across HCP, HC, and guest cluster
hypershift_framework.go:430: Successfully created hostedcluster e2e-clusters-vckr6/custom-config-lrhnq in 25s
journals.go:245: Successfully copied machine journals to /logs/artifacts/TestCreateClusterCustomConfig/machine-journals
util.go:284: Successfully waited for kubeconfig to be published for HostedCluster e2e-clusters-vckr6/custom-config-lrhnq in 1m33s
util.go:301: Successfully waited for kubeconfig secret to have data in 0s
eventually.go:104: Failed to get *v1.SelfSubjectReview: Post "https://api-custom-config-lrhnq.service.ci.hypershift.devcluster.openshift.com:443/apis/authentication.k8s.io/v1/selfsubjectreviews": dial tcp: lookup api-custom-config-lrhnq.service.ci.hypershift.devcluster.openshift.com on 172.30.0.10:53: no such host
eventually.go:104: Failed to get *v1.SelfSubjectReview: Post "https://api-custom-config-lrhnq.service.ci.hypershift.devcluster.openshift.com:443/apis/authentication.k8s.io/v1/selfsubjectreviews": dial tcp 98.87.224.188:443: i/o timeout
eventually.go:104: Failed to get *v1.SelfSubjectReview: Post "https://api-custom-config-lrhnq.service.ci.hypershift.devcluster.openshift.com:443/apis/authentication.k8s.io/v1/selfsubjectreviews": dial tcp 100.50.71.25:443: i/o timeout
eventually.go:104: Failed to get *v1.SelfSubjectReview: Post "https://api-custom-config-lrhnq.service.ci.hypershift.devcluster.openshift.com:443/apis/authentication.k8s.io/v1/selfsubjectreviews": dial tcp 34.196.235.16:443: connect: connection refused
eventually.go:104: Failed to get *v1.SelfSubjectReview: Post "https://api-custom-config-lrhnq.service.ci.hypershift.devcluster.openshift.com:443/apis/authentication.k8s.io/v1/selfsubjectreviews": dial tcp 100.50.71.25:443: connect: connection refused
eventually.go:104: Failed to get *v1.SelfSubjectReview: Post "https://api-custom-config-lrhnq.service.ci.hypershift.devcluster.openshift.com:443/apis/authentication.k8s.io/v1/selfsubjectreviews": dial tcp 98.87.224.188:443: connect: connection refused
eventually.go:104: Failed to get *v1.SelfSubjectReview: Post "https://api-custom-config-lrhnq.service.ci.hypershift.devcluster.openshift.com:443/apis/authentication.k8s.io/v1/selfsubjectreviews": dial tcp 34.196.235.16:443: connect: connection refused
eventually.go:104: Failed to get *v1.SelfSubjectReview: Post "https://api-custom-config-lrhnq.service.ci.hypershift.devcluster.openshift.com:443/apis/authentication.k8s.io/v1/selfsubjectreviews": dial tcp 98.87.224.188:443: connect: connection refused
eventually.go:104: Failed to get *v1.SelfSubjectReview: Post "https://api-custom-config-lrhnq.service.ci.hypershift.devcluster.openshift.com:443/apis/authentication.k8s.io/v1/selfsubjectreviews": dial tcp 34.196.235.16:443: connect: connection refused
eventually.go:104: Failed to get *v1.SelfSubjectReview: Post "https://api-custom-config-lrhnq.service.ci.hypershift.devcluster.openshift.com:443/apis/authentication.k8s.io/v1/selfsubjectreviews": dial tcp 98.87.224.188:443: connect: connection refused
eventually.go:104: Failed to get *v1.SelfSubjectReview: Post "https://api-custom-config-lrhnq.service.ci.hypershift.devcluster.openshift.com:443/apis/authentication.k8s.io/v1/selfsubjectreviews": dial tcp 34.196.235.16:443: connect: connection refused
eventually.go:104: Failed to get *v1.SelfSubjectReview: Post "https://api-custom-config-lrhnq.service.ci.hypershift.devcluster.openshift.com:443/apis/authentication.k8s.io/v1/selfsubjectreviews": dial tcp 98.87.224.188:443: connect: connection refused
eventually.go:104: Failed to get *v1.SelfSubjectReview: Post "https://api-custom-config-lrhnq.service.ci.hypershift.devcluster.openshift.com:443/apis/authentication.k8s.io/v1/selfsubjectreviews": dial tcp 34.196.235.16:443: connect: connection refused
eventually.go:104: Failed to get *v1.SelfSubjectReview: Post "https://api-custom-config-lrhnq.service.ci.hypershift.devcluster.openshift.com:443/apis/authentication.k8s.io/v1/selfsubjectreviews": dial tcp 100.50.71.25:443: connect: connection refused
util.go:363: Successfully waited for a successful connection to the guest API server in 2m42.05s
util.go:565: Successfully waited for 2 nodes to become ready in 7m21s
util.go:598: Successfully waited for HostedCluster e2e-clusters-vckr6/custom-config-lrhnq to rollout in 7m24s
util.go:2949: Successfully waited for HostedCluster e2e-clusters-vckr6/custom-config-lrhnq to have valid conditions in 0s
util.go:565: Successfully waited for 2 nodes to become ready for NodePool e2e-clusters-vckr6/custom-config-lrhnq-us-east-1b in 25ms
util.go:284: Successfully waited for kubeconfig to be published for HostedCluster e2e-clusters-vckr6/custom-config-lrhnq in 0s
util.go:301: Successfully waited for kubeconfig secret to have data in 0s
util.go:284: Successfully waited for kubeconfig to be published for HostedCluster e2e-clusters-vckr6/custom-config-lrhnq in 0s
util.go:301: Successfully waited for kubeconfig secret to have data in 0s
util.go:4095: Successfully validated configuration authentication status consistency across HCP, HC, and guest cluster
util.go:284: Successfully waited for kubeconfig to be published for HostedCluster e2e-clusters-vckr6/custom-config-lrhnq in 0s
util.go:301: Successfully waited for kubeconfig secret to have data in 0s
util.go:363: Successfully waited for a successful connection to the guest API server in 0s
util.go:284: Successfully waited for kubeconfig to be published for HostedCluster e2e-clusters-vckr6/custom-config-lrhnq in 0s
util.go:301: Successfully waited for kubeconfig secret to have data in 0s
util.go:284: Successfully waited for kubeconfig to be published for HostedCluster e2e-clusters-vckr6/custom-config-lrhnq in 0s
util.go:301: Successfully waited for kubeconfig secret to have data in 0s
oauth.go:170: Found OAuth route oauth-custom-config-lrhnq.service.ci.hypershift.devcluster.openshift.com
oauth.go:192: Observed OAuth route oauth-custom-config-lrhnq.service.ci.hypershift.devcluster.openshift.com to be healthy
oauth.go:151: OAuth token retrieved successfully for user kubeadmin
util.go:284: Successfully waited for kubeconfig to be published for HostedCluster e2e-clusters-vckr6/custom-config-lrhnq in 0s
util.go:301: Successfully waited for kubeconfig secret to have data in 0s
oauth.go:170: Found OAuth route oauth-custom-config-lrhnq.service.ci.hypershift.devcluster.openshift.com
oauth.go:192: Observed OAuth route oauth-custom-config-lrhnq.service.ci.hypershift.devcluster.openshift.com to be healthy
oauth.go:151: OAuth token retrieved successfully for user testuser
util.go:3459: Successfully waited for service account default/default to be provisioned in 0s
eventually.go:104: Failed to get *v1.ServiceAccount: serviceaccounts "default" not found
util.go:3482: Successfully waited for service account default/test-namespace to be provisioned in 10s
util.go:284: Successfully waited for kubeconfig to be published for HostedCluster e2e-clusters-vckr6/custom-config-lrhnq in 0s
util.go:301: Successfully waited for kubeconfig secret to have data in 0s
util.go:363: Successfully waited for a successful connection to the guest API server in 0s
util.go:3597: Checking that Tuned resource type does not exist in guest cluster
util.go:3610: Checking that Profile resource type does not exist in guest cluster
util.go:3622: Checking that no tuned DaemonSet exists in guest cluster
util.go:3631: Checking that no tuned-related ConfigMaps exist in guest cluster
util.go:3656: NodeTuning capability disabled validation completed successfully
util.go:3937: Updating HostedCluster e2e-clusters-vckr6/custom-config-lrhnq with custom OVN internal subnets
util.go:3956: Validating CNO conditions on HostedControlPlane
util.go:3958: Successfully waited for HostedControlPlane e2e-clusters-vckr6-custom-config-lrhnq/custom-config-lrhnq to have healthy CNO conditions in 0s
util.go:2949: Successfully waited for HostedCluster e2e-clusters-vckr6/custom-config-lrhnq to have valid conditions in 0s
util.go:3985: Successfully waited for Network.operator.openshift.io/cluster in guest cluster to reflect the custom subnet changes in 3s
util.go:4015: Successfully waited for Network.config.openshift.io/cluster in guest cluster to be available in 0s
util.go:284: Successfully waited for kubeconfig to be published for HostedCluster e2e-clusters-vckr6/custom-config-lrhnq in 0s
util.go:301: Successfully waited for kubeconfig secret to have data in 0s
util.go:363: Successfully waited for a successful connection to the guest API server in 0s
util.go:3224: Successfully waited for HostedCluster e2e-clusters-vckr6/custom-config-lrhnq to have valid Status.Payload in 0s
util.go:1181: Connecting to kubernetes endpoint on: https://172.20.0.1:6443
util.go:2527: Checking that all ValidatingAdmissionPolicies are present
util.go:2553: Checking Denied KAS Requests for ValidatingAdmissionPolicies
util.go:2569: Checking ClusterOperator status modifications are allowed
util.go:284: Successfully waited for kubeconfig to be published for HostedCluster e2e-clusters-vckr6/custom-config-lrhnq in 0s
util.go:301: Successfully waited for kubeconfig secret to have data in 0s
util.go:363: Successfully waited for a successful connection to the guest API server in 0s
util.go:1181: Connecting to kubernetes endpoint on: https://172.20.0.1:6443
util.go:3224: Successfully waited for HostedCluster e2e-clusters-vckr6/custom-config-lrhnq to have valid Status.Payload in 0s
util.go:2553: Checking Denied KAS Requests for ValidatingAdmissionPolicies
util.go:2569: Checking ClusterOperator status modifications are allowed
util.go:2527: Checking that all ValidatingAdmissionPolicies are present
util.go:284: Successfully waited for kubeconfig to be published for HostedCluster e2e-clusters-vckr6/custom-config-lrhnq in 0s
util.go:301: Successfully waited for kubeconfig secret to have data in 0s
util.go:363: Successfully waited for a successful connection to the guest API server in 0s
util.go:284: Successfully waited for kubeconfig to be published for HostedCluster e2e-clusters-vckr6/custom-config-lrhnq in 0s
util.go:301: Successfully waited for kubeconfig secret to have data in 0s
util.go:3937: Updating HostedCluster e2e-clusters-vckr6/custom-config-lrhnq with custom OVN internal subnets
util.go:3956: Validating CNO conditions on HostedControlPlane
util.go:3958: Successfully waited for HostedControlPlane e2e-clusters-vckr6-custom-config-lrhnq/custom-config-lrhnq to have healthy CNO conditions in 0s
util.go:2949: Successfully waited for HostedCluster e2e-clusters-vckr6/custom-config-lrhnq to have valid conditions in 0s
util.go:3985: Successfully waited for Network.operator.openshift.io/cluster in guest cluster to reflect the custom subnet changes in 3s
util.go:4015: Successfully waited for Network.config.openshift.io/cluster in guest cluster to be available in 0s
util.go:3459: Successfully waited for service account default/default to be provisioned in 0s
eventually.go:104: Failed to get *v1.ServiceAccount: serviceaccounts "default" not found
util.go:3482: Successfully waited for service account default/test-namespace to be provisioned in 10s
util.go:284: Successfully waited for kubeconfig to be published for HostedCluster e2e-clusters-vckr6/custom-config-lrhnq in 0s
util.go:301: Successfully waited for kubeconfig secret to have data in 0s
util.go:363: Successfully waited for a successful connection to the guest API server in 0s
util.go:3597: Checking that Tuned resource type does not exist in guest cluster
util.go:3610: Checking that Profile resource type does not exist in guest cluster
util.go:3622: Checking that no tuned DaemonSet exists in guest cluster
util.go:3631: Checking that no tuned-related ConfigMaps exist in guest cluster
util.go:3656: NodeTuning capability disabled validation completed successfully
util.go:284: Successfully waited for kubeconfig to be published for HostedCluster e2e-clusters-vckr6/custom-config-lrhnq in 0s
util.go:301: Successfully waited for kubeconfig secret to have data in 0s
oauth.go:170: Found OAuth route oauth-custom-config-lrhnq.service.ci.hypershift.devcluster.openshift.com
oauth.go:192: Observed OAuth route oauth-custom-config-lrhnq.service.ci.hypershift.devcluster.openshift.com to be healthy
oauth.go:151: OAuth token retrieved successfully for user kubeadmin
util.go:284: Successfully waited for kubeconfig to be published for HostedCluster e2e-clusters-vckr6/custom-config-lrhnq in 0s
util.go:301: Successfully waited for kubeconfig secret to have data in 0s
oauth.go:170: Found OAuth route oauth-custom-config-lrhnq.service.ci.hypershift.devcluster.openshift.com
oauth.go:192: Observed OAuth route oauth-custom-config-lrhnq.service.ci.hypershift.devcluster.openshift.com to be healthy
oauth.go:151: OAuth token retrieved successfully for user testuser
util.go:284: Successfully waited for kubeconfig to be published for HostedCluster e2e-clusters-vckr6/custom-config-lrhnq in 1m33s
util.go:301: Successfully waited for kubeconfig secret to have data in 0s
eventually.go:104: Failed to get *v1.SelfSubjectReview: Post "https://api-custom-config-lrhnq.service.ci.hypershift.devcluster.openshift.com:443/apis/authentication.k8s.io/v1/selfsubjectreviews": dial tcp: lookup api-custom-config-lrhnq.service.ci.hypershift.devcluster.openshift.com on 172.30.0.10:53: no such host
eventually.go:104: Failed to get *v1.SelfSubjectReview: Post "https://api-custom-config-lrhnq.service.ci.hypershift.devcluster.openshift.com:443/apis/authentication.k8s.io/v1/selfsubjectreviews": dial tcp 98.87.224.188:443: i/o timeout
eventually.go:104: Failed to get *v1.SelfSubjectReview: Post "https://api-custom-config-lrhnq.service.ci.hypershift.devcluster.openshift.com:443/apis/authentication.k8s.io/v1/selfsubjectreviews": dial tcp 100.50.71.25:443: i/o timeout
eventually.go:104: Failed to get *v1.SelfSubjectReview: Post "https://api-custom-config-lrhnq.service.ci.hypershift.devcluster.openshift.com:443/apis/authentication.k8s.io/v1/selfsubjectreviews": dial tcp 34.196.235.16:443: connect: connection refused
eventually.go:104: Failed to get *v1.SelfSubjectReview: Post "https://api-custom-config-lrhnq.service.ci.hypershift.devcluster.openshift.com:443/apis/authentication.k8s.io/v1/selfsubjectreviews": dial tcp 100.50.71.25:443: connect: connection refused
eventually.go:104: Failed to get *v1.SelfSubjectReview: Post "https://api-custom-config-lrhnq.service.ci.hypershift.devcluster.openshift.com:443/apis/authentication.k8s.io/v1/selfsubjectreviews": dial tcp 98.87.224.188:443: connect: connection refused
eventually.go:104: Failed to get *v1.SelfSubjectReview: Post "https://api-custom-config-lrhnq.service.ci.hypershift.devcluster.openshift.com:443/apis/authentication.k8s.io/v1/selfsubjectreviews": dial tcp 34.196.235.16:443: connect: connection refused
eventually.go:104: Failed to get *v1.SelfSubjectReview: Post "https://api-custom-config-lrhnq.service.ci.hypershift.devcluster.openshift.com:443/apis/authentication.k8s.io/v1/selfsubjectreviews": dial tcp 98.87.224.188:443: connect: connection refused
eventually.go:104: Failed to get *v1.SelfSubjectReview: Post "https://api-custom-config-lrhnq.service.ci.hypershift.devcluster.openshift.com:443/apis/authentication.k8s.io/v1/selfsubjectreviews": dial tcp 34.196.235.16:443: connect: connection refused
eventually.go:104: Failed to get *v1.SelfSubjectReview: Post "https://api-custom-config-lrhnq.service.ci.hypershift.devcluster.openshift.com:443/apis/authentication.k8s.io/v1/selfsubjectreviews": dial tcp 98.87.224.188:443: connect: connection refused
eventually.go:104: Failed to get *v1.SelfSubjectReview: Post "https://api-custom-config-lrhnq.service.ci.hypershift.devcluster.openshift.com:443/apis/authentication.k8s.io/v1/selfsubjectreviews": dial tcp 34.196.235.16:443: connect: connection refused
eventually.go:104: Failed to get *v1.SelfSubjectReview: Post "https://api-custom-config-lrhnq.service.ci.hypershift.devcluster.openshift.com:443/apis/authentication.k8s.io/v1/selfsubjectreviews": dial tcp 98.87.224.188:443: connect: connection refused
eventually.go:104: Failed to get *v1.SelfSubjectReview: Post "https://api-custom-config-lrhnq.service.ci.hypershift.devcluster.openshift.com:443/apis/authentication.k8s.io/v1/selfsubjectreviews": dial tcp 34.196.235.16:443: connect: connection refused
eventually.go:104: Failed to get *v1.SelfSubjectReview: Post "https://api-custom-config-lrhnq.service.ci.hypershift.devcluster.openshift.com:443/apis/authentication.k8s.io/v1/selfsubjectreviews": dial tcp 100.50.71.25:443: connect: connection refused
util.go:363: Successfully waited for a successful connection to the guest API server in 2m42.05s
util.go:565: Successfully waited for 2 nodes to become ready in 7m21s
util.go:598: Successfully waited for HostedCluster e2e-clusters-vckr6/custom-config-lrhnq to rollout in 7m24s
util.go:2949: Successfully waited for HostedCluster e2e-clusters-vckr6/custom-config-lrhnq to have valid conditions in 0s
util.go:284: Successfully waited for kubeconfig to be published for HostedCluster e2e-clusters-vckr6/custom-config-lrhnq in 0s
util.go:301: Successfully waited for kubeconfig secret to have data in 0s
util.go:284: Successfully waited for kubeconfig to be published for HostedCluster e2e-clusters-vckr6/custom-config-lrhnq in 0s
util.go:301: Successfully waited for kubeconfig secret to have data in 0s
util.go:565: Successfully waited for 2 nodes to become ready for NodePool e2e-clusters-vckr6/custom-config-lrhnq-us-east-1b in 25ms
util.go:4095: Successfully validated configuration authentication status consistency across HCP, HC, and guest cluster
hypershift_framework.go:430: Successfully created hostedcluster e2e-clusters-w2csv/private-68cb2 in 1m0s
journals.go:245: Successfully copied machine journals to /logs/artifacts/TestCreateClusterPrivate/machine-journals
fixture.go:341: SUCCESS: found no remaining guest resources
hypershift_framework.go:491: Destroyed cluster. Namespace: e2e-clusters-w2csv, name: private-68cb2
hypershift_framework.go:446: archiving /logs/artifacts/TestCreateClusterPrivate/hostedcluster-private-68cb2 to /logs/artifacts/TestCreateClusterPrivate/hostedcluster.tar.gz
util.go:284: Successfully waited for kubeconfig to be published for HostedCluster e2e-clusters-w2csv/private-68cb2 in 1m48s
util.go:301: Successfully waited for kubeconfig secret to have data in 0s
util.go:695: Successfully waited for NodePools for HostedCluster e2e-clusters-w2csv/private-68cb2 to have all of their desired nodes in 9m33s
util.go:598: Successfully waited for HostedCluster e2e-clusters-w2csv/private-68cb2 to rollout in 5m0s
util.go:2949: Successfully waited for HostedCluster e2e-clusters-w2csv/private-68cb2 to have valid conditions in 0s
hypershift_framework.go:430: Successfully created hostedcluster e2e-clusters-lngc7/private-tbh7c in 34s
journals.go:245: Successfully copied machine journals to /logs/artifacts/TestCreateClusterPrivateWithRouteKAS/machine-journals
util.go:284: Successfully waited for kubeconfig to be published for HostedCluster e2e-clusters-lngc7/private-tbh7c in 1m30s
util.go:301: Successfully waited for kubeconfig secret to have data in 0s
util.go:695: Successfully waited for NodePools for HostedCluster e2e-clusters-lngc7/private-tbh7c to have all of their desired nodes in 10m24s
util.go:598: Successfully waited for HostedCluster e2e-clusters-lngc7/private-tbh7c to rollout in 8m54s
util.go:2949: Successfully waited for HostedCluster e2e-clusters-lngc7/private-tbh7c to have valid conditions in 0s
util.go:284: Successfully waited for kubeconfig to be published for HostedCluster e2e-clusters-lngc7/private-tbh7c in 0s
util.go:301: Successfully waited for kubeconfig secret to have data in 0s
util.go:284: Successfully waited for kubeconfig to be published for HostedCluster e2e-clusters-lngc7/private-tbh7c in 0s
util.go:301: Successfully waited for kubeconfig secret to have data in 0s
create_cluster_test.go:2909: Found guest kubeconfig host before switching endpoint access: https://api-private-tbh7c.service.ci.hypershift.devcluster.openshift.com:443
util.go:420: Waiting for guest kubeconfig host to resolve to public address
util.go:426: failed to resolve guest kubeconfig host: lookup api-private-tbh7c.service.ci.hypershift.devcluster.openshift.com on 172.30.0.10:53: no such host
util.go:426: failed to resolve guest kubeconfig host: lookup api-private-tbh7c.service.ci.hypershift.devcluster.openshift.com on 172.30.0.10:53: no such host
util.go:426: failed to resolve guest kubeconfig host: lookup api-private-tbh7c.service.ci.hypershift.devcluster.openshift.com on 172.30.0.10:53: no such host
util.go:426: failed to resolve guest kubeconfig host: lookup api-private-tbh7c.service.ci.hypershift.devcluster.openshift.com on 172.30.0.10:53: no such host
util.go:426: failed to resolve guest kubeconfig host: lookup api-private-tbh7c.service.ci.hypershift.devcluster.openshift.com on 172.30.0.10:53: no such host
util.go:426: failed to resolve guest kubeconfig host: lookup api-private-tbh7c.service.ci.hypershift.devcluster.openshift.com on 172.30.0.10:53: no such host
util.go:426: failed to resolve guest kubeconfig host: lookup api-private-tbh7c.service.ci.hypershift.devcluster.openshift.com on 172.30.0.10:53: no such host
util.go:437: kubeconfig host now resolves to public address
util.go:284: Successfully waited for kubeconfig to be published for HostedCluster e2e-clusters-lngc7/private-tbh7c in 0s
util.go:301: Successfully waited for kubeconfig secret to have data in 0s
create_cluster_test.go:2909: Found guest kubeconfig host before switching endpoint access: https://api-private-tbh7c.service.ci.hypershift.devcluster.openshift.com:443
util.go:420: Waiting for guest kubeconfig host to resolve to private address
util.go:432: kubeconfig host now resolves to private address
util.go:3224: Successfully waited for HostedCluster e2e-clusters-lngc7/private-tbh7c to have valid Status.Payload in 0s
util.go:1181: Connecting to kubernetes endpoint on: https://172.20.0.1:6443
util.go:1181: Connecting to kubernetes endpoint on: https://172.20.0.1:6443
util.go:3224: Successfully waited for HostedCluster e2e-clusters-lngc7/private-tbh7c to have valid Status.Payload in 0s
util.go:284: Successfully waited for kubeconfig to be published for HostedCluster e2e-clusters-lngc7/private-tbh7c in 0s
util.go:301: Successfully waited for kubeconfig secret to have data in 0s
util.go:284: Successfully waited for kubeconfig to be published for HostedCluster e2e-clusters-lngc7/private-tbh7c in 0s
util.go:301: Successfully waited for kubeconfig secret to have data in 0s
hypershift_framework.go:430: Successfully created hostedcluster e2e-clusters-8q8kw/proxy-htmqk in 51s
util.go:284: Successfully waited for kubeconfig to be published for HostedCluster e2e-clusters-8q8kw/proxy-htmqk in 1m36s
util.go:301: Successfully waited for kubeconfig secret to have data in 0s
eventually.go:104: Failed to get *v1.SelfSubjectReview: Post "https://api-proxy-htmqk.service.ci.hypershift.devcluster.openshift.com:443/apis/authentication.k8s.io/v1/selfsubjectreviews": dial tcp: lookup api-proxy-htmqk.service.ci.hypershift.devcluster.openshift.com on 172.30.0.10:53: no such host
eventually.go:104: Failed to get *v1.SelfSubjectReview: Post "https://api-proxy-htmqk.service.ci.hypershift.devcluster.openshift.com:443/apis/authentication.k8s.io/v1/selfsubjectreviews": dial tcp 44.194.171.117:443: i/o timeout
eventually.go:104: Failed to get *v1.SelfSubjectReview: Post "https://api-proxy-htmqk.service.ci.hypershift.devcluster.openshift.com:443/apis/authentication.k8s.io/v1/selfsubjectreviews": dial tcp 44.194.171.117:443: connect: connection refused
eventually.go:104: Failed to get *v1.SelfSubjectReview: Post "https://api-proxy-htmqk.service.ci.hypershift.devcluster.openshift.com:443/apis/authentication.k8s.io/v1/selfsubjectreviews": dial tcp 34.192.250.62:443: connect: connection refused
eventually.go:104: Failed to get *v1.SelfSubjectReview: Post "https://api-proxy-htmqk.service.ci.hypershift.devcluster.openshift.com:443/apis/authentication.k8s.io/v1/selfsubjectreviews": dial tcp 44.194.171.117:443: connect: connection refused
util.go:363: Successfully waited for a successful connection to the guest API server in 2m9.025s
util.go:565: Successfully waited for 2 nodes to become ready in 7m24s
util.go:598: Successfully waited for HostedCluster e2e-clusters-8q8kw/proxy-htmqk to rollout in 3m51s
util.go:2949: Successfully waited for HostedCluster e2e-clusters-8q8kw/proxy-htmqk to have valid conditions in 0s
util.go:565: Successfully waited for 2 nodes to become ready for NodePool e2e-clusters-8q8kw/proxy-htmqk-us-east-1a in 25ms
util.go:284: Successfully waited for kubeconfig to be published for HostedCluster e2e-clusters-8q8kw/proxy-htmqk in 0s
util.go:301: Successfully waited for kubeconfig secret to have data in 0s
util.go:284: Successfully waited for kubeconfig to be published for HostedCluster e2e-clusters-8q8kw/proxy-htmqk in 0s
util.go:301: Successfully waited for kubeconfig secret to have data in 0s
util.go:4095: Successfully validated configuration authentication status consistency across HCP, HC, and guest cluster
util.go:284: Successfully waited for kubeconfig to be published for HostedCluster e2e-clusters-8q8kw/proxy-htmqk in 0s
util.go:301: Successfully waited for kubeconfig secret to have data in 0s
util.go:363: Successfully waited for a successful connection to the guest API server in 0s
util.go:3224: Successfully waited for HostedCluster e2e-clusters-8q8kw/proxy-htmqk to have valid Status.Payload in 0s
util.go:1181: Connecting to kubernetes endpoint on: https://172.20.0.1:6443
util.go:2527: Checking that all ValidatingAdmissionPolicies are present
util.go:2553: Checking Denied KAS Requests for ValidatingAdmissionPolicies
util.go:2569: Checking ClusterOperator status modifications are allowed
util.go:284: Successfully waited for kubeconfig to be published for HostedCluster e2e-clusters-8q8kw/proxy-htmqk in 0s
util.go:301: Successfully waited for kubeconfig secret to have data in 0s
util.go:363: Successfully waited for a successful connection to the guest API server in 0s
util.go:1181: Connecting to kubernetes endpoint on: https://172.20.0.1:6443
util.go:3224: Successfully waited for HostedCluster e2e-clusters-8q8kw/proxy-htmqk to have valid Status.Payload in 0s
util.go:2553: Checking Denied KAS Requests for ValidatingAdmissionPolicies
util.go:2569: Checking ClusterOperator status modifications are allowed
util.go:2527: Checking that all ValidatingAdmissionPolicies are present
requestserving.go:105: Created request serving nodepool clusters/f22fe2c12880412a01ae-mgmt-reqserving-tpt6x
requestserving.go:105: Created request serving nodepool clusters/f22fe2c12880412a01ae-mgmt-reqserving-8kzx8
requestserving.go:113: Created non request serving nodepool clusters/f22fe2c12880412a01ae-mgmt-non-reqserving-xfwkc
requestserving.go:113: Created non request serving nodepool clusters/f22fe2c12880412a01ae-mgmt-non-reqserving-bqfgl
requestserving.go:113: Created non request serving nodepool clusters/f22fe2c12880412a01ae-mgmt-non-reqserving-cr4qv
util.go:565: Successfully waited for 1 nodes to become ready for NodePool clusters/f22fe2c12880412a01ae-mgmt-reqserving-tpt6x in 3m12s
util.go:565: Successfully waited for 1 nodes to become ready for NodePool clusters/f22fe2c12880412a01ae-mgmt-reqserving-8kzx8 in 51s
util.go:565: Successfully waited for 1 nodes to become ready for NodePool clusters/f22fe2c12880412a01ae-mgmt-non-reqserving-xfwkc in 30s
util.go:565: Successfully waited for 1 nodes to become ready for NodePool clusters/f22fe2c12880412a01ae-mgmt-non-reqserving-bqfgl in 21s
util.go:565: Successfully waited for 1 nodes to become ready for NodePool clusters/f22fe2c12880412a01ae-mgmt-non-reqserving-cr4qv in 3s
create_cluster_test.go:2670: Sufficient zones available for InfrastructureAvailabilityPolicy HighlyAvailable
hypershift_framework.go:430: Successfully created hostedcluster e2e-clusters-2fvtt/request-serving-isolation-vfb5z in 15s
journals.go:245: Successfully copied machine journals to /logs/artifacts/TestCreateClusterRequestServingIsolation/machine-journals
util.go:284: Successfully waited for kubeconfig to be published for HostedCluster e2e-clusters-2fvtt/request-serving-isolation-vfb5z in 1m24s
util.go:301: Successfully waited for kubeconfig secret to have data in 0s
eventually.go:104: Failed to get *v1.SelfSubjectReview: Post "https://api-request-serving-isolation-vfb5z.service.ci.hypershift.devcluster.openshift.com:443/apis/authentication.k8s.io/v1/selfsubjectreviews": dial tcp: lookup api-request-serving-isolation-vfb5z.service.ci.hypershift.devcluster.openshift.com on 172.30.0.10:53: no such host
eventually.go:104: Failed to get *v1.SelfSubjectReview: Post "https://api-request-serving-isolation-vfb5z.service.ci.hypershift.devcluster.openshift.com:443/apis/authentication.k8s.io/v1/selfsubjectreviews": dial tcp 98.80.33.228:443: i/o timeout
util.go:363: Successfully waited for a successful connection to the guest API server in 1m29.025s
util.go:565: Successfully waited for 3 nodes to become ready in 8m24s
util.go:598: Successfully waited for HostedCluster e2e-clusters-2fvtt/request-serving-isolation-vfb5z to rollout in 4m54s
util.go:2949: Successfully waited for HostedCluster e2e-clusters-2fvtt/request-serving-isolation-vfb5z to have valid conditions in 57s
util.go:565: Successfully waited for 1 nodes to become ready for NodePool e2e-clusters-2fvtt/request-serving-isolation-vfb5z-us-east-1a in 25ms
util.go:565: Successfully waited for 1 nodes to become ready for NodePool e2e-clusters-2fvtt/request-serving-isolation-vfb5z-us-east-1b in 0s
util.go:565: Successfully waited for 1 nodes to become ready for NodePool e2e-clusters-2fvtt/request-serving-isolation-vfb5z-us-east-1c in 0s
util.go:284: Successfully waited for kubeconfig to be published for HostedCluster e2e-clusters-2fvtt/request-serving-isolation-vfb5z in 0s
util.go:301: Successfully waited for kubeconfig secret to have data in 0s
util.go:284: Successfully waited for kubeconfig to be published for HostedCluster e2e-clusters-2fvtt/request-serving-isolation-vfb5z in 0s
util.go:301: Successfully waited for kubeconfig secret to have data in 0s
util.go:4095: Successfully validated configuration authentication status consistency across HCP, HC, and guest cluster
util.go:284: Successfully waited for kubeconfig to be published for HostedCluster e2e-clusters-2fvtt/request-serving-isolation-vfb5z in 0s
util.go:301: Successfully waited for kubeconfig secret to have data in 0s
util.go:363: Successfully waited for a successful connection to the guest API server in 0s
util.go:284: Successfully waited for kubeconfig to be published for HostedCluster e2e-clusters-2fvtt/request-serving-isolation-vfb5z in 0s
util.go:301: Successfully waited for kubeconfig secret to have data in 0s
util.go:363: Successfully waited for a successful connection to the guest API server in 0s
util.go:3224: Successfully waited for HostedCluster e2e-clusters-2fvtt/request-serving-isolation-vfb5z to have valid Status.Payload in 0s
util.go:822: error: non-fatal, observed FailedScheduling or Preempted event: 0/8 nodes are available: 2 node(s) didn't match pod anti-affinity rules, 6 node(s) didn't match Pod's node affinity/selector. preemption: not eligible due to preemptionPolicy=Never.
util.go:822: error: non-fatal, observed FailedScheduling or Preempted event: 0/8 nodes are available: 2 node(s) didn't match pod anti-affinity rules, 6 node(s) didn't match Pod's node affinity/selector. preemption: not eligible due to preemptionPolicy=Never.
util.go:822: error: non-fatal, observed FailedScheduling or Preempted event: 0/8 nodes are available: 2 node(s) didn't match pod anti-affinity rules, 6 node(s) didn't match Pod's node affinity/selector. preemption: not eligible due to preemptionPolicy=Never.
util.go:822: error: non-fatal, observed FailedScheduling or Preempted event: 0/8 nodes are available: 2 node(s) had untolerated taint {hypershift.openshift.io/request-serving-component: true}, 3 node(s) didn't match Pod's node affinity/selector, 3 node(s) didn't match pod anti-affinity rules. preemption: not eligible due to preemptionPolicy=Never.
util.go:822: error: non-fatal, observed FailedScheduling or Preempted event: 0/8 nodes are available: 2 node(s) had untolerated taint {hypershift.openshift.io/request-serving-component: true}, 3 node(s) didn't match Pod's node affinity/selector, 3 node(s) didn't match pod anti-affinity rules. preemption: not eligible due to preemptionPolicy=Never.
util.go:822: error: non-fatal, observed FailedScheduling or Preempted event: 0/8 nodes are available: 2 node(s) didn't match pod anti-affinity rules, 6 node(s) didn't match Pod's node affinity/selector. preemption: not eligible due to preemptionPolicy=Never.
util.go:822: error: non-fatal, observed FailedScheduling or Preempted event: 0/8 nodes are available: 2 node(s) didn't match pod anti-affinity rules, 6 node(s) didn't match Pod's node affinity/selector. preemption: not eligible due to preemptionPolicy=Never.
util.go:822: error: non-fatal, observed FailedScheduling or Preempted event: 0/8 nodes are available: 2 node(s) didn't match pod anti-affinity rules, 6 node(s) didn't match Pod's node affinity/selector. preemption: not eligible due to preemptionPolicy=Never.
util.go:822: error: non-fatal, observed FailedScheduling or Preempted event: 0/8 nodes are available: 2 node(s) had untolerated taint {hypershift.openshift.io/request-serving-component: true}, 3 node(s) didn't match Pod's node affinity/selector, 3 node(s) didn't match pod anti-affinity rules. preemption: not eligible due to preemptionPolicy=Never.
util.go:822: error: non-fatal, observed FailedScheduling or Preempted event: 0/8 nodes are available: 2 node(s) had untolerated taint {hypershift.openshift.io/request-serving-component: true}, 3 node(s) didn't match Pod's node affinity/selector, 3 node(s) didn't match pod anti-affinity rules. preemption: not eligible due to preemptionPolicy=Never.
util.go:822: error: non-fatal, observed FailedScheduling or Preempted event: 0/8 nodes are available: 2 node(s) had untolerated taint {hypershift.openshift.io/request-serving-component: true}, 3 node(s) didn't match Pod's node affinity/selector, 3 node(s) didn't match pod anti-affinity rules. preemption: not eligible due to preemptionPolicy=Never.
util.go:1181: Connecting to kubernetes endpoint on: https://172.20.0.1:6443
util.go:2527: Checking that all ValidatingAdmissionPolicies are present
util.go:2553: Checking Denied KAS Requests for ValidatingAdmissionPolicies
util.go:2569: Checking ClusterOperator status modifications are allowed
util.go:284: Successfully waited for kubeconfig to be published for HostedCluster e2e-clusters-2fvtt/request-serving-isolation-vfb5z in 0s
util.go:301: Successfully waited for kubeconfig secret to have data in 0s
util.go:363: Successfully waited for a successful connection to the guest API server in 0s
util.go:1181: Connecting to kubernetes endpoint on: https://172.20.0.1:6443
util.go:3224: Successfully waited for HostedCluster e2e-clusters-2fvtt/request-serving-isolation-vfb5z to have valid Status.Payload in 0s
util.go:2553: Checking Denied KAS Requests for ValidatingAdmissionPolicies
util.go:2569: Checking ClusterOperator status modifications are allowed
util.go:2527: Checking that all ValidatingAdmissionPolicies are present
hypershift_framework.go:430: Successfully created hostedcluster e2e-clusters-hbvxj/node-pool-x745j in 20s
nodepool_test.go:150: tests only supported on platform KubeVirt
hypershift_framework.go:430: Successfully created hostedcluster e2e-clusters-j8mz9/node-pool-7hngb in 33s
util.go:284: Successfully waited for kubeconfig to be published for HostedCluster e2e-clusters-hbvxj/node-pool-x745j in 1m36s
util.go:301: Successfully waited for kubeconfig secret to have data in 0s
eventually.go:104: Failed to get *v1.SelfSubjectReview: Post "https://api-node-pool-x745j.service.ci.hypershift.devcluster.openshift.com:443/apis/authentication.k8s.io/v1/selfsubjectreviews": dial tcp: lookup api-node-pool-x745j.service.ci.hypershift.devcluster.openshift.com on 172.30.0.10:53: no such host
eventually.go:104: Failed to get *v1.SelfSubjectReview: Post "https://api-node-pool-x745j.service.ci.hypershift.devcluster.openshift.com:443/apis/authentication.k8s.io/v1/selfsubjectreviews": dial tcp 54.157.109.219:443: i/o timeout
util.go:363: Successfully waited for a successful connection to the guest API server in 1m39.125s
util.go:565: Successfully waited for 0 nodes to become ready in 0s
util.go:2949: Successfully waited for HostedCluster e2e-clusters-hbvxj/node-pool-x745j to have valid conditions in 2m24s
util.go:284: Successfully waited for kubeconfig to be published for HostedCluster e2e-clusters-j8mz9/node-pool-7hngb in 1m33s
util.go:301: Successfully waited for kubeconfig secret to have data in 0s
eventually.go:104: Failed to get *v1.SelfSubjectReview: Post "https://api-node-pool-7hngb.service.ci.hypershift.devcluster.openshift.com:443/apis/authentication.k8s.io/v1/selfsubjectreviews": dial tcp: lookup api-node-pool-7hngb.service.ci.hypershift.devcluster.openshift.com on 172.30.0.10:53: no such host
eventually.go:104: Failed to get *v1.SelfSubjectReview: Post "https://api-node-pool-7hngb.service.ci.hypershift.devcluster.openshift.com:443/apis/authentication.k8s.io/v1/selfsubjectreviews": dial tcp 98.90.51.7:443: i/o timeout
util.go:363: Successfully waited for a successful connection to the guest API server in 1m57.025s
util.go:565: Successfully waited for 0 nodes to become ready in 0s
util.go:2949: Successfully waited for HostedCluster e2e-clusters-j8mz9/node-pool-7hngb to have valid conditions in 2m12s
util.go:565: Successfully waited for 0 nodes to become ready for NodePool e2e-clusters-hbvxj/node-pool-x745j-us-east-1b in 25ms
util.go:284: Successfully waited for kubeconfig to be published for HostedCluster e2e-clusters-hbvxj/node-pool-x745j in 0s
util.go:301: Successfully waited for kubeconfig secret to have data in 0s
util.go:4095: Successfully validated configuration authentication status consistency across HCP, HC, and guest cluster
util.go:284: Successfully waited for kubeconfig to be published for HostedCluster e2e-clusters-hbvxj/node-pool-x745j in 0s
util.go:301: Successfully waited for kubeconfig secret to have data in 0s
util.go:363: Successfully waited for a successful connection to the guest API server in 0s
nodepool_kms_root_volume_test.go:42: Starting test KMSRootVolumeTest
nodepool_kms_root_volume_test.go:54: retrieved KMS ARN: arn:aws:kms:us-east-1:820196288204:key/d3cdd9e0-3fd1-47a4-a559-72ae3672c5a6
util.go:565: Successfully waited for 1 nodes to become ready for NodePool e2e-clusters-hbvxj/node-pool-x745j-test-kms-root-volume in 6m54s
nodepool_test.go:395: Successfully waited for NodePool e2e-clusters-hbvxj/node-pool-x745j-test-kms-root-volume to have correct status in 3s
nodepool_kms_root_volume_test.go:85: instanceID: i-06e377a2deb77cc57
nodepool_test.go:395: Successfully waited for NodePool e2e-clusters-hbvxj/node-pool-x745j-test-kms-root-volume to have correct status in 0s
nodepool_machineconfig_test.go:54: Starting test NodePoolMachineconfigRolloutTest
nodepool_nto_machineconfig_test.go:67: Starting test NTOMachineConfigRolloutTest
nodepool_nto_machineconfig_test.go:67: Starting test NTOMachineConfigRolloutTest
nodepool_upgrade_test.go:99: starting test NodePoolUpgradeTest
nodepool_upgrade_test.go:99: starting test NodePoolUpgradeTest
util.go:565: Successfully waited for 1 nodes to become ready for NodePool e2e-clusters-hbvxj/node-pool-x745j-test-inplaceupgrade in 10m27s
nodepool_test.go:395: Successfully waited for NodePool e2e-clusters-hbvxj/node-pool-x745j-test-inplaceupgrade to have correct status in 0s
nodepool_upgrade_test.go:177: Validating all Nodes have the synced labels and taints
nodepool_upgrade_test.go:180: Successfully waited for NodePool e2e-clusters-hbvxj/node-pool-x745j-test-inplaceupgrade to have version 4.22.0-0.ci-2026-01-09-005312 in 0s
nodepool_upgrade_test.go:197: Updating NodePool image. Image: registry.build01.ci.openshift.org/ci-op-3h29njck/release@sha256:d9262e3822e86ae5cd8dda21c94eaf92837e08c6551862951c81d82adab8fd96
nodepool_upgrade_test.go:204: Successfully waited for NodePool e2e-clusters-hbvxj/node-pool-x745j-test-inplaceupgrade to start the upgrade in 3s
nodepool_kv_cache_image_test.go:42: test only supported on platform KubeVirt
util.go:565: Successfully waited for 2 nodes to become ready for NodePool e2e-clusters-hbvxj/node-pool-x745j-test-rolling-upgrade in 10m30s
nodepool_test.go:395: Successfully waited for NodePool e2e-clusters-hbvxj/node-pool-x745j-test-rolling-upgrade to have correct status in 0s
nodepool_rolling_upgrade_test.go:106: Successfully waited for NodePool e2e-clusters-hbvxj/node-pool-x745j-test-rolling-upgrade to start the rolling upgrade in 3s
util.go:565: Successfully waited for 1 nodes to become ready for NodePool e2e-clusters-hbvxj/node-pool-x745j-test-day2-tags in 8m12s
nodepool_test.go:395: Successfully waited for NodePool e2e-clusters-hbvxj/node-pool-x745j-test-day2-tags to have correct status in 0s
nodepool_test.go:395: Successfully waited for NodePool e2e-clusters-hbvxj/node-pool-x745j-test-day2-tags to have correct status in 0s
nodepool_kv_qos_guaranteed_test.go:43: test only supported on platform KubeVirt
nodepool_kv_jsonpatch_test.go:42: test only supported on platform KubeVirt
nodepool_kv_nodeselector_test.go:48: test only supported on platform KubeVirt
nodepool_kv_multinet_test.go:36: test only supported on platform KubeVirt
nodepool_osp_advanced_test.go:53: Starting test OpenStackAdvancedTest
nodepool_osp_advanced_test.go:56: test only supported on platform OpenStack
nodepool_nto_performanceprofile_test.go:59: Starting test NTOPerformanceProfileTest
util.go:565: Successfully waited for 1 nodes to become ready for NodePool e2e-clusters-hbvxj/node-pool-x745j-test-ntoperformanceprofile in 16m0s
nodepool_test.go:395: Successfully waited for NodePool e2e-clusters-hbvxj/node-pool-x745j-test-ntoperformanceprofile to have correct status in 0s
nodepool_nto_performanceprofile_test.go:80: Entering NTO PerformanceProfile test
nodepool_nto_performanceprofile_test.go:110: Hosted control plane namespace is e2e-clusters-hbvxj-node-pool-x745j
nodepool_nto_performanceprofile_test.go:112: Successfully waited for performance profile ConfigMap to exist with correct name, labels, and annotations in 3s
nodepool_nto_performanceprofile_test.go:159: Successfully waited for performance profile status ConfigMap to exist in 0s
nodepool_nto_performanceprofile_test.go:201: Successfully waited for performance profile status to be reflected under the NodePool status in 0s
nodepool_nto_performanceprofile_test.go:254: Deleting configmap reference from nodepool ...
nodepool_nto_performanceprofile_test.go:261: Successfully waited for performance profile ConfigMap to be deleted in 3s
nodepool_nto_performanceprofile_test.go:280: Ending NTO PerformanceProfile test: OK
nodepool_test.go:395: Successfully waited for NodePool e2e-clusters-hbvxj/node-pool-x745j-test-ntoperformanceprofile to have correct status in 30s
nodepool_prev_release_test.go:33: Starting NodePoolPrevReleaseCreateTest.
util.go:565: Successfully waited for 1 nodes to become ready for NodePool e2e-clusters-hbvxj/node-pool-x745j-tlcz2 in 12m57s
nodepool_test.go:395: Successfully waited for NodePool e2e-clusters-hbvxj/node-pool-x745j-tlcz2 to have correct status in 0s
nodepool_prev_release_test.go:57: NodePoolPrevReleaseCreateTest tests the creation of a NodePool with previous OCP release.
nodepool_prev_release_test.go:59: Validating all Nodes have the synced labels and taints
nodepool_test.go:395: Successfully waited for NodePool e2e-clusters-hbvxj/node-pool-x745j-tlcz2 to have correct status in 0s
nodepool_prev_release_test.go:33: Starting NodePoolPrevReleaseCreateTest.
util.go:565: Successfully waited for 1 nodes to become ready for NodePool e2e-clusters-hbvxj/node-pool-x745j-xfvrj in 16m3s
nodepool_test.go:395: Successfully waited for NodePool e2e-clusters-hbvxj/node-pool-x745j-xfvrj to have correct status in 0s
nodepool_prev_release_test.go:57: NodePoolPrevReleaseCreateTest tests the creation of a NodePool with previous OCP release.
nodepool_prev_release_test.go:59: Validating all Nodes have the synced labels and taints
nodepool_test.go:395: Successfully waited for NodePool e2e-clusters-hbvxj/node-pool-x745j-xfvrj to have correct status in 0s
nodepool_prev_release_test.go:33: Starting NodePoolPrevReleaseCreateTest.
nodepool_prev_release_test.go:33: Starting NodePoolPrevReleaseCreateTest.
nodepool_test.go:348: NodePool version is outside supported skew, validating condition only (skipping node readiness check)
nodepool_test.go:395: Successfully waited for NodePool e2e-clusters-hbvxj/node-pool-x745j-vhdwd to have correct status in 3s
nodepool_mirrorconfigs_test.go:60: Starting test MirrorConfigsTest
util.go:565: Successfully waited for 1 nodes to become ready for NodePool e2e-clusters-hbvxj/node-pool-x745j-test-mirrorconfigs in 12m54s
nodepool_test.go:395: Successfully waited for NodePool e2e-clusters-hbvxj/node-pool-x745j-test-mirrorconfigs to have correct status in 0s
nodepool_mirrorconfigs_test.go:81: Entering MirrorConfigs test
nodepool_mirrorconfigs_test.go:111: Hosted control plane namespace is e2e-clusters-hbvxj-node-pool-x745j
nodepool_mirrorconfigs_test.go:113: Successfully waited for KubeletConfig to be mirrored and present in the hosted cluster in 3s
nodepool_mirrorconfigs_test.go:157: Deleting KubeletConfig configmap reference from nodepool ...
nodepool_mirrorconfigs_test.go:163: Successfully waited for KubeletConfig configmap to be deleted in 3s
nodepool_mirrorconfigs_test.go:101: Exiting MirrorConfigs test: OK
nodepool_test.go:395: Successfully waited for NodePool e2e-clusters-hbvxj/node-pool-x745j-test-mirrorconfigs to have correct status in 27s
nodepool_imagetype_test.go:50: Starting test NodePoolImageTypeTest
util.go:565: Successfully waited for 0 nodes to become ready for NodePool e2e-clusters-j8mz9/node-pool-7hngb-us-east-1c in 25ms
util.go:284: Successfully waited for kubeconfig to be published for HostedCluster e2e-clusters-j8mz9/node-pool-7hngb in 0s
util.go:301: Successfully waited for kubeconfig secret to have data in 0s
util.go:4095: Successfully validated configuration authentication status consistency across HCP, HC, and guest cluster
util.go:284: Successfully waited for kubeconfig to be published for HostedCluster e2e-clusters-j8mz9/node-pool-7hngb in 0s
util.go:301: Successfully waited for kubeconfig secret to have data in 0s
util.go:363: Successfully waited for a successful connection to the guest API server in 0s
nodepool_additionalTrustBundlePropagation_test.go:38: Starting AdditionalTrustBundlePropagationTest.
util.go:565: Successfully waited for 1 nodes to become ready for NodePool e2e-clusters-j8mz9/node-pool-7hngb-test-additional-trust-bundle-propagation in 5m33s
nodepool_test.go:395: Successfully waited for NodePool e2e-clusters-j8mz9/node-pool-7hngb-test-additional-trust-bundle-propagation to have correct status in 0s
nodepool_additionalTrustBundlePropagation_test.go:72: Updating hosted cluster with additional trust bundle. Bundle: additional-trust-bundle
nodepool_additionalTrustBundlePropagation_test.go:80: Successfully waited for NodePool e2e-clusters-j8mz9/node-pool-7hngb-test-additional-trust-bundle-propagation to begin updating in 10s
hypershift_framework.go:430: Successfully created hostedcluster e2e-clusters-hbvxj/node-pool-x745j in 20s
util.go:284: Successfully waited for kubeconfig to be published for HostedCluster e2e-clusters-hbvxj/node-pool-x745j in 0s
util.go:301: Successfully waited for kubeconfig secret to have data in 0s
util.go:363: Successfully waited for a successful connection to the guest API server in 0s
util.go:284: Successfully waited for kubeconfig to be published for HostedCluster e2e-clusters-hbvxj/node-pool-x745j in 1m36s
util.go:301: Successfully waited for kubeconfig secret to have data in 0s
eventually.go:104: Failed to get *v1.SelfSubjectReview: Post "https://api-node-pool-x745j.service.ci.hypershift.devcluster.openshift.com:443/apis/authentication.k8s.io/v1/selfsubjectreviews": dial tcp: lookup api-node-pool-x745j.service.ci.hypershift.devcluster.openshift.com on 172.30.0.10:53: no such host
eventually.go:104: Failed to get *v1.SelfSubjectReview: Post "https://api-node-pool-x745j.service.ci.hypershift.devcluster.openshift.com:443/apis/authentication.k8s.io/v1/selfsubjectreviews": dial tcp 54.157.109.219:443: i/o timeout
util.go:363: Successfully waited for a successful connection to the guest API server in 1m39.125s
util.go:565: Successfully waited for 0 nodes to become ready in 0s
util.go:2949: Successfully waited for HostedCluster e2e-clusters-hbvxj/node-pool-x745j to have valid conditions in 2m24s
nodepool_test.go:150: tests only supported on platform KubeVirt
hypershift_framework.go:430: Successfully created hostedcluster e2e-clusters-j8mz9/node-pool-7hngb in 33s
util.go:284: Successfully waited for kubeconfig to be published for HostedCluster e2e-clusters-j8mz9/node-pool-7hngb in 1m33s
util.go:301: Successfully waited for kubeconfig secret to have data in 0s
eventually.go:104: Failed to get *v1.SelfSubjectReview: Post "https://api-node-pool-7hngb.service.ci.hypershift.devcluster.openshift.com:443/apis/authentication.k8s.io/v1/selfsubjectreviews": dial tcp: lookup api-node-pool-7hngb.service.ci.hypershift.devcluster.openshift.com on 172.30.0.10:53: no such host
eventually.go:104: Failed to get *v1.SelfSubjectReview: Post "https://api-node-pool-7hngb.service.ci.hypershift.devcluster.openshift.com:443/apis/authentication.k8s.io/v1/selfsubjectreviews": dial tcp 98.90.51.7:443: i/o timeout
util.go:363: Successfully waited for a successful connection to the guest API server in 1m57.025s
control_plane_upgrade_test.go:25: Starting control plane upgrade test. fromImage: registry.build01.ci.openshift.org/ci-op-3h29njck/release@sha256:51100f0e7a6c69f210772cfeb63281be86f29af18a520a7b139846380ff5a4aa, toImage: registry.build01.ci.openshift.org/ci-op-3h29njck/release@sha256:d9262e3822e86ae5cd8dda21c94eaf92837e08c6551862951c81d82adab8fd96
hypershift_framework.go:430: Successfully created hostedcluster e2e-clusters-8dfwz/control-plane-upgrade-2wxhh in 48s
util.go:284: Successfully waited for kubeconfig to be published for HostedCluster e2e-clusters-8dfwz/control-plane-upgrade-2wxhh in 1m51s
util.go:301: Successfully waited for kubeconfig secret to have data in 0s
eventually.go:104: Failed to get *v1.SelfSubjectReview: Post "https://api-control-plane-upgrade-2wxhh.service.ci.hypershift.devcluster.openshift.com:443/apis/authentication.k8s.io/v1/selfsubjectreviews": dial tcp: lookup api-control-plane-upgrade-2wxhh.service.ci.hypershift.devcluster.openshift.com on 172.30.0.10:53: no such host
eventually.go:104: Failed to get *v1.SelfSubjectReview: Post "https://api-control-plane-upgrade-2wxhh.service.ci.hypershift.devcluster.openshift.com:443/apis/authentication.k8s.io/v1/selfsubjectreviews": dial tcp 54.81.101.117:443: i/o timeout
eventually.go:104: Failed to get *v1.SelfSubjectReview: Post "https://api-control-plane-upgrade-2wxhh.service.ci.hypershift.devcluster.openshift.com:443/apis/authentication.k8s.io/v1/selfsubjectreviews": dial tcp 18.207.25.199:443: i/o timeout
eventually.go:104: Failed to get *v1.SelfSubjectReview: Post "https://api-control-plane-upgrade-2wxhh.service.ci.hypershift.devcluster.openshift.com:443/apis/authentication.k8s.io/v1/selfsubjectreviews": dial tcp 54.81.101.117:443: i/o timeout
eventually.go:104: Failed to get *v1.SelfSubjectReview: Post "https://api-control-plane-upgrade-2wxhh.service.ci.hypershift.devcluster.openshift.com:443/apis/authentication.k8s.io/v1/selfsubjectreviews": dial tcp 18.207.25.199:443: connect: connection refused
eventually.go:104: Failed to get *v1.SelfSubjectReview: Post "https://api-control-plane-upgrade-2wxhh.service.ci.hypershift.devcluster.openshift.com:443/apis/authentication.k8s.io/v1/selfsubjectreviews": dial tcp 54.81.101.117:443: connect: connection refused
eventually.go:104: Failed to get *v1.SelfSubjectReview: Post "https://api-control-plane-upgrade-2wxhh.service.ci.hypershift.devcluster.openshift.com:443/apis/authentication.k8s.io/v1/selfsubjectreviews": dial tcp 18.207.25.199:443: connect: connection refused
eventually.go:104: Failed to get *v1.SelfSubjectReview: Post "https://api-control-plane-upgrade-2wxhh.service.ci.hypershift.devcluster.openshift.com:443/apis/authentication.k8s.io/v1/selfsubjectreviews": dial tcp 54.81.101.117:443: connect: connection refused
eventually.go:104: Failed to get *v1.SelfSubjectReview: Post "https://api-control-plane-upgrade-2wxhh.service.ci.hypershift.devcluster.openshift.com:443/apis/authentication.k8s.io/v1/selfsubjectreviews": dial tcp 18.207.25.199:443: connect: connection refused
eventually.go:104: Failed to get *v1.SelfSubjectReview: Post "https://api-control-plane-upgrade-2wxhh.service.ci.hypershift.devcluster.openshift.com:443/apis/authentication.k8s.io/v1/selfsubjectreviews": dial tcp 54.81.101.117:443: connect: connection refused
eventually.go:104: Failed to get *v1.SelfSubjectReview: Post "https://api-control-plane-upgrade-2wxhh.service.ci.hypershift.devcluster.openshift.com:443/apis/authentication.k8s.io/v1/selfsubjectreviews": dial tcp 100.50.134.213:443: connect: connection refused
eventually.go:104: Failed to get *v1.SelfSubjectReview: Post "https://api-control-plane-upgrade-2wxhh.service.ci.hypershift.devcluster.openshift.com:443/apis/authentication.k8s.io/v1/selfsubjectreviews": dial tcp 54.81.101.117:443: connect: connection refused
eventually.go:104: Failed to get *v1.SelfSubjectReview: Post "https://api-control-plane-upgrade-2wxhh.service.ci.hypershift.devcluster.openshift.com:443/apis/authentication.k8s.io/v1/selfsubjectreviews": dial tcp 100.50.134.213:443: connect: connection refused
eventually.go:104: Failed to get *v1.SelfSubjectReview: Post "https://api-control-plane-upgrade-2wxhh.service.ci.hypershift.devcluster.openshift.com:443/apis/authentication.k8s.io/v1/selfsubjectreviews": dial tcp 18.207.25.199:443: connect: connection refused
eventually.go:104: Failed to get *v1.SelfSubjectReview: Post "https://api-control-plane-upgrade-2wxhh.service.ci.hypershift.devcluster.openshift.com:443/apis/authentication.k8s.io/v1/selfsubjectreviews": dial tcp 54.81.101.117:443: connect: connection refused
eventually.go:104: Failed to get *v1.SelfSubjectReview: Post "https://api-control-plane-upgrade-2wxhh.service.ci.hypershift.devcluster.openshift.com:443/apis/authentication.k8s.io/v1/selfsubjectreviews": dial tcp 18.207.25.199:443: connect: connection refused
eventually.go:104: Failed to get *v1.SelfSubjectReview: Post "https://api-control-plane-upgrade-2wxhh.service.ci.hypershift.devcluster.openshift.com:443/apis/authentication.k8s.io/v1/selfsubjectreviews": dial tcp 100.50.134.213:443: connect: connection refused
eventually.go:104: Failed to get *v1.SelfSubjectReview: Post "https://api-control-plane-upgrade-2wxhh.service.ci.hypershift.devcluster.openshift.com:443/apis/authentication.k8s.io/v1/selfsubjectreviews": dial tcp 54.81.101.117:443: connect: connection refused
eventually.go:104: Failed to get *v1.SelfSubjectReview: Post "https://api-control-plane-upgrade-2wxhh.service.ci.hypershift.devcluster.openshift.com:443/apis/authentication.k8s.io/v1/selfsubjectreviews": dial tcp 100.50.134.213:443: connect: connection refused
eventually.go:104: Failed to get *v1.SelfSubjectReview: Post "https://api-control-plane-upgrade-2wxhh.service.ci.hypershift.devcluster.openshift.com:443/apis/authentication.k8s.io/v1/selfsubjectreviews": dial tcp 18.207.25.199:443: connect: connection refused
util.go:363: Successfully waited for a successful connection to the guest API server in 3m3.025s
util.go:565: Successfully waited for 2 nodes to become ready in 7m42s
util.go:598: Successfully waited for HostedCluster e2e-clusters-8dfwz/control-plane-upgrade-2wxhh to rollout in 4m3s
util.go:2949: Successfully waited for HostedCluster e2e-clusters-8dfwz/control-plane-upgrade-2wxhh to have valid conditions in 33s
util.go:565: Successfully waited for 2 nodes to become ready for NodePool e2e-clusters-8dfwz/control-plane-upgrade-2wxhh-us-east-1c in 25ms
util.go:284: Successfully waited for kubeconfig to be published for HostedCluster e2e-clusters-8dfwz/control-plane-upgrade-2wxhh in 0s
util.go:301: Successfully waited for kubeconfig secret to have data in 0s
util.go:284: Successfully waited for kubeconfig to be published for HostedCluster e2e-clusters-8dfwz/control-plane-upgrade-2wxhh in 0s
util.go:301: Successfully waited for kubeconfig secret to have data in 0s
util.go:4095: Successfully validated configuration authentication status consistency across HCP, HC, and guest cluster
util.go:284: Successfully waited for kubeconfig to be published for HostedCluster e2e-clusters-8dfwz/control-plane-upgrade-2wxhh in 0s
util.go:301: Successfully waited for kubeconfig secret to have data in 0s
util.go:363: Successfully waited for a successful connection to the guest API server in 0s
control_plane_upgrade_test.go:52: Updating cluster image. Image: registry.build01.ci.openshift.org/ci-op-3h29njck/release@sha256:d9262e3822e86ae5cd8dda21c94eaf92837e08c6551862951c81d82adab8fd96
util.go:301: Successfully waited for kubeconfig secret to have data in 0s
util.go:284: Successfully waited for kubeconfig to be published for HostedCluster e2e-clusters-8dfwz/control-plane-upgrade-2wxhh in 0s
util.go:301: Successfully waited for kubeconfig secret to have data in 0s
util.go:565: Successfully waited for 2 nodes to become ready for NodePool e2e-clusters-8dfwz/control-plane-upgrade-2wxhh-us-east-1c in 25ms
util.go:4095: Successfully validated configuration authentication status consistency across HCP, HC, and guest cluster