PR #7448 - 01-10 16:18

Job: hypershift
FAILURE

Test Summary

Total Tests: 311
Passed: 64
Failed: 228
Skipped: 19

Failed Tests

TestAutoscaling
0s
hypershift_framework.go:430: Successfully created hostedcluster e2e-clusters-g62pw/autoscaling-ztpwj in 39s
util.go:284: Successfully waited for kubeconfig to be published for HostedCluster e2e-clusters-g62pw/autoscaling-ztpwj in 1m48s
util.go:301: Successfully waited for kubeconfig secret to have data in 0s
eventually.go:104: Failed to get *v1.SelfSubjectReview: Post "https://api-autoscaling-ztpwj.service.ci.hypershift.devcluster.openshift.com:443/apis/authentication.k8s.io/v1/selfsubjectreviews": dial tcp: lookup api-autoscaling-ztpwj.service.ci.hypershift.devcluster.openshift.com on 172.30.0.10:53: no such host
eventually.go:104: Failed to get *v1.SelfSubjectReview: Post "https://api-autoscaling-ztpwj.service.ci.hypershift.devcluster.openshift.com:443/apis/authentication.k8s.io/v1/selfsubjectreviews": dial tcp 44.205.155.97:443: i/o timeout
eventually.go:104: Failed to get *v1.SelfSubjectReview: Post "https://api-autoscaling-ztpwj.service.ci.hypershift.devcluster.openshift.com:443/apis/authentication.k8s.io/v1/selfsubjectreviews": dial tcp 100.51.145.47:443: i/o timeout
eventually.go:104: Failed to get *v1.SelfSubjectReview: Post "https://api-autoscaling-ztpwj.service.ci.hypershift.devcluster.openshift.com:443/apis/authentication.k8s.io/v1/selfsubjectreviews": dial tcp 44.205.155.97:443: connect: connection refused
util.go:363: Successfully waited for a successful connection to the guest API server in 1m36.05s
util.go:565: Successfully waited for 1 nodes to become ready in 6m39s
util.go:598: Successfully waited for HostedCluster e2e-clusters-g62pw/autoscaling-ztpwj to rollout in 3m36s
util.go:2949: Successfully waited for HostedCluster e2e-clusters-g62pw/autoscaling-ztpwj to have valid conditions in 0s
util.go:565: Successfully waited for 1 nodes to become ready for NodePool e2e-clusters-g62pw/autoscaling-ztpwj-us-east-1a in 25ms
util.go:284: Successfully waited for kubeconfig to be published for HostedCluster e2e-clusters-g62pw/autoscaling-ztpwj in 0s
util.go:301: Successfully waited for kubeconfig secret to have data in 0s
util.go:284: Successfully waited for kubeconfig to be published for HostedCluster e2e-clusters-g62pw/autoscaling-ztpwj in 0s
util.go:301: Successfully waited for kubeconfig secret to have data in 0s
util.go:4095: Successfully validated configuration authentication status consistency across HCP, HC, and guest cluster
util.go:284: Successfully waited for kubeconfig to be published for HostedCluster e2e-clusters-g62pw/autoscaling-ztpwj in 0s
util.go:301: Successfully waited for kubeconfig secret to have data in 0s
util.go:363: Successfully waited for a successful connection to the guest API server in 0s
util.go:565: Successfully waited for 1 nodes to become ready in 0s
autoscaling_test.go:118: Enabled autoscaling. Namespace: e2e-clusters-g62pw, name: autoscaling-ztpwj-us-east-1a, min: 1, max: 3
autoscaling_test.go:137: Created workload. Node: ip-10-0-13-109.ec2.internal, memcapacity: 14918832Ki
util.go:565: Successfully waited for 3 nodes to become ready in 5m9s
autoscaling_test.go:157: Deleted workload
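
Note: the repeated eventually.go:104 entries above are the harness polling the freshly published guest API endpoint until DNS propagates and the load balancer accepts connections; each probe is a SelfSubjectReview request. A minimal sketch of that probe loop with client-go follows (the package name, helper name, kubeconfig path, and 10s interval are illustrative assumptions, not the harness's actual code):

```go
package e2esketch

import (
	"context"
	"fmt"
	"time"

	authenticationv1 "k8s.io/api/authentication/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitForGuestAPIServer keeps issuing SelfSubjectReview requests until the
// guest API server answers, mirroring the "Failed to get *v1.SelfSubjectReview"
// retry loop in the logs above.
func waitForGuestAPIServer(ctx context.Context, kubeconfigPath string) error {
	cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfigPath)
	if err != nil {
		return err
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		return err
	}
	for {
		_, err := client.AuthenticationV1().SelfSubjectReviews().Create(
			ctx, &authenticationv1.SelfSubjectReview{}, metav1.CreateOptions{})
		if err == nil {
			return nil // DNS resolves and the endpoint accepts our credentials
		}
		fmt.Printf("Failed to get *v1.SelfSubjectReview: %v\n", err)
		select {
		case <-ctx.Done():
			return ctx.Err()
		case <-time.After(10 * time.Second): // assumed poll interval
		}
	}
}
```
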
TestAutoscaling/Main
0s
TestAutoscaling/Main/TestAutoscaling
0s
util.go:284: Successfully waited for kubeconfig to be published for HostedCluster e2e-clusters-g62pw/autoscaling-ztpwj in 0s
util.go:301: Successfully waited for kubeconfig secret to have data in 0s
util.go:363: Successfully waited for a successful connection to the guest API server in 0s
util.go:565: Successfully waited for 1 nodes to become ready in 0s
autoscaling_test.go:118: Enabled autoscaling. Namespace: e2e-clusters-g62pw, name: autoscaling-ztpwj-us-east-1a, min: 1, max: 3
autoscaling_test.go:137: Created workload. Node: ip-10-0-13-109.ec2.internal, memcapacity: 14918832Ki
util.go:565: Successfully waited for 3 nodes to become ready in 5m9s
autoscaling_test.go:157: Deleted workload
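
Note: autoscaling_test.go:118 switches the NodePool from a fixed replica count to cluster-autoscaler management (min 1, max 3) before the workload forces a scale-out to 3 nodes. A sketch of that toggle against the hypershift.openshift.io/v1beta1 NodePool API (the client wiring and helper name are illustrative):

```go
package e2esketch

import (
	"context"

	hyperv1 "github.com/openshift/hypershift/api/hypershift/v1beta1"
	crclient "sigs.k8s.io/controller-runtime/pkg/client"
)

// enableAutoScaling flips a NodePool from a fixed replica count to
// cluster-autoscaler management, as logged by autoscaling_test.go:118
// ("Enabled autoscaling ... min: 1, max: 3").
func enableAutoScaling(ctx context.Context, c crclient.Client, namespace, name string) error {
	nodePool := &hyperv1.NodePool{}
	if err := c.Get(ctx, crclient.ObjectKey{Namespace: namespace, Name: name}, nodePool); err != nil {
		return err
	}
	original := nodePool.DeepCopy()
	nodePool.Spec.Replicas = nil // fixed replicas and autoscaling are mutually exclusive
	nodePool.Spec.AutoScaling = &hyperv1.NodePoolAutoScaling{Min: 1, Max: 3}
	return c.Patch(ctx, nodePool, crclient.MergeFrom(original))
}
```
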
TestAutoscaling/ValidateHostedCluster
0s
util.go:284: Successfully waited for kubeconfig to be published for HostedCluster e2e-clusters-g62pw/autoscaling-ztpwj in 1m48s
util.go:301: Successfully waited for kubeconfig secret to have data in 0s
eventually.go:104: Failed to get *v1.SelfSubjectReview: Post "https://api-autoscaling-ztpwj.service.ci.hypershift.devcluster.openshift.com:443/apis/authentication.k8s.io/v1/selfsubjectreviews": dial tcp: lookup api-autoscaling-ztpwj.service.ci.hypershift.devcluster.openshift.com on 172.30.0.10:53: no such host
eventually.go:104: Failed to get *v1.SelfSubjectReview: Post "https://api-autoscaling-ztpwj.service.ci.hypershift.devcluster.openshift.com:443/apis/authentication.k8s.io/v1/selfsubjectreviews": dial tcp 44.205.155.97:443: i/o timeout
eventually.go:104: Failed to get *v1.SelfSubjectReview: Post "https://api-autoscaling-ztpwj.service.ci.hypershift.devcluster.openshift.com:443/apis/authentication.k8s.io/v1/selfsubjectreviews": dial tcp 100.51.145.47:443: i/o timeout
eventually.go:104: Failed to get *v1.SelfSubjectReview: Post "https://api-autoscaling-ztpwj.service.ci.hypershift.devcluster.openshift.com:443/apis/authentication.k8s.io/v1/selfsubjectreviews": dial tcp 44.205.155.97:443: connect: connection refused
util.go:363: Successfully waited for a successful connection to the guest API server in 1m36.05s
util.go:565: Successfully waited for 1 nodes to become ready in 6m39s
util.go:598: Successfully waited for HostedCluster e2e-clusters-g62pw/autoscaling-ztpwj to rollout in 3m36s
util.go:2949: Successfully waited for HostedCluster e2e-clusters-g62pw/autoscaling-ztpwj to have valid conditions in 0s
TestAutoscaling/ValidateHostedCluster/EnsureGuestWebhooksValidated
0s
TestAutoscaling/ValidateHostedCluster/EnsureNoCrashingPods
0s
util.go:284: Successfully waited for kubeconfig to be published for HostedCluster e2e-clusters-g62pw/autoscaling-ztpwj in 0s
util.go:301: Successfully waited for kubeconfig secret to have data in 0s
TestAutoscaling/ValidateHostedCluster/EnsureNodeCommunication
0s
util.go:284: Successfully waited for kubeconfig to be published for HostedCluster e2e-clusters-g62pw/autoscaling-ztpwj in 0s
util.go:301: Successfully waited for kubeconfig secret to have data in 0s
TestAutoscaling/ValidateHostedCluster/EnsureNodeCountMatchesNodePoolReplicas
0s
util.go:565: Successfully waited for 1 nodes to become ready for NodePool e2e-clusters-g62pw/autoscaling-ztpwj-us-east-1a in 25ms
TestAutoscaling/ValidateHostedCluster/EnsureOAPIMountsTrustBundle
0s
TestAutoscaling/ValidateHostedCluster/ValidateConfigurationStatus
0s
util.go:4095: Successfully validated configuration authentication status consistency across HCP, HC, and guest cluster
TestCreateCluster
0s
create_cluster_test.go:2492: Sufficient zones available for InfrastructureAvailabilityPolicy HighlyAvailable
hypershift_framework.go:430: Successfully created hostedcluster e2e-clusters-pbp8c/create-cluster-sxfb5 in 44s
util.go:284: Successfully waited for kubeconfig to be published for HostedCluster e2e-clusters-pbp8c/create-cluster-sxfb5 in 1m42s
util.go:301: Successfully waited for kubeconfig secret to have data in 0s
eventually.go:104: Failed to get *v1.SelfSubjectReview: Post "https://api-create-cluster-sxfb5.service.ci.hypershift.devcluster.openshift.com:443/apis/authentication.k8s.io/v1/selfsubjectreviews": dial tcp: lookup api-create-cluster-sxfb5.service.ci.hypershift.devcluster.openshift.com on 172.30.0.10:53: no such host
eventually.go:104: Failed to get *v1.SelfSubjectReview: Post "https://api-create-cluster-sxfb5.service.ci.hypershift.devcluster.openshift.com:443/apis/authentication.k8s.io/v1/selfsubjectreviews": dial tcp 54.205.246.134:443: i/o timeout
eventually.go:104: Failed to get *v1.SelfSubjectReview: Post "https://api-create-cluster-sxfb5.service.ci.hypershift.devcluster.openshift.com:443/apis/authentication.k8s.io/v1/selfsubjectreviews": dial tcp 52.20.77.192:443: i/o timeout
eventually.go:104: Failed to get *v1.SelfSubjectReview: Post "https://api-create-cluster-sxfb5.service.ci.hypershift.devcluster.openshift.com:443/apis/authentication.k8s.io/v1/selfsubjectreviews": dial tcp 54.205.246.134:443: connect: connection refused
eventually.go:104: Failed to get *v1.SelfSubjectReview: Post "https://api-create-cluster-sxfb5.service.ci.hypershift.devcluster.openshift.com:443/apis/authentication.k8s.io/v1/selfsubjectreviews": dial tcp 52.20.77.192:443: connect: connection refused
eventually.go:104: Failed to get *v1.SelfSubjectReview: Post "https://api-create-cluster-sxfb5.service.ci.hypershift.devcluster.openshift.com:443/apis/authentication.k8s.io/v1/selfsubjectreviews": dial tcp 54.205.246.134:443: connect: connection refused
eventually.go:104: Failed to get *v1.SelfSubjectReview: Post "https://api-create-cluster-sxfb5.service.ci.hypershift.devcluster.openshift.com:443/apis/authentication.k8s.io/v1/selfsubjectreviews": dial tcp 52.22.101.233:443: connect: connection refused
eventually.go:104: Failed to get *v1.SelfSubjectReview: Post "https://api-create-cluster-sxfb5.service.ci.hypershift.devcluster.openshift.com:443/apis/authentication.k8s.io/v1/selfsubjectreviews": dial tcp 54.205.246.134:443: connect: connection refused
eventually.go:104: Failed to get *v1.SelfSubjectReview: Post "https://api-create-cluster-sxfb5.service.ci.hypershift.devcluster.openshift.com:443/apis/authentication.k8s.io/v1/selfsubjectreviews": dial tcp 52.20.77.192:443: connect: connection refused
eventually.go:104: Failed to get *v1.SelfSubjectReview: Post "https://api-create-cluster-sxfb5.service.ci.hypershift.devcluster.openshift.com:443/apis/authentication.k8s.io/v1/selfsubjectreviews": dial tcp 54.205.246.134:443: connect: connection refused
util.go:363: Successfully waited for a successful connection to the guest API server in 2m6.025s
util.go:565: Successfully waited for 3 nodes to become ready in 7m36s
util.go:598: Successfully waited for HostedCluster e2e-clusters-pbp8c/create-cluster-sxfb5 to rollout in 3m42s
util.go:2949: Successfully waited for HostedCluster e2e-clusters-pbp8c/create-cluster-sxfb5 to have valid conditions in 0s
util.go:565: Successfully waited for 1 nodes to become ready for NodePool e2e-clusters-pbp8c/create-cluster-sxfb5-us-east-1a in 25ms
util.go:565: Successfully waited for 1 nodes to become ready for NodePool e2e-clusters-pbp8c/create-cluster-sxfb5-us-east-1b in 0s
util.go:565: Successfully waited for 1 nodes to become ready for NodePool e2e-clusters-pbp8c/create-cluster-sxfb5-us-east-1c in 0s
util.go:284: Successfully waited for kubeconfig to be published for HostedCluster e2e-clusters-pbp8c/create-cluster-sxfb5 in 0s
util.go:301: Successfully waited for kubeconfig secret to have data in 0s
util.go:284: Successfully waited for kubeconfig to be published for HostedCluster e2e-clusters-pbp8c/create-cluster-sxfb5 in 0s
util.go:301: Successfully waited for kubeconfig secret to have data in 0s
util.go:4095: Successfully validated configuration authentication status consistency across HCP, HC, and guest cluster
util.go:284: Successfully waited for kubeconfig to be published for HostedCluster e2e-clusters-pbp8c/create-cluster-sxfb5 in 0s
util.go:301: Successfully waited for kubeconfig secret to have data in 0s
util.go:363: Successfully waited for a successful connection to the guest API server in 0s
create_cluster_test.go:2532: fetching mgmt kubeconfig
util.go:284: Successfully waited for kubeconfig to be published for HostedCluster e2e-clusters-pbp8c/create-cluster-sxfb5 in 0s
util.go:301: Successfully waited for kubeconfig secret to have data in 0s
util.go:284: Successfully waited for kubeconfig to be published for HostedCluster e2e-clusters-pbp8c/create-cluster-sxfb5 in 0s
util.go:301: Successfully waited for kubeconfig secret to have data in 0s
util.go:363: Successfully waited for a successful connection to the guest API server in 0s
util.go:1928: NodePool replicas: 1, Available nodes: 3
control_plane_pki_operator.go:95: generating new break-glass credentials for more than one signer
pki.go:76: loading certificate/key pair from disk for signer customer-break-glass, use $REGENERATE_PKI to generate new ones
control_plane_pki_operator.go:204: creating CSR "2fbcfhrjy2g20rxj56a6dveuixomz66dnhfs4lal5y81" for signer "customer-break-glass", requesting client auth usages
control_plane_pki_operator.go:214: creating CSRA e2e-clusters-pbp8c-create-cluster-sxfb5/2fbcfhrjy2g20rxj56a6dveuixomz66dnhfs4lal5y81 to trigger automatic approval of the CSR
control_plane_pki_operator.go:221: Successfully waited for CSR "2fbcfhrjy2g20rxj56a6dveuixomz66dnhfs4lal5y81" to be approved and signed in 3s
control_plane_pki_operator.go:133: validating that the client certificate provides the appropriate access
control_plane_pki_operator.go:119: amending the existing kubeconfig to use break-glass client certificate credentials
control_plane_pki_operator.go:136: issuing SSR to identify the subject we are given using the client certificate
control_plane_pki_operator.go:156: ensuring that the SSR identifies the client certificate as having system:masters power and correct username
pki.go:76: loading certificate/key pair from disk for signer sre-break-glass, use $REGENERATE_PKI to generate new ones
control_plane_pki_operator.go:204: creating CSR "28ikgw0tim6738yxvzu7xd7obgwqlt68giqylh4yq6f" for signer "sre-break-glass", requesting client auth usages
control_plane_pki_operator.go:214: creating CSRA e2e-clusters-pbp8c-create-cluster-sxfb5/28ikgw0tim6738yxvzu7xd7obgwqlt68giqylh4yq6f to trigger automatic approval of the CSR
control_plane_pki_operator.go:221: Successfully waited for CSR "28ikgw0tim6738yxvzu7xd7obgwqlt68giqylh4yq6f" to be approved and signed in 3s
control_plane_pki_operator.go:133: validating that the client certificate provides the appropriate access
control_plane_pki_operator.go:119: amending the existing kubeconfig to use break-glass client certificate credentials
control_plane_pki_operator.go:136: issuing SSR to identify the subject we are given using the client certificate
control_plane_pki_operator.go:156: ensuring that the SSR identifies the client certificate as having system:masters power and correct username
control_plane_pki_operator.go:99: revoking the "customer-break-glass" signer
pki.go:76: loading certificate/key pair from disk for signer customer-break-glass, use $REGENERATE_PKI to generate new ones
control_plane_pki_operator.go:256: creating CRR e2e-clusters-pbp8c-create-cluster-sxfb5/2fbcfhrjy2g20rxj56a6dveuixomz66dnhfs4lal5y81 to trigger signer certificate revocation
control_plane_pki_operator.go:263: Successfully waited for CRR e2e-clusters-pbp8c-create-cluster-sxfb5/2fbcfhrjy2g20rxj56a6dveuixomz66dnhfs4lal5y81 to complete in 2m9s
control_plane_pki_operator.go:276: creating a client using a certificate from the revoked signer
control_plane_pki_operator.go:119: amending the existing kubeconfig to use break-glass client certificate credentials
control_plane_pki_operator.go:279: issuing SSR to confirm that we're not authorized to contact the server
control_plane_pki_operator.go:102: ensuring the break-glass credentials from "sre-break-glass" signer still work
pki.go:76: loading certificate/key pair from disk for signer sre-break-glass, use $REGENERATE_PKI to generate new ones
control_plane_pki_operator.go:133: validating that the client certificate provides the appropriate access
control_plane_pki_operator.go:119: amending the existing kubeconfig to use break-glass client certificate credentials
control_plane_pki_operator.go:136: issuing SSR to identify the subject we are given using the client certificate
control_plane_pki_operator.go:156: ensuring that the SSR identifies the client certificate as having system:masters power and correct username
control_plane_pki_operator.go:66: Grabbing customer break-glass credentials from client certificate secret e2e-clusters-pbp8c-create-cluster-sxfb5/customer-system-admin-client-cert-key
control_plane_pki_operator.go:133: validating that the client certificate provides the appropriate access
control_plane_pki_operator.go:119: amending the existing kubeconfig to use break-glass client certificate credentials
control_plane_pki_operator.go:136: issuing SSR to identify the subject we are given using the client certificate
control_plane_pki_operator.go:156: ensuring that the SSR identifies the client certificate as having system:masters power and correct username
pki.go:76: loading certificate/key pair from disk for signer customer-break-glass, use $REGENERATE_PKI to generate new ones
control_plane_pki_operator.go:204: creating CSR "2pylgj4p42h2kcbmcg1s47dyqjjy8i1skom6o5pyicr1" for signer "customer-break-glass", requesting client auth usages
control_plane_pki_operator.go:214: creating CSRA e2e-clusters-pbp8c-create-cluster-sxfb5/2pylgj4p42h2kcbmcg1s47dyqjjy8i1skom6o5pyicr1 to trigger automatic approval of the CSR
control_plane_pki_operator.go:221: Successfully waited for CSR "2pylgj4p42h2kcbmcg1s47dyqjjy8i1skom6o5pyicr1" to be approved and signed in 3s
control_plane_pki_operator.go:133: validating that the client certificate provides the appropriate access
control_plane_pki_operator.go:119: amending the existing kubeconfig to use break-glass client certificate credentials
control_plane_pki_operator.go:136: issuing SSR to identify the subject we are given using the client certificate
control_plane_pki_operator.go:156: ensuring that the SSR identifies the client certificate as having system:masters power and correct username
pki.go:76: loading certificate/key pair from disk for signer customer-break-glass, use $REGENERATE_PKI to generate new ones
control_plane_pki_operator.go:168: creating invalid CSR "aw6nmivlm7164xg0ie925e48jyvn5svvc1o2uvjushx" for signer "hypershift.openshift.io/e2e-clusters-pbp8c-create-cluster-sxfb5.customer-break-glass", requesting client auth usages
control_plane_pki_operator.go:178: creating CSRA e2e-clusters-pbp8c-create-cluster-sxfb5/aw6nmivlm7164xg0ie925e48jyvn5svvc1o2uvjushx to trigger automatic approval of the CSR
control_plane_pki_operator.go:184: Successfully waited for CSR "aw6nmivlm7164xg0ie925e48jyvn5svvc1o2uvjushx" to have invalid CN exposed in status in 3s
pki.go:76: loading certificate/key pair from disk for signer customer-break-glass, use $REGENERATE_PKI to generate new ones
control_plane_pki_operator.go:256: creating CRR e2e-clusters-pbp8c-create-cluster-sxfb5/32v9tne5h02zqgsyjfii19vrayrksg2evets0wp5csp0 to trigger signer certificate revocation
control_plane_pki_operator.go:263: Successfully waited for CRR e2e-clusters-pbp8c-create-cluster-sxfb5/32v9tne5h02zqgsyjfii19vrayrksg2evets0wp5csp0 to complete in 2m6s
control_plane_pki_operator.go:276: creating a client using a certificate from the revoked signer
control_plane_pki_operator.go:119: amending the existing kubeconfig to use break-glass client certificate credentials
control_plane_pki_operator.go:279: issuing SSR to confirm that we're not authorized to contact the server
control_plane_pki_operator.go:66: Grabbing customer break-glass credentials from client certificate secret e2e-clusters-pbp8c-create-cluster-sxfb5/sre-system-admin-client-cert-key
control_plane_pki_operator.go:133: validating that the client certificate provides the appropriate access
control_plane_pki_operator.go:119: amending the existing kubeconfig to use break-glass client certificate credentials
control_plane_pki_operator.go:136: issuing SSR to identify the subject we are given using the client certificate
control_plane_pki_operator.go:156: ensuring that the SSR identifies the client certificate as having system:masters power and correct username
pki.go:76: loading certificate/key pair from disk for signer sre-break-glass, use $REGENERATE_PKI to generate new ones
control_plane_pki_operator.go:204: creating CSR "25luhadjzhedzzhdi26yjahomhvl1g7li2icfpxoyobn" for signer "sre-break-glass", requesting client auth usages
control_plane_pki_operator.go:214: creating CSRA e2e-clusters-pbp8c-create-cluster-sxfb5/25luhadjzhedzzhdi26yjahomhvl1g7li2icfpxoyobn to trigger automatic approval of the CSR
control_plane_pki_operator.go:221: Successfully waited for CSR "25luhadjzhedzzhdi26yjahomhvl1g7li2icfpxoyobn" to be approved and signed in 3s
control_plane_pki_operator.go:133: validating that the client certificate provides the appropriate access
control_plane_pki_operator.go:119: amending the existing kubeconfig to use break-glass client certificate credentials
control_plane_pki_operator.go:136: issuing SSR to identify the subject we are given using the client certificate
control_plane_pki_operator.go:156: ensuring that the SSR identifies the client certificate as having system:masters power and correct username
pki.go:76: loading certificate/key pair from disk for signer sre-break-glass, use $REGENERATE_PKI to generate new ones
control_plane_pki_operator.go:168: creating invalid CSR "1lfxbd5t8rnmu6y9ablb4iriz9vbbkz4ohawhubibyf5" for signer "hypershift.openshift.io/e2e-clusters-pbp8c-create-cluster-sxfb5.sre-break-glass", requesting client auth usages
control_plane_pki_operator.go:178: creating CSRA e2e-clusters-pbp8c-create-cluster-sxfb5/1lfxbd5t8rnmu6y9ablb4iriz9vbbkz4ohawhubibyf5 to trigger automatic approval of the CSR
control_plane_pki_operator.go:184: Successfully waited for CSR "1lfxbd5t8rnmu6y9ablb4iriz9vbbkz4ohawhubibyf5" to have invalid CN exposed in status in 3s
pki.go:76: loading certificate/key pair from disk for signer sre-break-glass, use $REGENERATE_PKI to generate new ones
control_plane_pki_operator.go:256: creating CRR e2e-clusters-pbp8c-create-cluster-sxfb5/1hofib8t8jf0bbmel64l0ck5e7xxwlhmgbxd0v64g82i to trigger signer certificate revocation
control_plane_pki_operator.go:263: Successfully waited for CRR e2e-clusters-pbp8c-create-cluster-sxfb5/1hofib8t8jf0bbmel64l0ck5e7xxwlhmgbxd0v64g82i to complete in 2m6s
control_plane_pki_operator.go:276: creating a client using a certificate from the revoked signer
control_plane_pki_operator.go:119: amending the existing kubeconfig to use break-glass client certificate credentials
control_plane_pki_operator.go:279: issuing SSR to confirm that we're not authorized to contact the server
util.go:169: failed to patch object create-cluster-sxfb5, will retry: HostedCluster.hypershift.openshift.io "create-cluster-sxfb5" is invalid: [spec: Invalid value: "object": Services is immutable. Changes might result in unpredictable and disruptive behavior., spec.services[0].servicePublishingStrategy: Invalid value: "object": nodePort is required when type is NodePort, and forbidden otherwise, spec.services[0].servicePublishingStrategy: Invalid value: "object": only route is allowed when type is Route, and forbidden otherwise]
util.go:169: failed to patch object create-cluster-sxfb5, will retry: HostedCluster.hypershift.openshift.io "create-cluster-sxfb5" is invalid: spec.controllerAvailabilityPolicy: Invalid value: "string": ControllerAvailabilityPolicy is immutable
util.go:169: failed to patch object create-cluster-sxfb5, will retry: HostedCluster.hypershift.openshift.io "create-cluster-sxfb5" is invalid: spec.capabilities: Invalid value: "object": Capabilities is immutable. Changes might result in unpredictable and disruptive behavior.
util.go:2193: Generating custom certificate with DNS name api-custom-cert-create-cluster-sxfb5.service.ci.hypershift.devcluster.openshift.com
util.go:2198: Creating custom certificate secret
util.go:2214: Updating hosted cluster with KubeAPIDNSName and KAS custom serving cert
util.go:2250: Getting custom kubeconfig client
util.go:247: Successfully waited for KAS custom kubeconfig to be published for HostedCluster e2e-clusters-pbp8c/create-cluster-sxfb5 in 3s
util.go:264: Successfully waited for KAS custom kubeconfig secret to have data in 0s
util.go:2255: waiting for the KubeAPIDNSName to be reconciled
util.go:247: Successfully waited for KAS custom kubeconfig to be published for HostedCluster e2e-clusters-pbp8c/create-cluster-sxfb5 in 0s
util.go:264: Successfully waited for KAS custom kubeconfig secret to have data in 0s
util.go:2267: Finding the external name destination for the KAS Service
util.go:2292: service custom DNS name not found, using the control plane endpoint
util.go:2292: service custom DNS name not found, using the control plane endpoint
util.go:2292: service custom DNS name not found, using the control plane endpoint
util.go:2304: Creating a new KAS Service to be used by the external-dns deployment in CI with the custom DNS name api-custom-cert-create-cluster-sxfb5.service.ci.hypershift.devcluster.openshift.com
util.go:2326: [2026-01-10T17:10:22Z] Waiting until the URL is resolvable: api-custom-cert-create-cluster-sxfb5.service.ci.hypershift.devcluster.openshift.com
util.go:2326: [2026-01-10T17:10:32Z] Waiting until the URL is resolvable: api-custom-cert-create-cluster-sxfb5.service.ci.hypershift.devcluster.openshift.com
util.go:2326: [2026-01-10T17:10:42Z] Waiting until the URL is resolvable: api-custom-cert-create-cluster-sxfb5.service.ci.hypershift.devcluster.openshift.com
util.go:2326: [2026-01-10T17:10:52Z] Waiting until the URL is resolvable: api-custom-cert-create-cluster-sxfb5.service.ci.hypershift.devcluster.openshift.com
util.go:2326: [2026-01-10T17:11:02Z] Waiting until the URL is resolvable: api-custom-cert-create-cluster-sxfb5.service.ci.hypershift.devcluster.openshift.com
util.go:2331: resolved the custom DNS name after 40.06131509s
util.go:2336: Waiting until the KAS Deployment is ready
util.go:2431: Successfully waited for the KAS custom kubeconfig secret to be deleted from HC Namespace in 5s
util.go:2447: Successfully waited for the KAS custom kubeconfig secret to be deleted from HCP Namespace in 0s
util.go:2477: Successfully waited for the KAS custom kubeconfig status to be removed in 0s
util.go:2510: Deleting custom certificate secret
util.go:2351: Checking CustomAdminKubeconfigStatus are present
util.go:2359: Checking CustomAdminKubeconfigs are present
util.go:2372: Checking CustomAdminKubeconfig reaches the KAS
util.go:2390: Successfully verified custom kubeconfig can reach KAS
util.go:2396: Checking CustomAdminKubeconfig Infrastructure status is updated
util.go:2397: Successfully waited for a successful connection to the custom DNS guest API server in 0s
util.go:2465: Checking CustomAdminKubeconfig are removed
util.go:2494: Checking CustomAdminKubeconfigStatus are removed
util.go:2097: Waiting for ovnkube-node DaemonSet to be ready with 3 nodes
util.go:2139: DaemonSet ovnkube-node ready: 3/3 pods
util.go:2147: ✓ ovnkube-node DaemonSet is ready
util.go:2097: Waiting for global-pull-secret-syncer DaemonSet to be ready with 3 nodes
util.go:2135: DaemonSet global-pull-secret-syncer not ready: 2/3 pods ready
TestCreateCluster/Main
0s
util.go:284: Successfully waited for kubeconfig to be published for HostedCluster e2e-clusters-pbp8c/create-cluster-sxfb5 in 0s
util.go:301: Successfully waited for kubeconfig secret to have data in 0s
util.go:363: Successfully waited for a successful connection to the guest API server in 0s
create_cluster_test.go:2532: fetching mgmt kubeconfig
util.go:284: Successfully waited for kubeconfig to be published for HostedCluster e2e-clusters-pbp8c/create-cluster-sxfb5 in 0s
util.go:301: Successfully waited for kubeconfig secret to have data in 0s
util.go:284: Successfully waited for kubeconfig to be published for HostedCluster e2e-clusters-pbp8c/create-cluster-sxfb5 in 0s
util.go:301: Successfully waited for kubeconfig secret to have data in 0s
util.go:363: Successfully waited for a successful connection to the guest API server in 0s
util.go:1928: NodePool replicas: 1, Available nodes: 3
TestCreateCluster/Main/Check_if_GlobalPullSecret_secret_is_in_the_right_place_at_Dataplane
0s
TestCreateCluster/Main/Check_if_the_DaemonSet_is_present_in_the_DataPlane
0s
TestCreateCluster/Main/EnsureAppLabel
0s
TestCreateCluster/Main/EnsureCAPIFinalizers
0s
TestCreateCluster/Main/EnsureCustomLabels
0s
TestCreateCluster/Main/EnsureCustomTolerations
0s
TestCreateCluster/Main/EnsureDefaultSecurityGroupTags
0s
TestCreateCluster/Main/EnsureFeatureGateStatus
0s
TestCreateCluster/Main/EnsureHostedClusterCapabilitiesImmutability
0s
util.go:169: failed to patch object create-cluster-sxfb5, will retry: HostedCluster.hypershift.openshift.io "create-cluster-sxfb5" is invalid: spec.capabilities: Invalid value: "object": Capabilities is immutable. Changes might result in unpredictable and disruptive behavior.
TestCreateCluster/Main/EnsureHostedClusterImmutability
0s
util.go:169: failed to patch object create-cluster-sxfb5, will retry: HostedCluster.hypershift.openshift.io "create-cluster-sxfb5" is invalid: [spec: Invalid value: "object": Services is immutable. Changes might result in unpredictable and disruptive behavior., spec.services[0].servicePublishingStrategy: Invalid value: "object": nodePort is required when type is NodePort, and forbidden otherwise, spec.services[0].servicePublishingStrategy: Invalid value: "object": only route is allowed when type is Route, and forbidden otherwise]
util.go:169: failed to patch object create-cluster-sxfb5, will retry: HostedCluster.hypershift.openshift.io "create-cluster-sxfb5" is invalid: spec.controllerAvailabilityPolicy: Invalid value: "string": ControllerAvailabilityPolicy is immutable
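
Note the inversion here: the "failed to patch" lines are the test passing. The immutability tests patch fields the HostedCluster CRD declares immutable and expect the API server to reject the change. A sketch of that expected-rejection assertion (the helper name and its error matching are illustrative):

```go
package e2esketch

import (
	"context"
	"fmt"
	"strings"

	hyperv1 "github.com/openshift/hypershift/api/hypershift/v1beta1"
	crclient "sigs.k8s.io/controller-runtime/pkg/client"
)

// expectImmutableRejection applies a mutation that the HostedCluster CRD's
// validation rules forbid and treats the server's "... is immutable"
// rejection as the passing outcome.
func expectImmutableRejection(ctx context.Context, c crclient.Client,
	hc *hyperv1.HostedCluster, mutate func(*hyperv1.HostedCluster)) error {
	original := hc.DeepCopy()
	mutate(hc)
	err := c.Patch(ctx, hc, crclient.MergeFrom(original))
	if err == nil {
		return fmt.Errorf("patch to immutable field unexpectedly succeeded")
	}
	if !strings.Contains(err.Error(), "immutable") {
		return fmt.Errorf("unexpected rejection: %w", err)
	}
	return nil // the rejection is exactly what the test wants to see
}
```
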
TestCreateCluster/Main/EnsureKubeAPIDNSNameCustomCert
0s
util.go:2193: Generating custom certificate with DNS name api-custom-cert-create-cluster-sxfb5.service.ci.hypershift.devcluster.openshift.com
util.go:2198: Creating custom certificate secret
util.go:2214: Updating hosted cluster with KubeAPIDNSName and KAS custom serving cert
util.go:2250: Getting custom kubeconfig client
util.go:247: Successfully waited for KAS custom kubeconfig to be published for HostedCluster e2e-clusters-pbp8c/create-cluster-sxfb5 in 3s
util.go:264: Successfully waited for KAS custom kubeconfig secret to have data in 0s
util.go:2255: waiting for the KubeAPIDNSName to be reconciled
util.go:247: Successfully waited for KAS custom kubeconfig to be published for HostedCluster e2e-clusters-pbp8c/create-cluster-sxfb5 in 0s
util.go:264: Successfully waited for KAS custom kubeconfig secret to have data in 0s
util.go:2267: Finding the external name destination for the KAS Service
util.go:2292: service custom DNS name not found, using the control plane endpoint
util.go:2292: service custom DNS name not found, using the control plane endpoint
util.go:2292: service custom DNS name not found, using the control plane endpoint
util.go:2304: Creating a new KAS Service to be used by the external-dns deployment in CI with the custom DNS name api-custom-cert-create-cluster-sxfb5.service.ci.hypershift.devcluster.openshift.com
util.go:2326: [2026-01-10T17:10:22Z] Waiting until the URL is resolvable: api-custom-cert-create-cluster-sxfb5.service.ci.hypershift.devcluster.openshift.com
util.go:2326: [2026-01-10T17:10:32Z] Waiting until the URL is resolvable: api-custom-cert-create-cluster-sxfb5.service.ci.hypershift.devcluster.openshift.com
util.go:2326: [2026-01-10T17:10:42Z] Waiting until the URL is resolvable: api-custom-cert-create-cluster-sxfb5.service.ci.hypershift.devcluster.openshift.com
util.go:2326: [2026-01-10T17:10:52Z] Waiting until the URL is resolvable: api-custom-cert-create-cluster-sxfb5.service.ci.hypershift.devcluster.openshift.com
util.go:2326: [2026-01-10T17:11:02Z] Waiting until the URL is resolvable: api-custom-cert-create-cluster-sxfb5.service.ci.hypershift.devcluster.openshift.com
util.go:2331: resolved the custom DNS name after 40.06131509s
util.go:2336: Waiting until the KAS Deployment is ready
util.go:2431: Successfully waited for the KAS custom kubeconfig secret to be deleted from HC Namespace in 5s
util.go:2447: Successfully waited for the KAS custom kubeconfig secret to be deleted from HCP Namespace in 0s
util.go:2477: Successfully waited for the KAS custom kubeconfig status to be removed in 0s
util.go:2510: Deleting custom certificate secret
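
The util.go:2326 lines poll every ~10s until external-dns publishes the custom KAS record; here resolution took about 40s. A stdlib-only sketch of such a wait loop (the interval and function name are assumptions):

```go
package e2esketch

import (
	"context"
	"fmt"
	"net"
	"time"
)

// waitForResolvableHost retries a DNS lookup until the name resolves,
// like the "[...] Waiting until the URL is resolvable" lines above.
func waitForResolvableHost(ctx context.Context, host string) (time.Duration, error) {
	start := time.Now()
	for {
		if _, err := net.DefaultResolver.LookupHost(ctx, host); err == nil {
			return time.Since(start), nil
		}
		fmt.Printf("[%s] Waiting until the URL is resolvable: %s\n",
			time.Now().UTC().Format(time.RFC3339), host)
		select {
		case <-ctx.Done():
			return 0, ctx.Err()
		case <-time.After(10 * time.Second): // matches the ~10s cadence of the log timestamps
		}
	}
}
```
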
TestCreateCluster/Main/EnsureKubeAPIDNSNameCustomCert/EnsureCustomAdminKubeconfigExists
0s
util.go:2359: Checking CustomAdminKubeconfigs are present
TestCreateCluster/Main/EnsureKubeAPIDNSNameCustomCert/EnsureCustomAdminKubeconfigInfraStatusIsUpdated
0s
util.go:2396: Checking CustomAdminKubeconfig Infrastructure status is updated
util.go:2397: Successfully waited for a successful connection to the custom DNS guest API server in 0s
TestCreateCluster/Main/EnsureKubeAPIDNSNameCustomCert/EnsureCustomAdminKubeconfigIsRemoved
0s
util.go:2465: Checking CustomAdminKubeconfig are removed
TestCreateCluster/Main/EnsureKubeAPIDNSNameCustomCert/EnsureCustomAdminKubeconfigReachesTheKAS
0s
util.go:2372: Checking CustomAdminKubeconfig reaches the KAS
util.go:2390: Successfully verified custom kubeconfig can reach KAS
TestCreateCluster/Main/EnsureKubeAPIDNSNameCustomCert/EnsureCustomAdminKubeconfigStatusExists
0s
util.go:2351: Checking CustomAdminKubeconfigStatus are present
TestCreateCluster/Main/EnsureKubeAPIDNSNameCustomCert/EnsureCustomAdminKubeconfigStatusIsRemoved
0s
util.go:2494: Checking CustomAdminKubeconfigStatus are removed
TestCreateCluster/Main/Wait_for_critical_DaemonSets_to_be_ready_-_first_check
0s
util.go:2097: Waiting for ovnkube-node DaemonSet to be ready with 3 nodes
util.go:2139: DaemonSet ovnkube-node ready: 3/3 pods
util.go:2147: ✓ ovnkube-node DaemonSet is ready
util.go:2097: Waiting for global-pull-secret-syncer DaemonSet to be ready with 3 nodes
util.go:2135: DaemonSet global-pull-secret-syncer not ready: 2/3 pods ready
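
The readiness check compares a DaemonSet's ready pod count against the expected node count, which is why global-pull-secret-syncer at 2/3 leaves this check unsatisfied. A client-go sketch of that predicate (the function name and wiring are illustrative):

```go
package e2esketch

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// daemonSetReady reports whether a DaemonSet has one ready pod per expected
// node, echoing "DaemonSet ovnkube-node ready: 3/3 pods" above.
func daemonSetReady(ctx context.Context, client kubernetes.Interface,
	namespace, name string, wantNodes int32) (bool, error) {
	ds, err := client.AppsV1().DaemonSets(namespace).Get(ctx, name, metav1.GetOptions{})
	if err != nil {
		return false, err
	}
	if ds.Status.DesiredNumberScheduled != wantNodes || ds.Status.NumberReady < wantNodes {
		fmt.Printf("DaemonSet %s not ready: %d/%d pods ready\n", name, ds.Status.NumberReady, wantNodes)
		return false, nil
	}
	fmt.Printf("DaemonSet %s ready: %d/%d pods\n", name, ds.Status.NumberReady, wantNodes)
	return true, nil
}
```
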
TestCreateCluster/Main/break-glass-credentials
0s
TestCreateCluster/Main/break-glass-credentials/customer-break-glass
0s
TestCreateCluster/Main/break-glass-credentials/customer-break-glass/CSR_flow
0s
pki.go:76: loading certificate/key pair from disk for signer customer-break-glass, use $REGENERATE_PKI to generate new ones
control_plane_pki_operator.go:204: creating CSR "2pylgj4p42h2kcbmcg1s47dyqjjy8i1skom6o5pyicr1" for signer "customer-break-glass", requesting client auth usages
control_plane_pki_operator.go:214: creating CSRA e2e-clusters-pbp8c-create-cluster-sxfb5/2pylgj4p42h2kcbmcg1s47dyqjjy8i1skom6o5pyicr1 to trigger automatic approval of the CSR
control_plane_pki_operator.go:221: Successfully waited for CSR "2pylgj4p42h2kcbmcg1s47dyqjjy8i1skom6o5pyicr1" to be approved and signed in 3s
control_plane_pki_operator.go:133: validating that the client certificate provides the appropriate access
control_plane_pki_operator.go:119: amending the existing kubeconfig to use break-glass client certificate credentials
control_plane_pki_operator.go:136: issuing SSR to identify the subject we are given using the client certificate
control_plane_pki_operator.go:156: ensuring that the SSR identifies the client certificate as having system:masters power and correct username
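
The CSR flow above submits a Kubernetes CertificateSigningRequest against the per-cluster break-glass signer, then creates a CSRA (a HyperShift-specific CertificateSigningRequestApproval) to trigger auto-approval. The CSRA type is not shown here; the sketch below covers only the standard CSR half with client-go (the key type, subject, and helper name are illustrative assumptions):

```go
package e2esketch

import (
	"context"
	"crypto/ecdsa"
	"crypto/elliptic"
	"crypto/rand"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"

	certificatesv1 "k8s.io/api/certificates/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// createBreakGlassCSR submits a CSR for a per-cluster break-glass signer,
// requesting client-auth usage as control_plane_pki_operator.go:204 logs.
// Approval is driven separately by the HyperShift CSRA resource.
func createBreakGlassCSR(ctx context.Context, client kubernetes.Interface,
	name, signerName, username string) (*certificatesv1.CertificateSigningRequest, error) {
	key, err := ecdsa.GenerateKey(elliptic.P256(), rand.Reader) // assumed key type
	if err != nil {
		return nil, err
	}
	der, err := x509.CreateCertificateRequest(rand.Reader, &x509.CertificateRequest{
		Subject: pkix.Name{CommonName: username, Organization: []string{"system:masters"}},
	}, key)
	if err != nil {
		return nil, err
	}
	csr := &certificatesv1.CertificateSigningRequest{
		ObjectMeta: metav1.ObjectMeta{Name: name},
		Spec: certificatesv1.CertificateSigningRequestSpec{
			// e.g. "hypershift.openshift.io/<hcp-namespace>.customer-break-glass"
			SignerName: signerName,
			Request:    pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE REQUEST", Bytes: der}),
			Usages:     []certificatesv1.KeyUsage{certificatesv1.UsageClientAuth},
		},
	}
	return client.CertificatesV1().CertificateSigningRequests().Create(ctx, csr, metav1.CreateOptions{})
}
```
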
TestCreateCluster/Main/break-glass-credentials/customer-break-glass/CSR_flow/invalid_CN_flagged_in_status
0s
pki.go:76: loading certificate/key pair from disk for signer customer-break-glass, use $REGENERATE_PKI to generate new ones
control_plane_pki_operator.go:168: creating invalid CSR "aw6nmivlm7164xg0ie925e48jyvn5svvc1o2uvjushx" for signer "hypershift.openshift.io/e2e-clusters-pbp8c-create-cluster-sxfb5.customer-break-glass", requesting client auth usages
control_plane_pki_operator.go:178: creating CSRA e2e-clusters-pbp8c-create-cluster-sxfb5/aw6nmivlm7164xg0ie925e48jyvn5svvc1o2uvjushx to trigger automatic approval of the CSR
control_plane_pki_operator.go:184: Successfully waited for CSR "aw6nmivlm7164xg0ie925e48jyvn5svvc1o2uvjushx" to have invalid CN exposed in status in 3s
TestCreateCluster/Main/break-glass-credentials/customer-break-glass/CSR_flow/revocation
0s
pki.go:76: loading certificate/key pair from disk for signer customer-break-glass, use $REGENERATE_PKI to generate new ones
control_plane_pki_operator.go:256: creating CRR e2e-clusters-pbp8c-create-cluster-sxfb5/32v9tne5h02zqgsyjfii19vrayrksg2evets0wp5csp0 to trigger signer certificate revocation
control_plane_pki_operator.go:263: Successfully waited for CRR e2e-clusters-pbp8c-create-cluster-sxfb5/32v9tne5h02zqgsyjfii19vrayrksg2evets0wp5csp0 to complete in 2m6s
control_plane_pki_operator.go:276: creating a client using a certificate from the revoked signer
control_plane_pki_operator.go:119: amending the existing kubeconfig to use break-glass client certificate credentials
control_plane_pki_operator.go:279: issuing SSR to confirm that we're not authorized to contact the server
TestCreateCluster/Main/break-glass-credentials/customer-break-glass/direct_fetch
0s
control_plane_pki_operator.go:66: Grabbing customer break-glass credentials from client certificate secret e2e-clusters-pbp8c-create-cluster-sxfb5/customer-system-admin-client-cert-key
control_plane_pki_operator.go:133: validating that the client certificate provides the appropriate access
control_plane_pki_operator.go:119: amending the existing kubeconfig to use break-glass client certificate credentials
control_plane_pki_operator.go:136: issuing SSR to identify the subject we are given using the client certificate
control_plane_pki_operator.go:156: ensuring that the SSR identifies the client certificate as having system:masters power and correct username
TestCreateCluster/Main/break-glass-credentials/independent_signers
0s
control_plane_pki_operator.go:95: generating new break-glass credentials for more than one signer
pki.go:76: loading certificate/key pair from disk for signer customer-break-glass, use $REGENERATE_PKI to generate new ones
control_plane_pki_operator.go:204: creating CSR "2fbcfhrjy2g20rxj56a6dveuixomz66dnhfs4lal5y81" for signer "customer-break-glass", requesting client auth usages
control_plane_pki_operator.go:214: creating CSRA e2e-clusters-pbp8c-create-cluster-sxfb5/2fbcfhrjy2g20rxj56a6dveuixomz66dnhfs4lal5y81 to trigger automatic approval of the CSR
control_plane_pki_operator.go:221: Successfully waited for CSR "2fbcfhrjy2g20rxj56a6dveuixomz66dnhfs4lal5y81" to be approved and signed in 3s
control_plane_pki_operator.go:133: validating that the client certificate provides the appropriate access
control_plane_pki_operator.go:119: amending the existing kubeconfig to use break-glass client certificate credentials
control_plane_pki_operator.go:136: issuing SSR to identify the subject we are given using the client certificate
control_plane_pki_operator.go:156: ensuring that the SSR identifies the client certificate as having system:masters power and correct username
pki.go:76: loading certificate/key pair from disk for signer sre-break-glass, use $REGENERATE_PKI to generate new ones
control_plane_pki_operator.go:204: creating CSR "28ikgw0tim6738yxvzu7xd7obgwqlt68giqylh4yq6f" for signer "sre-break-glass", requesting client auth usages
control_plane_pki_operator.go:214: creating CSRA e2e-clusters-pbp8c-create-cluster-sxfb5/28ikgw0tim6738yxvzu7xd7obgwqlt68giqylh4yq6f to trigger automatic approval of the CSR
control_plane_pki_operator.go:221: Successfully waited for CSR "28ikgw0tim6738yxvzu7xd7obgwqlt68giqylh4yq6f" to be approved and signed in 3s
control_plane_pki_operator.go:133: validating that the client certificate provides the appropriate access
control_plane_pki_operator.go:119: amending the existing kubeconfig to use break-glass client certificate credentials
control_plane_pki_operator.go:136: issuing SSR to identify the subject we are given using the client certificate
control_plane_pki_operator.go:156: ensuring that the SSR identifies the client certificate as having system:masters power and correct username
control_plane_pki_operator.go:99: revoking the "customer-break-glass" signer
pki.go:76: loading certificate/key pair from disk for signer customer-break-glass, use $REGENERATE_PKI to generate new ones
control_plane_pki_operator.go:256: creating CRR e2e-clusters-pbp8c-create-cluster-sxfb5/2fbcfhrjy2g20rxj56a6dveuixomz66dnhfs4lal5y81 to trigger signer certificate revocation
control_plane_pki_operator.go:263: Successfully waited for CRR e2e-clusters-pbp8c-create-cluster-sxfb5/2fbcfhrjy2g20rxj56a6dveuixomz66dnhfs4lal5y81 to complete in 2m9s
control_plane_pki_operator.go:276: creating a client using a certificate from the revoked signer
control_plane_pki_operator.go:119: amending the existing kubeconfig to use break-glass client certificate credentials
control_plane_pki_operator.go:279: issuing SSR to confirm that we're not authorized to contact the server
control_plane_pki_operator.go:102: ensuring the break-glass credentials from "sre-break-glass" signer still work
pki.go:76: loading certificate/key pair from disk for signer sre-break-glass, use $REGENERATE_PKI to generate new ones
control_plane_pki_operator.go:133: validating that the client certificate provides the appropriate access
control_plane_pki_operator.go:119: amending the existing kubeconfig to use break-glass client certificate credentials
control_plane_pki_operator.go:136: issuing SSR to identify the subject we are given using the client certificate
control_plane_pki_operator.go:156: ensuring that the SSR identifies the client certificate as having system:masters power and correct username
TestCreateCluster/Main/break-glass-credentials/sre-break-glass
0s
TestCreateCluster/Main/break-glass-credentials/sre-break-glass/CSR_flow
0s
pki.go:76: loading certificate/key pair from disk for signer sre-break-glass, use $REGENERATE_PKI to generate new ones
control_plane_pki_operator.go:204: creating CSR "25luhadjzhedzzhdi26yjahomhvl1g7li2icfpxoyobn" for signer "sre-break-glass", requesting client auth usages
control_plane_pki_operator.go:214: creating CSRA e2e-clusters-pbp8c-create-cluster-sxfb5/25luhadjzhedzzhdi26yjahomhvl1g7li2icfpxoyobn to trigger automatic approval of the CSR
control_plane_pki_operator.go:221: Successfully waited for CSR "25luhadjzhedzzhdi26yjahomhvl1g7li2icfpxoyobn" to be approved and signed in 3s
control_plane_pki_operator.go:133: validating that the client certificate provides the appropriate access
control_plane_pki_operator.go:119: amending the existing kubeconfig to use break-glass client certificate credentials
control_plane_pki_operator.go:136: issuing SSR to identify the subject we are given using the client certificate
control_plane_pki_operator.go:156: ensuring that the SSR identifies the client certificate as having system:masters power and correct username
TestCreateCluster/Main/break-glass-credentials/sre-break-glass/CSR_flow/invalid_CN_flagged_in_status
0s
pki.go:76: loading certificate/key pair from disk for signer sre-break-glass, use $REGENERATE_PKI to generate new ones
control_plane_pki_operator.go:168: creating invalid CSR "1lfxbd5t8rnmu6y9ablb4iriz9vbbkz4ohawhubibyf5" for signer "hypershift.openshift.io/e2e-clusters-pbp8c-create-cluster-sxfb5.sre-break-glass", requesting client auth usages
control_plane_pki_operator.go:178: creating CSRA e2e-clusters-pbp8c-create-cluster-sxfb5/1lfxbd5t8rnmu6y9ablb4iriz9vbbkz4ohawhubibyf5 to trigger automatic approval of the CSR
control_plane_pki_operator.go:184: Successfully waited for CSR "1lfxbd5t8rnmu6y9ablb4iriz9vbbkz4ohawhubibyf5" to have invalid CN exposed in status in 3s
TestCreateCluster/Main/break-glass-credentials/sre-break-glass/CSR_flow/revocation
0s
pki.go:76: loading certificate/key pair from disk for signer sre-break-glass, use $REGENERATE_PKI to generate new ones
control_plane_pki_operator.go:256: creating CRR e2e-clusters-pbp8c-create-cluster-sxfb5/1hofib8t8jf0bbmel64l0ck5e7xxwlhmgbxd0v64g82i to trigger signer certificate revocation
control_plane_pki_operator.go:263: Successfully waited for CRR e2e-clusters-pbp8c-create-cluster-sxfb5/1hofib8t8jf0bbmel64l0ck5e7xxwlhmgbxd0v64g82i to complete in 2m6s
control_plane_pki_operator.go:276: creating a client using a certificate from the revoked signer
control_plane_pki_operator.go:119: amending the existing kubeconfig to use break-glass client certificate credentials
control_plane_pki_operator.go:279: issuing SSR to confirm that we're not authorized to contact the server
TestCreateCluster/Main/break-glass-credentials/sre-break-glass/direct_fetch
0s
control_plane_pki_operator.go:66: Grabbing customer break-glass credentials from client certificate secret e2e-clusters-pbp8c-create-cluster-sxfb5/sre-system-admin-client-cert-key
control_plane_pki_operator.go:133: validating that the client certificate provides the appropriate access
control_plane_pki_operator.go:119: amending the existing kubeconfig to use break-glass client certificate credentials
control_plane_pki_operator.go:136: issuing SSR to identify the subject we are given using the client certificate
control_plane_pki_operator.go:156: ensuring that the SSR identifies the client certificate as having system:masters power and correct username
TestCreateCluster/ValidateHostedCluster
0s
util.go:284: Successfully waited for kubeconfig to be published for HostedCluster e2e-clusters-pbp8c/create-cluster-sxfb5 in 1m42s
util.go:301: Successfully waited for kubeconfig secret to have data in 0s
eventually.go:104: Failed to get *v1.SelfSubjectReview: Post "https://api-create-cluster-sxfb5.service.ci.hypershift.devcluster.openshift.com:443/apis/authentication.k8s.io/v1/selfsubjectreviews": dial tcp: lookup api-create-cluster-sxfb5.service.ci.hypershift.devcluster.openshift.com on 172.30.0.10:53: no such host
eventually.go:104: Failed to get *v1.SelfSubjectReview: Post "https://api-create-cluster-sxfb5.service.ci.hypershift.devcluster.openshift.com:443/apis/authentication.k8s.io/v1/selfsubjectreviews": dial tcp 54.205.246.134:443: i/o timeout
eventually.go:104: Failed to get *v1.SelfSubjectReview: Post "https://api-create-cluster-sxfb5.service.ci.hypershift.devcluster.openshift.com:443/apis/authentication.k8s.io/v1/selfsubjectreviews": dial tcp 52.20.77.192:443: i/o timeout
eventually.go:104: Failed to get *v1.SelfSubjectReview: Post "https://api-create-cluster-sxfb5.service.ci.hypershift.devcluster.openshift.com:443/apis/authentication.k8s.io/v1/selfsubjectreviews": dial tcp 54.205.246.134:443: connect: connection refused
eventually.go:104: Failed to get *v1.SelfSubjectReview: Post "https://api-create-cluster-sxfb5.service.ci.hypershift.devcluster.openshift.com:443/apis/authentication.k8s.io/v1/selfsubjectreviews": dial tcp 52.20.77.192:443: connect: connection refused
eventually.go:104: Failed to get *v1.SelfSubjectReview: Post "https://api-create-cluster-sxfb5.service.ci.hypershift.devcluster.openshift.com:443/apis/authentication.k8s.io/v1/selfsubjectreviews": dial tcp 54.205.246.134:443: connect: connection refused
eventually.go:104: Failed to get *v1.SelfSubjectReview: Post "https://api-create-cluster-sxfb5.service.ci.hypershift.devcluster.openshift.com:443/apis/authentication.k8s.io/v1/selfsubjectreviews": dial tcp 52.22.101.233:443: connect: connection refused
eventually.go:104: Failed to get *v1.SelfSubjectReview: Post "https://api-create-cluster-sxfb5.service.ci.hypershift.devcluster.openshift.com:443/apis/authentication.k8s.io/v1/selfsubjectreviews": dial tcp 54.205.246.134:443: connect: connection refused
eventually.go:104: Failed to get *v1.SelfSubjectReview: Post "https://api-create-cluster-sxfb5.service.ci.hypershift.devcluster.openshift.com:443/apis/authentication.k8s.io/v1/selfsubjectreviews": dial tcp 52.20.77.192:443: connect: connection refused
eventually.go:104: Failed to get *v1.SelfSubjectReview: Post "https://api-create-cluster-sxfb5.service.ci.hypershift.devcluster.openshift.com:443/apis/authentication.k8s.io/v1/selfsubjectreviews": dial tcp 54.205.246.134:443: connect: connection refused
util.go:363: Successfully waited for a successful connection to the guest API server in 2m6.025s
util.go:565: Successfully waited for 3 nodes to become ready in 7m36s
util.go:598: Successfully waited for HostedCluster e2e-clusters-pbp8c/create-cluster-sxfb5 to rollout in 3m42s
util.go:2949: Successfully waited for HostedCluster e2e-clusters-pbp8c/create-cluster-sxfb5 to have valid conditions in 0s
TestCreateCluster/ValidateHostedCluster/EnsureGuestWebhooksValidated
0s
TestCreateCluster/ValidateHostedCluster/EnsureNoCrashingPods
0s
util.go:284: Successfully waited for kubeconfig to be published for HostedCluster e2e-clusters-pbp8c/create-cluster-sxfb5 in 0s
util.go:301: Successfully waited for kubeconfig secret to have data in 0s
TestCreateCluster/ValidateHostedCluster/EnsureNodeCommunication
0s
util.go:284: Successfully waited for kubeconfig to be published for HostedCluster e2e-clusters-pbp8c/create-cluster-sxfb5 in 0s
util.go:301: Successfully waited for kubeconfig secret to have data in 0s
TestCreateCluster/ValidateHostedCluster/EnsureNodeCountMatchesNodePoolReplicas
0s
util.go:565: Successfully waited for 1 nodes to become ready for NodePool e2e-clusters-pbp8c/create-cluster-sxfb5-us-east-1a in 25ms
util.go:565: Successfully waited for 1 nodes to become ready for NodePool e2e-clusters-pbp8c/create-cluster-sxfb5-us-east-1b in 0s
util.go:565: Successfully waited for 1 nodes to become ready for NodePool e2e-clusters-pbp8c/create-cluster-sxfb5-us-east-1c in 0s
TestCreateCluster/ValidateHostedCluster/EnsureOAPIMountsTrustBundle
0s
TestCreateCluster/ValidateHostedCluster/ValidateConfigurationStatus
0s
util.go:4095: Successfully validated configuration authentication status consistency across HCP, HC, and guest cluster
TestCreateClusterCustomConfig
0s
hypershift_framework.go:430: Successfully created hostedcluster e2e-clusters-vckr6/custom-config-lrhnq in 25s
journals.go:245: Successfully copied machine journals to /logs/artifacts/TestCreateClusterCustomConfig/machine-journals
util.go:284: Successfully waited for kubeconfig to be published for HostedCluster e2e-clusters-vckr6/custom-config-lrhnq in 1m33s
util.go:301: Successfully waited for kubeconfig secret to have data in 0s
eventually.go:104: Failed to get *v1.SelfSubjectReview: Post "https://api-custom-config-lrhnq.service.ci.hypershift.devcluster.openshift.com:443/apis/authentication.k8s.io/v1/selfsubjectreviews": dial tcp: lookup api-custom-config-lrhnq.service.ci.hypershift.devcluster.openshift.com on 172.30.0.10:53: no such host
eventually.go:104: Failed to get *v1.SelfSubjectReview: Post "https://api-custom-config-lrhnq.service.ci.hypershift.devcluster.openshift.com:443/apis/authentication.k8s.io/v1/selfsubjectreviews": dial tcp 98.87.224.188:443: i/o timeout
eventually.go:104: Failed to get *v1.SelfSubjectReview: Post "https://api-custom-config-lrhnq.service.ci.hypershift.devcluster.openshift.com:443/apis/authentication.k8s.io/v1/selfsubjectreviews": dial tcp 100.50.71.25:443: i/o timeout
eventually.go:104: Failed to get *v1.SelfSubjectReview: Post "https://api-custom-config-lrhnq.service.ci.hypershift.devcluster.openshift.com:443/apis/authentication.k8s.io/v1/selfsubjectreviews": dial tcp 34.196.235.16:443: connect: connection refused
eventually.go:104: Failed to get *v1.SelfSubjectReview: Post "https://api-custom-config-lrhnq.service.ci.hypershift.devcluster.openshift.com:443/apis/authentication.k8s.io/v1/selfsubjectreviews": dial tcp 100.50.71.25:443: connect: connection refused
eventually.go:104: Failed to get *v1.SelfSubjectReview: Post "https://api-custom-config-lrhnq.service.ci.hypershift.devcluster.openshift.com:443/apis/authentication.k8s.io/v1/selfsubjectreviews": dial tcp 98.87.224.188:443: connect: connection refused
eventually.go:104: Failed to get *v1.SelfSubjectReview: Post "https://api-custom-config-lrhnq.service.ci.hypershift.devcluster.openshift.com:443/apis/authentication.k8s.io/v1/selfsubjectreviews": dial tcp 34.196.235.16:443: connect: connection refused
eventually.go:104: Failed to get *v1.SelfSubjectReview: Post "https://api-custom-config-lrhnq.service.ci.hypershift.devcluster.openshift.com:443/apis/authentication.k8s.io/v1/selfsubjectreviews": dial tcp 98.87.224.188:443: connect: connection refused
eventually.go:104: Failed to get *v1.SelfSubjectReview: Post "https://api-custom-config-lrhnq.service.ci.hypershift.devcluster.openshift.com:443/apis/authentication.k8s.io/v1/selfsubjectreviews": dial tcp 34.196.235.16:443: connect: connection refused
eventually.go:104: Failed to get *v1.SelfSubjectReview: Post "https://api-custom-config-lrhnq.service.ci.hypershift.devcluster.openshift.com:443/apis/authentication.k8s.io/v1/selfsubjectreviews": dial tcp 98.87.224.188:443: connect: connection refused
eventually.go:104: Failed to get *v1.SelfSubjectReview: Post "https://api-custom-config-lrhnq.service.ci.hypershift.devcluster.openshift.com:443/apis/authentication.k8s.io/v1/selfsubjectreviews": dial tcp 34.196.235.16:443: connect: connection refused
eventually.go:104: Failed to get *v1.SelfSubjectReview: Post "https://api-custom-config-lrhnq.service.ci.hypershift.devcluster.openshift.com:443/apis/authentication.k8s.io/v1/selfsubjectreviews": dial tcp 98.87.224.188:443: connect: connection refused
eventually.go:104: Failed to get *v1.SelfSubjectReview: Post "https://api-custom-config-lrhnq.service.ci.hypershift.devcluster.openshift.com:443/apis/authentication.k8s.io/v1/selfsubjectreviews": dial tcp 34.196.235.16:443: connect: connection refused
eventually.go:104: Failed to get *v1.SelfSubjectReview: Post "https://api-custom-config-lrhnq.service.ci.hypershift.devcluster.openshift.com:443/apis/authentication.k8s.io/v1/selfsubjectreviews": dial tcp 100.50.71.25:443: connect: connection refused
util.go:363: Successfully waited for a successful connection to the guest API server in 2m42.05s
util.go:565: Successfully waited for 2 nodes to become ready in 7m21s
util.go:598: Successfully waited for HostedCluster e2e-clusters-vckr6/custom-config-lrhnq to rollout in 7m24s
util.go:2949: Successfully waited for HostedCluster e2e-clusters-vckr6/custom-config-lrhnq to have valid conditions in 0s
util.go:565: Successfully waited for 2 nodes to become ready for NodePool e2e-clusters-vckr6/custom-config-lrhnq-us-east-1b in 25ms
util.go:284: Successfully waited for kubeconfig to be published for HostedCluster e2e-clusters-vckr6/custom-config-lrhnq in 0s
util.go:301: Successfully waited for kubeconfig secret to have data in 0s
util.go:284: Successfully waited for kubeconfig to be published for HostedCluster e2e-clusters-vckr6/custom-config-lrhnq in 0s
util.go:301: Successfully waited for kubeconfig secret to have data in 0s
util.go:4095: Successfully validated configuration authentication status consistency across HCP, HC, and guest cluster
util.go:284: Successfully waited for kubeconfig to be published for HostedCluster e2e-clusters-vckr6/custom-config-lrhnq in 0s
util.go:301: Successfully waited for kubeconfig secret to have data in 0s
util.go:363: Successfully waited for a successful connection to the guest API server in 0s
util.go:284: Successfully waited for kubeconfig to be published for HostedCluster e2e-clusters-vckr6/custom-config-lrhnq in 0s
util.go:301: Successfully waited for kubeconfig secret to have data in 0s
util.go:284: Successfully waited for kubeconfig to be published for HostedCluster e2e-clusters-vckr6/custom-config-lrhnq in 0s
util.go:301: Successfully waited for kubeconfig secret to have data in 0s
oauth.go:170: Found OAuth route oauth-custom-config-lrhnq.service.ci.hypershift.devcluster.openshift.com
oauth.go:192: Observed OAuth route oauth-custom-config-lrhnq.service.ci.hypershift.devcluster.openshift.com to be healthy
oauth.go:151: OAuth token retrieved successfully for user kubeadmin
util.go:284: Successfully waited for kubeconfig to be published for HostedCluster e2e-clusters-vckr6/custom-config-lrhnq in 0s
util.go:301: Successfully waited for kubeconfig secret to have data in 0s
oauth.go:170: Found OAuth route oauth-custom-config-lrhnq.service.ci.hypershift.devcluster.openshift.com
oauth.go:192: Observed OAuth route oauth-custom-config-lrhnq.service.ci.hypershift.devcluster.openshift.com to be healthy
oauth.go:151: OAuth token retrieved successfully for user testuser
util.go:3459: Successfully waited for Waiting for service account default/default to be provisioned... in 0s
eventually.go:104: Failed to get *v1.ServiceAccount: serviceaccounts "default" not found
util.go:3482: Successfully waited for Waiting for service account default/test-namespace to be provisioned... in 10s
util.go:284: Successfully waited for kubeconfig to be published for HostedCluster e2e-clusters-vckr6/custom-config-lrhnq in 0s
util.go:301: Successfully waited for kubeconfig secret to have data in 0s
util.go:363: Successfully waited for a successful connection to the guest API server in 0s
util.go:3597: Checking that Tuned resource type does not exist in guest cluster
util.go:3610: Checking that Profile resource type does not exist in guest cluster
util.go:3622: Checking that no tuned DaemonSet exists in guest cluster
util.go:3631: Checking that no tuned-related ConfigMaps exist in guest cluster
util.go:3656: NodeTuning capability disabled validation completed successfully
util.go:3937: Updating HostedCluster e2e-clusters-vckr6/custom-config-lrhnq with custom OVN internal subnets
util.go:3956: Validating CNO conditions on HostedControlPlane
util.go:3958: Successfully waited for HostedControlPlane e2e-clusters-vckr6-custom-config-lrhnq/custom-config-lrhnq to have healthy CNO conditions in 0s
util.go:2949: Successfully waited for HostedCluster e2e-clusters-vckr6/custom-config-lrhnq to have valid conditions in 0s
util.go:3985: Successfully waited for Network.operator.openshift.io/cluster in guest cluster to reflect the custom subnet changes in 3s
util.go:4015: Successfully waited for Network.config.openshift.io/cluster in guest cluster to be available in 0s
util.go:284: Successfully waited for kubeconfig to be published for HostedCluster e2e-clusters-vckr6/custom-config-lrhnq in 0s
util.go:301: Successfully waited for kubeconfig secret to have data in 0s
util.go:363: Successfully waited for a successful connection to the guest API server in 0s
util.go:3224: Successfully waited for HostedCluster e2e-clusters-vckr6/custom-config-lrhnq to have valid Status.Payload in 0s
util.go:1181: Connecting to kubernetes endpoint on: https://172.20.0.1:6443
util.go:2527: Checking that all ValidatingAdmissionPolicies are present
util.go:2553: Checking Denied KAS Requests for ValidatingAdmissionPolicies
util.go:2569: Checking ClusterOperator status modifications are allowed
TestCreateClusterCustomConfig/EnsureHostedCluster
0s
util.go:284: Successfully waited for kubeconfig to be published for HostedCluster e2e-clusters-vckr6/custom-config-lrhnq in 0s util.go:301: Successfully waited for kubeconfig secret to have data in 0s util.go:363: Successfully waited for a successful connection to the guest API server in 0s
TestCreateClusterCustomConfig/EnsureHostedCluster/EnsureAllContainersHavePullPolicyIfNotPresent
0s
TestCreateClusterCustomConfig/EnsureHostedCluster/EnsureAllContainersHaveTerminationMessagePolicyFallbackToLogsOnError
0s
TestCreateClusterCustomConfig/EnsureHostedCluster/EnsureAllRoutesUseHCPRouter
0s
TestCreateClusterCustomConfig/EnsureHostedCluster/EnsureHCPContainersHaveResourceRequests
0s
TestCreateClusterCustomConfig/EnsureHostedCluster/EnsureHCPPodsAffinitiesAndTolerations
0s
TestCreateClusterCustomConfig/EnsureHostedCluster/EnsureNetworkPolicies
0s
TestCreateClusterCustomConfig/EnsureHostedCluster/EnsureNetworkPolicies/EnsureComponentsHaveNeedManagementKASAccessLabel
0s
TestCreateClusterCustomConfig/EnsureHostedCluster/EnsureNetworkPolicies/EnsureLimitedEgressTrafficToManagementKAS
0s
util.go:1181: Connecting to kubernetes endpoint on: https://172.20.0.1:6443
TestCreateClusterCustomConfig/EnsureHostedCluster/EnsureNoPodsWithTooHighPriority
0s
TestCreateClusterCustomConfig/EnsureHostedCluster/EnsureNoRapidDeploymentRollouts
0s
TestCreateClusterCustomConfig/EnsureHostedCluster/EnsurePayloadArchSetCorrectly
0s
util.go:3224: Successfully waited for HostedCluster e2e-clusters-vckr6/custom-config-lrhnq to have valid Status.Payload in 0s
TestCreateClusterCustomConfig/EnsureHostedCluster/EnsurePodsWithEmptyDirPVsHaveSafeToEvictAnnotations
0s
TestCreateClusterCustomConfig/EnsureHostedCluster/EnsureReadOnlyRootFilesystem
0s
TestCreateClusterCustomConfig/EnsureHostedCluster/EnsureReadOnlyRootFilesystemTmpDirMount
0s
TestCreateClusterCustomConfig/EnsureHostedCluster/EnsureSATokenNotMountedUnlessNecessary
0s
TestCreateClusterCustomConfig/EnsureHostedCluster/EnsureValidatingAdmissionPoliciesCheckDeniedRequests
0s
util.go:2553: Checking Denied KAS Requests for ValidatingAdmissionPolicies
TestCreateClusterCustomConfig/EnsureHostedCluster/EnsureValidatingAdmissionPoliciesDontBlockStatusModifications
0s
util.go:2569: Checking ClusterOperator status modifications are allowed
TestCreateClusterCustomConfig/EnsureHostedCluster/EnsureValidatingAdmissionPoliciesExists
0s
util.go:2527: Checking that all ValidatingAdmissionPolicies are present
TestCreateClusterCustomConfig/EnsureHostedCluster/NoticePreemptionOrFailedScheduling
0s
TestCreateClusterCustomConfig/EnsureHostedCluster/ValidateMetricsAreExposed
0s
TestCreateClusterCustomConfig/Main
0s
util.go:284: Successfully waited for kubeconfig to be published for HostedCluster e2e-clusters-vckr6/custom-config-lrhnq in 0s util.go:301: Successfully waited for kubeconfig secret to have data in 0s util.go:363: Successfully waited for a successful connection to the guest API server in 0s util.go:284: Successfully waited for kubeconfig to be published for HostedCluster e2e-clusters-vckr6/custom-config-lrhnq in 0s util.go:301: Successfully waited for kubeconfig secret to have data in 0s
TestCreateClusterCustomConfig/Main/EnsureCNOOperatorConfiguration
0s
util.go:3937: Updating HostedCluster e2e-clusters-vckr6/custom-config-lrhnq with custom OVN internal subnets util.go:3956: Validating CNO conditions on HostedControlPlane util.go:3958: Successfully waited for HostedControlPlane e2e-clusters-vckr6-custom-config-lrhnq/custom-config-lrhnq to have healthy CNO conditions in 0s util.go:2949: Successfully waited for HostedCluster e2e-clusters-vckr6/custom-config-lrhnq to have valid conditions in 0s util.go:3985: Successfully waited for Network.operator.openshift.io/cluster in guest cluster to reflect the custom subnet changes in 3s util.go:4015: Successfully waited for Network.config.openshift.io/cluster in guest cluster to be available in 0s
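Note: EnsureCNOOperatorConfiguration follows an update-then-verify shape: patch the HostedCluster with custom OVN internal subnets, then wait for the CNO-related conditions and for the guest's Network config to reflect the change. A minimal sketch of the condition wait, assuming a controller-runtime client; the hypershift API import path and the condition type string are assumptions that vary by version:

```go
package e2esketch

import (
	"context"
	"time"

	hyperv1 "github.com/openshift/hypershift/api/hypershift/v1beta1" // assumed import path
	"k8s.io/apimachinery/pkg/api/meta"
	"k8s.io/apimachinery/pkg/util/wait"
	crclient "sigs.k8s.io/controller-runtime/pkg/client"
)

// waitForCondition polls a HostedCluster until the named status condition is
// True, e.g. after mutating spec fields such as the OVN internal subnets.
func waitForCondition(ctx context.Context, c crclient.Client,
	key crclient.ObjectKey, condType string) error {
	return wait.PollUntilContextTimeout(ctx, 3*time.Second, 5*time.Minute, true,
		func(ctx context.Context) (bool, error) {
			hc := &hyperv1.HostedCluster{}
			if err := c.Get(ctx, key, hc); err != nil {
				return false, nil // transient read failure; retry
			}
			return meta.IsStatusConditionTrue(hc.Status.Conditions, condType), nil
		})
}
```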
TestCreateClusterCustomConfig/Main/EnsureConsoleCapabilityDisabled
0s
TestCreateClusterCustomConfig/Main/EnsureImageRegistryCapabilityDisabled
0s
util.go:3459: Successfully waited for Waiting for service account default/default to be provisioned... in 0s eventually.go:104: Failed to get *v1.ServiceAccount: serviceaccounts "default" not found util.go:3482: Successfully waited for Waiting for service account default/test-namespace to be provisioned... in 10s
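Note: the "serviceaccounts \"default\" not found" retry above is the normal race with the service-account controller, which populates a new namespace asynchronously. A minimal sketch of that wait, assuming a client-go clientset; the helper name is illustrative:

```go
package e2esketch

import (
	"context"
	"time"

	apierrors "k8s.io/apimachinery/pkg/api/errors"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

// waitForServiceAccount blocks until the named ServiceAccount exists,
// tolerating NotFound while the service-account controller catches up.
func waitForServiceAccount(ctx context.Context, cs kubernetes.Interface, ns, name string) error {
	return wait.PollUntilContextTimeout(ctx, time.Second, 2*time.Minute, true,
		func(ctx context.Context) (bool, error) {
			_, err := cs.CoreV1().ServiceAccounts(ns).Get(ctx, name, metav1.GetOptions{})
			if apierrors.IsNotFound(err) {
				return false, nil // keep waiting
			}
			return err == nil, err
		})
}
```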
TestCreateClusterCustomConfig/Main/EnsureIngressCapabilityDisabled
0s
TestCreateClusterCustomConfig/Main/EnsureInsightsCapabilityDisabled
0s
TestCreateClusterCustomConfig/Main/EnsureNodeTuningCapabilityDisabled
0s
util.go:284: Successfully waited for kubeconfig to be published for HostedCluster e2e-clusters-vckr6/custom-config-lrhnq in 0s util.go:301: Successfully waited for kubeconfig secret to have data in 0s util.go:363: Successfully waited for a successful connection to the guest API server in 0s util.go:3597: Checking that Tuned resource type does not exist in guest cluster util.go:3610: Checking that Profile resource type does not exist in guest cluster util.go:3622: Checking that no tuned DaemonSet exists in guest cluster util.go:3631: Checking that no tuned-related ConfigMaps exist in guest cluster util.go:3656: NodeTuning capability disabled validation completed successfully
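Note: the NodeTuning-disabled checks assert that the tuned API types are not even served by the guest API server. One way to express that is a discovery lookup; the tuned.openshift.io/v1 group/version is an assumption based on the Node Tuning Operator's public API:

```go
package e2esketch

import (
	apierrors "k8s.io/apimachinery/pkg/api/errors"
	"k8s.io/client-go/discovery"
)

// resourceTypeAbsent reports whether a kind is missing from the cluster,
// e.g. resourceTypeAbsent(dc, "tuned.openshift.io/v1", "Tuned").
func resourceTypeAbsent(dc discovery.DiscoveryInterface, groupVersion, kind string) (bool, error) {
	list, err := dc.ServerResourcesForGroupVersion(groupVersion)
	if apierrors.IsNotFound(err) {
		return true, nil // group/version not served at all
	}
	if err != nil {
		return false, err
	}
	for _, r := range list.APIResources {
		if r.Kind == kind {
			return false, nil
		}
	}
	return true, nil
}
```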
TestCreateClusterCustomConfig/Main/EnsureOAuthWithIdentityProvider
0s
util.go:284: Successfully waited for kubeconfig to be published for HostedCluster e2e-clusters-vckr6/custom-config-lrhnq in 0s util.go:301: Successfully waited for kubeconfig secret to have data in 0s oauth.go:170: Found OAuth route oauth-custom-config-lrhnq.service.ci.hypershift.devcluster.openshift.com oauth.go:192: Observed OAuth route oauth-custom-config-lrhnq.service.ci.hypershift.devcluster.openshift.com to be healthy oauth.go:151: OAuth token retrieved successfully for user kubeadmin util.go:284: Successfully waited for kubeconfig to be published for HostedCluster e2e-clusters-vckr6/custom-config-lrhnq in 0s util.go:301: Successfully waited for kubeconfig secret to have data in 0s oauth.go:170: Found OAuth route oauth-custom-config-lrhnq.service.ci.hypershift.devcluster.openshift.com oauth.go:192: Observed OAuth route oauth-custom-config-lrhnq.service.ci.hypershift.devcluster.openshift.com to be healthy oauth.go:151: OAuth token retrieved successfully for user testuser
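Note: oauth.go first locates the OAuth route, then probes it before requesting tokens for each user. A minimal sketch of such a health probe, assuming the route answers 200 on /healthz (the exact endpoint and TLS handling in the real test are not shown here):

```go
package e2esketch

import (
	"context"
	"net/http"
)

// oauthRouteHealthy reports whether the OAuth route responds with 200.
// A transport error returns (false, nil) so the caller can keep retrying.
func oauthRouteHealthy(ctx context.Context, host string) (bool, error) {
	req, err := http.NewRequestWithContext(ctx, http.MethodGet, "https://"+host+"/healthz", nil)
	if err != nil {
		return false, err
	}
	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		return false, nil // not reachable yet
	}
	defer resp.Body.Close()
	return resp.StatusCode == http.StatusOK, nil
}
```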
TestCreateClusterCustomConfig/Main/EnsureOpenshiftSamplesCapabilityDisabled
0s
TestCreateClusterCustomConfig/Main/EnsureSecretEncryptedUsingKMSV2
0s
TestCreateClusterCustomConfig/ValidateHostedCluster
0s
util.go:284: Successfully waited for kubeconfig to be published for HostedCluster e2e-clusters-vckr6/custom-config-lrhnq in 1m33s
util.go:301: Successfully waited for kubeconfig secret to have data in 0s
eventually.go:104: Failed to get *v1.SelfSubjectReview: Post "https://api-custom-config-lrhnq.service.ci.hypershift.devcluster.openshift.com:443/apis/authentication.k8s.io/v1/selfsubjectreviews": dial tcp: lookup api-custom-config-lrhnq.service.ci.hypershift.devcluster.openshift.com on 172.30.0.10:53: no such host
eventually.go:104: Failed to get *v1.SelfSubjectReview: Post "https://api-custom-config-lrhnq.service.ci.hypershift.devcluster.openshift.com:443/apis/authentication.k8s.io/v1/selfsubjectreviews": dial tcp 98.87.224.188:443: i/o timeout
eventually.go:104: Failed to get *v1.SelfSubjectReview: Post "https://api-custom-config-lrhnq.service.ci.hypershift.devcluster.openshift.com:443/apis/authentication.k8s.io/v1/selfsubjectreviews": dial tcp 100.50.71.25:443: i/o timeout
eventually.go:104: Failed to get *v1.SelfSubjectReview: Post "https://api-custom-config-lrhnq.service.ci.hypershift.devcluster.openshift.com:443/apis/authentication.k8s.io/v1/selfsubjectreviews": dial tcp 34.196.235.16:443: connect: connection refused
eventually.go:104: Failed to get *v1.SelfSubjectReview: Post "https://api-custom-config-lrhnq.service.ci.hypershift.devcluster.openshift.com:443/apis/authentication.k8s.io/v1/selfsubjectreviews": dial tcp 100.50.71.25:443: connect: connection refused
eventually.go:104: Failed to get *v1.SelfSubjectReview: Post "https://api-custom-config-lrhnq.service.ci.hypershift.devcluster.openshift.com:443/apis/authentication.k8s.io/v1/selfsubjectreviews": dial tcp 98.87.224.188:443: connect: connection refused
eventually.go:104: Failed to get *v1.SelfSubjectReview: Post "https://api-custom-config-lrhnq.service.ci.hypershift.devcluster.openshift.com:443/apis/authentication.k8s.io/v1/selfsubjectreviews": dial tcp 34.196.235.16:443: connect: connection refused
eventually.go:104: Failed to get *v1.SelfSubjectReview: Post "https://api-custom-config-lrhnq.service.ci.hypershift.devcluster.openshift.com:443/apis/authentication.k8s.io/v1/selfsubjectreviews": dial tcp 98.87.224.188:443: connect: connection refused
eventually.go:104: Failed to get *v1.SelfSubjectReview: Post "https://api-custom-config-lrhnq.service.ci.hypershift.devcluster.openshift.com:443/apis/authentication.k8s.io/v1/selfsubjectreviews": dial tcp 34.196.235.16:443: connect: connection refused
eventually.go:104: Failed to get *v1.SelfSubjectReview: Post "https://api-custom-config-lrhnq.service.ci.hypershift.devcluster.openshift.com:443/apis/authentication.k8s.io/v1/selfsubjectreviews": dial tcp 98.87.224.188:443: connect: connection refused
eventually.go:104: Failed to get *v1.SelfSubjectReview: Post "https://api-custom-config-lrhnq.service.ci.hypershift.devcluster.openshift.com:443/apis/authentication.k8s.io/v1/selfsubjectreviews": dial tcp 34.196.235.16:443: connect: connection refused
eventually.go:104: Failed to get *v1.SelfSubjectReview: Post "https://api-custom-config-lrhnq.service.ci.hypershift.devcluster.openshift.com:443/apis/authentication.k8s.io/v1/selfsubjectreviews": dial tcp 98.87.224.188:443: connect: connection refused
eventually.go:104: Failed to get *v1.SelfSubjectReview: Post "https://api-custom-config-lrhnq.service.ci.hypershift.devcluster.openshift.com:443/apis/authentication.k8s.io/v1/selfsubjectreviews": dial tcp 34.196.235.16:443: connect: connection refused
eventually.go:104: Failed to get *v1.SelfSubjectReview: Post "https://api-custom-config-lrhnq.service.ci.hypershift.devcluster.openshift.com:443/apis/authentication.k8s.io/v1/selfsubjectreviews": dial tcp 100.50.71.25:443: connect: connection refused
util.go:363: Successfully waited for a successful connection to the guest API server in 2m42.05s
util.go:565: Successfully waited for 2 nodes to become ready in 7m21s
util.go:598: Successfully waited for HostedCluster e2e-clusters-vckr6/custom-config-lrhnq to rollout in 7m24s
util.go:2949: Successfully waited for HostedCluster e2e-clusters-vckr6/custom-config-lrhnq to have valid conditions in 0s
TestCreateClusterCustomConfig/ValidateHostedCluster/EnsureGuestWebhooksValidated
0s
TestCreateClusterCustomConfig/ValidateHostedCluster/EnsureNoCrashingPods
0s
util.go:284: Successfully waited for kubeconfig to be published for HostedCluster e2e-clusters-vckr6/custom-config-lrhnq in 0s util.go:301: Successfully waited for kubeconfig secret to have data in 0s
TestCreateClusterCustomConfig/ValidateHostedCluster/EnsureNodeCommunication
0s
util.go:284: Successfully waited for kubeconfig to be published for HostedCluster e2e-clusters-vckr6/custom-config-lrhnq in 0s util.go:301: Successfully waited for kubeconfig secret to have data in 0s
TestCreateClusterCustomConfig/ValidateHostedCluster/EnsureNodeCountMatchesNodePoolReplicas
0s
util.go:565: Successfully waited for 2 nodes to become ready for NodePool e2e-clusters-vckr6/custom-config-lrhnq-us-east-1b in 25ms
TestCreateClusterCustomConfig/ValidateHostedCluster/EnsureOAPIMountsTrustBundle
0s
TestCreateClusterCustomConfig/ValidateHostedCluster/ValidateConfigurationStatus
0s
util.go:4095: Successfully validated configuration authentication status consistency across HCP, HC, and guest cluster
TestCreateClusterPrivate
25m4.25s
hypershift_framework.go:430: Successfully created hostedcluster e2e-clusters-w2csv/private-68cb2 in 1m0s journals.go:245: Successfully copied machine journals to /logs/artifacts/TestCreateClusterPrivate/machine-journals fixture.go:341: SUCCESS: found no remaining guest resources hypershift_framework.go:491: Destroyed cluster. Namespace: e2e-clusters-w2csv, name: private-68cb2 hypershift_framework.go:446: archiving /logs/artifacts/TestCreateClusterPrivate/hostedcluster-private-68cb2 to /logs/artifacts/TestCreateClusterPrivate/hostedcluster.tar.gz util.go:284: Successfully waited for kubeconfig to be published for HostedCluster e2e-clusters-w2csv/private-68cb2 in 1m48s util.go:301: Successfully waited for kubeconfig secret to have data in 0s util.go:695: Successfully waited for NodePools for HostedCluster e2e-clusters-w2csv/private-68cb2 to have all of their desired nodes in 9m33s util.go:598: Successfully waited for HostedCluster e2e-clusters-w2csv/private-68cb2 to rollout in 5m0s util.go:2949: Successfully waited for HostedCluster e2e-clusters-w2csv/private-68cb2 to have valid conditions in 0s
TestCreateClusterPrivate/EnsureHostedCluster
2.6s
TestCreateClusterPrivate/EnsureHostedCluster/ValidateMetricsAreExposed
90ms
TestCreateClusterPrivateWithRouteKAS
0s
hypershift_framework.go:430: Successfully created hostedcluster e2e-clusters-lngc7/private-tbh7c in 34s journals.go:245: Successfully copied machine journals to /logs/artifacts/TestCreateClusterPrivateWithRouteKAS/machine-journals util.go:284: Successfully waited for kubeconfig to be published for HostedCluster e2e-clusters-lngc7/private-tbh7c in 1m30s util.go:301: Successfully waited for kubeconfig secret to have data in 0s util.go:695: Successfully waited for NodePools for HostedCluster e2e-clusters-lngc7/private-tbh7c to have all of their desired nodes in 10m24s util.go:598: Successfully waited for HostedCluster e2e-clusters-lngc7/private-tbh7c to rollout in 8m54s util.go:2949: Successfully waited for HostedCluster e2e-clusters-lngc7/private-tbh7c to have valid conditions in 0s util.go:284: Successfully waited for kubeconfig to be published for HostedCluster e2e-clusters-lngc7/private-tbh7c in 0s util.go:301: Successfully waited for kubeconfig secret to have data in 0s util.go:284: Successfully waited for kubeconfig to be published for HostedCluster e2e-clusters-lngc7/private-tbh7c in 0s util.go:301: Successfully waited for kubeconfig secret to have data in 0s create_cluster_test.go:2909: Found guest kubeconfig host before switching endpoint access: https://api-private-tbh7c.service.ci.hypershift.devcluster.openshift.com:443 util.go:420: Waiting for guest kubeconfig host to resolve to public address util.go:426: failed to resolve guest kubeconfig host: lookup api-private-tbh7c.service.ci.hypershift.devcluster.openshift.com on 172.30.0.10:53: no such host util.go:426: failed to resolve guest kubeconfig host: lookup api-private-tbh7c.service.ci.hypershift.devcluster.openshift.com on 172.30.0.10:53: no such host util.go:426: failed to resolve guest kubeconfig host: lookup api-private-tbh7c.service.ci.hypershift.devcluster.openshift.com on 172.30.0.10:53: no such host util.go:426: failed to resolve guest kubeconfig host: lookup api-private-tbh7c.service.ci.hypershift.devcluster.openshift.com on 172.30.0.10:53: no such host util.go:426: failed to resolve guest kubeconfig host: lookup api-private-tbh7c.service.ci.hypershift.devcluster.openshift.com on 172.30.0.10:53: no such host util.go:426: failed to resolve guest kubeconfig host: lookup api-private-tbh7c.service.ci.hypershift.devcluster.openshift.com on 172.30.0.10:53: no such host util.go:426: failed to resolve guest kubeconfig host: lookup api-private-tbh7c.service.ci.hypershift.devcluster.openshift.com on 172.30.0.10:53: no such host util.go:437: kubeconfig host now resolves to public address util.go:284: Successfully waited for kubeconfig to be published for HostedCluster e2e-clusters-lngc7/private-tbh7c in 0s util.go:301: Successfully waited for kubeconfig secret to have data in 0s create_cluster_test.go:2909: Found guest kubeconfig host before switching endpoint access: https://api-private-tbh7c.service.ci.hypershift.devcluster.openshift.com:443 util.go:420: Waiting for guest kubeconfig host to resolve to private address util.go:432: kubeconfig host now resolves to private address util.go:3224: Successfully waited for HostedCluster e2e-clusters-lngc7/private-tbh7c to have valid Status.Payload in 0s util.go:1181: Connecting to kubernetes endpoint on: https://172.20.0.1:6443
TestCreateClusterPrivateWithRouteKAS/EnsureHostedCluster
0s
TestCreateClusterPrivateWithRouteKAS/EnsureHostedCluster/EnsureAllContainersHavePullPolicyIfNotPresent
0s
TestCreateClusterPrivateWithRouteKAS/EnsureHostedCluster/EnsureAllContainersHaveTerminationMessagePolicyFallbackToLogsOnError
0s
TestCreateClusterPrivateWithRouteKAS/EnsureHostedCluster/EnsureAllRoutesUseHCPRouter
0s
TestCreateClusterPrivateWithRouteKAS/EnsureHostedCluster/EnsureHCPContainersHaveResourceRequests
0s
TestCreateClusterPrivateWithRouteKAS/EnsureHostedCluster/EnsureHCPPodsAffinitiesAndTolerations
0s
TestCreateClusterPrivateWithRouteKAS/EnsureHostedCluster/EnsureNetworkPolicies
0s
TestCreateClusterPrivateWithRouteKAS/EnsureHostedCluster/EnsureNetworkPolicies/EnsureComponentsHaveNeedManagementKASAccessLabel
0s
TestCreateClusterPrivateWithRouteKAS/EnsureHostedCluster/EnsureNetworkPolicies/EnsureLimitedEgressTrafficToManagementKAS
0s
util.go:1181: Connecting to kubernetes endpoint on: https://172.20.0.1:6443
TestCreateClusterPrivateWithRouteKAS/EnsureHostedCluster/EnsureNoPodsWithTooHighPriority
0s
TestCreateClusterPrivateWithRouteKAS/EnsureHostedCluster/EnsureNoRapidDeploymentRollouts
0s
TestCreateClusterPrivateWithRouteKAS/EnsureHostedCluster/EnsurePayloadArchSetCorrectly
0s
util.go:3224: Successfully waited for HostedCluster e2e-clusters-lngc7/private-tbh7c to have valid Status.Payload in 0s
TestCreateClusterPrivateWithRouteKAS/EnsureHostedCluster/EnsurePodsWithEmptyDirPVsHaveSafeToEvictAnnotations
0s
TestCreateClusterPrivateWithRouteKAS/EnsureHostedCluster/EnsureReadOnlyRootFilesystem
0s
TestCreateClusterPrivateWithRouteKAS/EnsureHostedCluster/EnsureReadOnlyRootFilesystemTmpDirMount
0s
TestCreateClusterPrivateWithRouteKAS/EnsureHostedCluster/EnsureSATokenNotMountedUnlessNecessary
0s
TestCreateClusterPrivateWithRouteKAS/EnsureHostedCluster/NoticePreemptionOrFailedScheduling
0s
TestCreateClusterPrivateWithRouteKAS/EnsureHostedCluster/ValidateMetricsAreExposed
0s
TestCreateClusterPrivateWithRouteKAS/Main
0s
TestCreateClusterPrivateWithRouteKAS/Main/SwitchFromPrivateToPublic
0s
util.go:284: Successfully waited for kubeconfig to be published for HostedCluster e2e-clusters-lngc7/private-tbh7c in 0s util.go:301: Successfully waited for kubeconfig secret to have data in 0s create_cluster_test.go:2909: Found guest kubeconfig host before switching endpoint access: https://api-private-tbh7c.service.ci.hypershift.devcluster.openshift.com:443 util.go:420: Waiting for guest kubeconfig host to resolve to public address util.go:426: failed to resolve guest kubeconfig host: lookup api-private-tbh7c.service.ci.hypershift.devcluster.openshift.com on 172.30.0.10:53: no such host util.go:426: failed to resolve guest kubeconfig host: lookup api-private-tbh7c.service.ci.hypershift.devcluster.openshift.com on 172.30.0.10:53: no such host util.go:426: failed to resolve guest kubeconfig host: lookup api-private-tbh7c.service.ci.hypershift.devcluster.openshift.com on 172.30.0.10:53: no such host util.go:426: failed to resolve guest kubeconfig host: lookup api-private-tbh7c.service.ci.hypershift.devcluster.openshift.com on 172.30.0.10:53: no such host util.go:426: failed to resolve guest kubeconfig host: lookup api-private-tbh7c.service.ci.hypershift.devcluster.openshift.com on 172.30.0.10:53: no such host util.go:426: failed to resolve guest kubeconfig host: lookup api-private-tbh7c.service.ci.hypershift.devcluster.openshift.com on 172.30.0.10:53: no such host util.go:426: failed to resolve guest kubeconfig host: lookup api-private-tbh7c.service.ci.hypershift.devcluster.openshift.com on 172.30.0.10:53: no such host util.go:437: kubeconfig host now resolves to public address
TestCreateClusterPrivateWithRouteKAS/Main/SwitchFromPublicToPrivate
0s
util.go:284: Successfully waited for kubeconfig to be published for HostedCluster e2e-clusters-lngc7/private-tbh7c in 0s util.go:301: Successfully waited for kubeconfig secret to have data in 0s create_cluster_test.go:2909: Found guest kubeconfig host before switching endpoint access: https://api-private-tbh7c.service.ci.hypershift.devcluster.openshift.com:443 util.go:420: Waiting for guest kubeconfig host to resolve to private address util.go:432: kubeconfig host now resolves to private address
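Note: the public/private switch is verified purely through DNS (util.go:420-437): the test polls until the kubeconfig host resolves, then classifies the answers. A minimal sketch of the classification step using only the standard library:

```go
package e2esketch

import (
	"net"
	"net/netip"
)

// hostResolvesPrivate reports whether every resolved address for host is a
// private (RFC 1918 / RFC 4193) address. A lookup error such as
// "no such host" is returned to the caller, which is expected to retry
// while DNS records propagate.
func hostResolvesPrivate(host string) (bool, error) {
	addrs, err := net.LookupHost(host)
	if err != nil {
		return false, err
	}
	for _, a := range addrs {
		ip, err := netip.ParseAddr(a)
		if err != nil {
			return false, err
		}
		if !ip.IsPrivate() {
			return false, nil
		}
	}
	return len(addrs) > 0, nil
}
```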
TestCreateClusterPrivateWithRouteKAS/ValidateHostedCluster
0s
util.go:284: Successfully waited for kubeconfig to be published for HostedCluster e2e-clusters-lngc7/private-tbh7c in 1m30s util.go:301: Successfully waited for kubeconfig secret to have data in 0s util.go:695: Successfully waited for NodePools for HostedCluster e2e-clusters-lngc7/private-tbh7c to have all of their desired nodes in 10m24s util.go:598: Successfully waited for HostedCluster e2e-clusters-lngc7/private-tbh7c to rollout in 8m54s util.go:2949: Successfully waited for HostedCluster e2e-clusters-lngc7/private-tbh7c to have valid conditions in 0s
TestCreateClusterPrivateWithRouteKAS/ValidateHostedCluster/EnsureNoCrashingPods
0s
util.go:284: Successfully waited for kubeconfig to be published for HostedCluster e2e-clusters-lngc7/private-tbh7c in 0s util.go:301: Successfully waited for kubeconfig secret to have data in 0s
TestCreateClusterPrivateWithRouteKAS/ValidateHostedCluster/EnsureOAPIMountsTrustBundle
0s
TestCreateClusterProxy
0s
hypershift_framework.go:430: Successfully created hostedcluster e2e-clusters-8q8kw/proxy-htmqk in 51s util.go:284: Successfully waited for kubeconfig to be published for HostedCluster e2e-clusters-8q8kw/proxy-htmqk in 1m36s util.go:301: Successfully waited for kubeconfig secret to have data in 0s eventually.go:104: Failed to get *v1.SelfSubjectReview: Post "https://api-proxy-htmqk.service.ci.hypershift.devcluster.openshift.com:443/apis/authentication.k8s.io/v1/selfsubjectreviews": dial tcp: lookup api-proxy-htmqk.service.ci.hypershift.devcluster.openshift.com on 172.30.0.10:53: no such host eventually.go:104: Failed to get *v1.SelfSubjectReview: Post "https://api-proxy-htmqk.service.ci.hypershift.devcluster.openshift.com:443/apis/authentication.k8s.io/v1/selfsubjectreviews": dial tcp 44.194.171.117:443: i/o timeout eventually.go:104: Failed to get *v1.SelfSubjectReview: Post "https://api-proxy-htmqk.service.ci.hypershift.devcluster.openshift.com:443/apis/authentication.k8s.io/v1/selfsubjectreviews": dial tcp 44.194.171.117:443: connect: connection refused eventually.go:104: Failed to get *v1.SelfSubjectReview: Post "https://api-proxy-htmqk.service.ci.hypershift.devcluster.openshift.com:443/apis/authentication.k8s.io/v1/selfsubjectreviews": dial tcp 34.192.250.62:443: connect: connection refused eventually.go:104: Failed to get *v1.SelfSubjectReview: Post "https://api-proxy-htmqk.service.ci.hypershift.devcluster.openshift.com:443/apis/authentication.k8s.io/v1/selfsubjectreviews": dial tcp 44.194.171.117:443: connect: connection refused util.go:363: Successfully waited for a successful connection to the guest API server in 2m9.025s util.go:565: Successfully waited for 2 nodes to become ready in 7m24s util.go:598: Successfully waited for HostedCluster e2e-clusters-8q8kw/proxy-htmqk to rollout in 3m51s util.go:2949: Successfully waited for HostedCluster e2e-clusters-8q8kw/proxy-htmqk to have valid conditions in 0s util.go:565: Successfully waited for 2 nodes to become ready for NodePool e2e-clusters-8q8kw/proxy-htmqk-us-east-1a in 25ms util.go:284: Successfully waited for kubeconfig to be published for HostedCluster e2e-clusters-8q8kw/proxy-htmqk in 0s util.go:301: Successfully waited for kubeconfig secret to have data in 0s util.go:284: Successfully waited for kubeconfig to be published for HostedCluster e2e-clusters-8q8kw/proxy-htmqk in 0s util.go:301: Successfully waited for kubeconfig secret to have data in 0s util.go:4095: Successfully validated configuration authentication status consistency across HCP, HC, and guest cluster util.go:284: Successfully waited for kubeconfig to be published for HostedCluster e2e-clusters-8q8kw/proxy-htmqk in 0s util.go:301: Successfully waited for kubeconfig secret to have data in 0s util.go:363: Successfully waited for a successful connection to the guest API server in 0s util.go:3224: Successfully waited for HostedCluster e2e-clusters-8q8kw/proxy-htmqk to have valid Status.Payload in 0s util.go:1181: Connecting to kubernetes endpoint on: https://172.20.0.1:6443 util.go:2527: Checking that all ValidatingAdmissionPolicies are present util.go:2553: Checking Denied KAS Requests for ValidatingAdmissionPolicies util.go:2569: Checking ClusterOperator status modifications are allowed
TestCreateClusterProxy/EnsureHostedCluster
0s
util.go:284: Successfully waited for kubeconfig to be published for HostedCluster e2e-clusters-8q8kw/proxy-htmqk in 0s util.go:301: Successfully waited for kubeconfig secret to have data in 0s util.go:363: Successfully waited for a successful connection to the guest API server in 0s
TestCreateClusterProxy/EnsureHostedCluster/EnsureAllContainersHavePullPolicyIfNotPresent
0s
TestCreateClusterProxy/EnsureHostedCluster/EnsureAllContainersHaveTerminationMessagePolicyFallbackToLogsOnError
0s
TestCreateClusterProxy/EnsureHostedCluster/EnsureAllRoutesUseHCPRouter
0s
TestCreateClusterProxy/EnsureHostedCluster/EnsureHCPContainersHaveResourceRequests
0s
TestCreateClusterProxy/EnsureHostedCluster/EnsureHCPPodsAffinitiesAndTolerations
0s
TestCreateClusterProxy/EnsureHostedCluster/EnsureNetworkPolicies
0s
TestCreateClusterProxy/EnsureHostedCluster/EnsureNetworkPolicies/EnsureComponentsHaveNeedManagementKASAccessLabel
0s
TestCreateClusterProxy/EnsureHostedCluster/EnsureNetworkPolicies/EnsureLimitedEgressTrafficToManagementKAS
0s
util.go:1181: Connecting to kubernetes endpoint on: https://172.20.0.1:6443
TestCreateClusterProxy/EnsureHostedCluster/EnsureNoPodsWithTooHighPriority
0s
TestCreateClusterProxy/EnsureHostedCluster/EnsureNoRapidDeploymentRollouts
0s
TestCreateClusterProxy/EnsureHostedCluster/EnsurePayloadArchSetCorrectly
0s
util.go:3224: Successfully waited for HostedCluster e2e-clusters-8q8kw/proxy-htmqk to have valid Status.Payload in 0s
TestCreateClusterProxy/EnsureHostedCluster/EnsurePodsWithEmptyDirPVsHaveSafeToEvictAnnotations
0s
TestCreateClusterProxy/EnsureHostedCluster/EnsureReadOnlyRootFilesystem
0s
TestCreateClusterProxy/EnsureHostedCluster/EnsureReadOnlyRootFilesystemTmpDirMount
0s
TestCreateClusterProxy/EnsureHostedCluster/EnsureSATokenNotMountedUnlessNecessary
0s
TestCreateClusterProxy/EnsureHostedCluster/EnsureValidatingAdmissionPoliciesCheckDeniedRequests
0s
util.go:2553: Checking Denied KAS Requests for ValidatingAdmissionPolicies
TestCreateClusterProxy/EnsureHostedCluster/EnsureValidatingAdmissionPoliciesDontBlockStatusModifications
0s
util.go:2569: Checking ClusterOperator status modifications are allowed
TestCreateClusterProxy/EnsureHostedCluster/EnsureValidatingAdmissionPoliciesExists
0s
util.go:2527: Checking that all ValidatingAdmissionPolicies are present
TestCreateClusterProxy/EnsureHostedCluster/NoticePreemptionOrFailedScheduling
0s
TestCreateClusterProxy/EnsureHostedCluster/ValidateMetricsAreExposed
0s
TestCreateClusterProxy/ValidateHostedCluster
0s
util.go:284: Successfully waited for kubeconfig to be published for HostedCluster e2e-clusters-8q8kw/proxy-htmqk in 1m36s util.go:301: Successfully waited for kubeconfig secret to have data in 0s eventually.go:104: Failed to get *v1.SelfSubjectReview: Post "https://api-proxy-htmqk.service.ci.hypershift.devcluster.openshift.com:443/apis/authentication.k8s.io/v1/selfsubjectreviews": dial tcp: lookup api-proxy-htmqk.service.ci.hypershift.devcluster.openshift.com on 172.30.0.10:53: no such host eventually.go:104: Failed to get *v1.SelfSubjectReview: Post "https://api-proxy-htmqk.service.ci.hypershift.devcluster.openshift.com:443/apis/authentication.k8s.io/v1/selfsubjectreviews": dial tcp 44.194.171.117:443: i/o timeout eventually.go:104: Failed to get *v1.SelfSubjectReview: Post "https://api-proxy-htmqk.service.ci.hypershift.devcluster.openshift.com:443/apis/authentication.k8s.io/v1/selfsubjectreviews": dial tcp 44.194.171.117:443: connect: connection refused eventually.go:104: Failed to get *v1.SelfSubjectReview: Post "https://api-proxy-htmqk.service.ci.hypershift.devcluster.openshift.com:443/apis/authentication.k8s.io/v1/selfsubjectreviews": dial tcp 34.192.250.62:443: connect: connection refused eventually.go:104: Failed to get *v1.SelfSubjectReview: Post "https://api-proxy-htmqk.service.ci.hypershift.devcluster.openshift.com:443/apis/authentication.k8s.io/v1/selfsubjectreviews": dial tcp 44.194.171.117:443: connect: connection refused util.go:363: Successfully waited for a successful connection to the guest API server in 2m9.025s util.go:565: Successfully waited for 2 nodes to become ready in 7m24s util.go:598: Successfully waited for HostedCluster e2e-clusters-8q8kw/proxy-htmqk to rollout in 3m51s util.go:2949: Successfully waited for HostedCluster e2e-clusters-8q8kw/proxy-htmqk to have valid conditions in 0s
TestCreateClusterProxy/ValidateHostedCluster/EnsureGuestWebhooksValidated
0s
TestCreateClusterProxy/ValidateHostedCluster/EnsureNoCrashingPods
0s
util.go:284: Successfully waited for kubeconfig to be published for HostedCluster e2e-clusters-8q8kw/proxy-htmqk in 0s util.go:301: Successfully waited for kubeconfig secret to have data in 0s
TestCreateClusterProxy/ValidateHostedCluster/EnsureNodeCommunication
0s
util.go:284: Successfully waited for kubeconfig to be published for HostedCluster e2e-clusters-8q8kw/proxy-htmqk in 0s util.go:301: Successfully waited for kubeconfig secret to have data in 0s
TestCreateClusterProxy/ValidateHostedCluster/EnsureNodeCountMatchesNodePoolReplicas
0s
util.go:565: Successfully waited for 2 nodes to become ready for NodePool e2e-clusters-8q8kw/proxy-htmqk-us-east-1a in 25ms
TestCreateClusterProxy/ValidateHostedCluster/EnsureOAPIMountsTrustBundle
0s
TestCreateClusterProxy/ValidateHostedCluster/ValidateConfigurationStatus
0s
util.go:4095: Successfully validated configuration authentication status consistency across HCP, HC, and guest cluster
TestCreateClusterRequestServingIsolation
0s
requestserving.go:105: Created request serving nodepool clusters/f22fe2c12880412a01ae-mgmt-reqserving-tpt6x
requestserving.go:105: Created request serving nodepool clusters/f22fe2c12880412a01ae-mgmt-reqserving-8kzx8
requestserving.go:113: Created non request serving nodepool clusters/f22fe2c12880412a01ae-mgmt-non-reqserving-xfwkc
requestserving.go:113: Created non request serving nodepool clusters/f22fe2c12880412a01ae-mgmt-non-reqserving-bqfgl
requestserving.go:113: Created non request serving nodepool clusters/f22fe2c12880412a01ae-mgmt-non-reqserving-cr4qv
util.go:565: Successfully waited for 1 nodes to become ready for NodePool clusters/f22fe2c12880412a01ae-mgmt-reqserving-tpt6x in 3m12s
util.go:565: Successfully waited for 1 nodes to become ready for NodePool clusters/f22fe2c12880412a01ae-mgmt-reqserving-8kzx8 in 51s
util.go:565: Successfully waited for 1 nodes to become ready for NodePool clusters/f22fe2c12880412a01ae-mgmt-non-reqserving-xfwkc in 30s
util.go:565: Successfully waited for 1 nodes to become ready for NodePool clusters/f22fe2c12880412a01ae-mgmt-non-reqserving-bqfgl in 21s
util.go:565: Successfully waited for 1 nodes to become ready for NodePool clusters/f22fe2c12880412a01ae-mgmt-non-reqserving-cr4qv in 3s
create_cluster_test.go:2670: Sufficient zones available for InfrastructureAvailabilityPolicy HighlyAvailable
hypershift_framework.go:430: Successfully created hostedcluster e2e-clusters-2fvtt/request-serving-isolation-vfb5z in 15s
journals.go:245: Successfully copied machine journals to /logs/artifacts/TestCreateClusterRequestServingIsolation/machine-journals
util.go:284: Successfully waited for kubeconfig to be published for HostedCluster e2e-clusters-2fvtt/request-serving-isolation-vfb5z in 1m24s
util.go:301: Successfully waited for kubeconfig secret to have data in 0s
eventually.go:104: Failed to get *v1.SelfSubjectReview: Post "https://api-request-serving-isolation-vfb5z.service.ci.hypershift.devcluster.openshift.com:443/apis/authentication.k8s.io/v1/selfsubjectreviews": dial tcp: lookup api-request-serving-isolation-vfb5z.service.ci.hypershift.devcluster.openshift.com on 172.30.0.10:53: no such host
eventually.go:104: Failed to get *v1.SelfSubjectReview: Post "https://api-request-serving-isolation-vfb5z.service.ci.hypershift.devcluster.openshift.com:443/apis/authentication.k8s.io/v1/selfsubjectreviews": dial tcp 98.80.33.228:443: i/o timeout
util.go:363: Successfully waited for a successful connection to the guest API server in 1m29.025s
util.go:565: Successfully waited for 3 nodes to become ready in 8m24s
util.go:598: Successfully waited for HostedCluster e2e-clusters-2fvtt/request-serving-isolation-vfb5z to rollout in 4m54s
util.go:2949: Successfully waited for HostedCluster e2e-clusters-2fvtt/request-serving-isolation-vfb5z to have valid conditions in 57s
util.go:565: Successfully waited for 1 nodes to become ready for NodePool e2e-clusters-2fvtt/request-serving-isolation-vfb5z-us-east-1a in 25ms
util.go:565: Successfully waited for 1 nodes to become ready for NodePool e2e-clusters-2fvtt/request-serving-isolation-vfb5z-us-east-1b in 0s
util.go:565: Successfully waited for 1 nodes to become ready for NodePool e2e-clusters-2fvtt/request-serving-isolation-vfb5z-us-east-1c in 0s
util.go:284: Successfully waited for kubeconfig to be published for HostedCluster e2e-clusters-2fvtt/request-serving-isolation-vfb5z in 0s
util.go:301: Successfully waited for kubeconfig secret to have data in 0s
util.go:284: Successfully waited for kubeconfig to be published for HostedCluster e2e-clusters-2fvtt/request-serving-isolation-vfb5z in 0s
util.go:301: Successfully waited for kubeconfig secret to have data in 0s
util.go:4095: Successfully validated configuration authentication status consistency across HCP, HC, and guest cluster
util.go:284: Successfully waited for kubeconfig to be published for HostedCluster e2e-clusters-2fvtt/request-serving-isolation-vfb5z in 0s
util.go:301: Successfully waited for kubeconfig secret to have data in 0s
util.go:363: Successfully waited for a successful connection to the guest API server in 0s
util.go:284: Successfully waited for kubeconfig to be published for HostedCluster e2e-clusters-2fvtt/request-serving-isolation-vfb5z in 0s
util.go:301: Successfully waited for kubeconfig secret to have data in 0s
util.go:363: Successfully waited for a successful connection to the guest API server in 0s
util.go:3224: Successfully waited for HostedCluster e2e-clusters-2fvtt/request-serving-isolation-vfb5z to have valid Status.Payload in 0s
util.go:822: error: non-fatal, observed FailedScheduling or Preempted event: 0/8 nodes are available: 2 node(s) didn't match pod anti-affinity rules, 6 node(s) didn't match Pod's node affinity/selector. preemption: not eligible due to preemptionPolicy=Never.
util.go:822: error: non-fatal, observed FailedScheduling or Preempted event: 0/8 nodes are available: 2 node(s) didn't match pod anti-affinity rules, 6 node(s) didn't match Pod's node affinity/selector. preemption: not eligible due to preemptionPolicy=Never.
util.go:822: error: non-fatal, observed FailedScheduling or Preempted event: 0/8 nodes are available: 2 node(s) didn't match pod anti-affinity rules, 6 node(s) didn't match Pod's node affinity/selector. preemption: not eligible due to preemptionPolicy=Never.
util.go:822: error: non-fatal, observed FailedScheduling or Preempted event: 0/8 nodes are available: 2 node(s) had untolerated taint {hypershift.openshift.io/request-serving-component: true}, 3 node(s) didn't match Pod's node affinity/selector, 3 node(s) didn't match pod anti-affinity rules. preemption: not eligible due to preemptionPolicy=Never.
util.go:822: error: non-fatal, observed FailedScheduling or Preempted event: 0/8 nodes are available: 2 node(s) had untolerated taint {hypershift.openshift.io/request-serving-component: true}, 3 node(s) didn't match Pod's node affinity/selector, 3 node(s) didn't match pod anti-affinity rules. preemption: not eligible due to preemptionPolicy=Never.
util.go:822: error: non-fatal, observed FailedScheduling or Preempted event: 0/8 nodes are available: 2 node(s) didn't match pod anti-affinity rules, 6 node(s) didn't match Pod's node affinity/selector. preemption: not eligible due to preemptionPolicy=Never.
util.go:822: error: non-fatal, observed FailedScheduling or Preempted event: 0/8 nodes are available: 2 node(s) didn't match pod anti-affinity rules, 6 node(s) didn't match Pod's node affinity/selector. preemption: not eligible due to preemptionPolicy=Never.
util.go:822: error: non-fatal, observed FailedScheduling or Preempted event: 0/8 nodes are available: 2 node(s) didn't match pod anti-affinity rules, 6 node(s) didn't match Pod's node affinity/selector. preemption: not eligible due to preemptionPolicy=Never.
util.go:822: error: non-fatal, observed FailedScheduling or Preempted event: 0/8 nodes are available: 2 node(s) had untolerated taint {hypershift.openshift.io/request-serving-component: true}, 3 node(s) didn't match Pod's node affinity/selector, 3 node(s) didn't match pod anti-affinity rules. preemption: not eligible due to preemptionPolicy=Never.
util.go:822: error: non-fatal, observed FailedScheduling or Preempted event: 0/8 nodes are available: 2 node(s) had untolerated taint {hypershift.openshift.io/request-serving-component: true}, 3 node(s) didn't match Pod's node affinity/selector, 3 node(s) didn't match pod anti-affinity rules. preemption: not eligible due to preemptionPolicy=Never.
util.go:822: error: non-fatal, observed FailedScheduling or Preempted event: 0/8 nodes are available: 2 node(s) had untolerated taint {hypershift.openshift.io/request-serving-component: true}, 3 node(s) didn't match Pod's node affinity/selector, 3 node(s) didn't match pod anti-affinity rules. preemption: not eligible due to preemptionPolicy=Never.
util.go:1181: Connecting to kubernetes endpoint on: https://172.20.0.1:6443
util.go:2527: Checking that all ValidatingAdmissionPolicies are present
util.go:2553: Checking Denied KAS Requests for ValidatingAdmissionPolicies
util.go:2569: Checking ClusterOperator status modifications are allowed
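Note: the untolerated-taint messages above show the isolation mechanism working: request-serving nodes are tainted so only request-serving pods land on them. A minimal sketch of the toleration such a pod would carry; the taint key comes straight from the events, while the NoSchedule effect is an assumption:

```go
package e2esketch

import corev1 "k8s.io/api/core/v1"

// requestServingToleration returns the toleration matching the taint seen in
// the FailedScheduling events: {hypershift.openshift.io/request-serving-component: true}.
func requestServingToleration() corev1.Toleration {
	return corev1.Toleration{
		Key:      "hypershift.openshift.io/request-serving-component",
		Operator: corev1.TolerationOpEqual,
		Value:    "true",
		Effect:   corev1.TaintEffectNoSchedule, // assumed effect
	}
}
```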
TestCreateClusterRequestServingIsolation/EnsureHostedCluster
0s
util.go:284: Successfully waited for kubeconfig to be published for HostedCluster e2e-clusters-2fvtt/request-serving-isolation-vfb5z in 0s util.go:301: Successfully waited for kubeconfig secret to have data in 0s util.go:363: Successfully waited for a successful connection to the guest API server in 0s
TestCreateClusterRequestServingIsolation/EnsureHostedCluster/EnsureAllContainersHavePullPolicyIfNotPresent
0s
TestCreateClusterRequestServingIsolation/EnsureHostedCluster/EnsureAllContainersHaveTerminationMessagePolicyFallbackToLogsOnError
0s
TestCreateClusterRequestServingIsolation/EnsureHostedCluster/EnsureAllRoutesUseHCPRouter
0s
TestCreateClusterRequestServingIsolation/EnsureHostedCluster/EnsureHCPContainersHaveResourceRequests
0s
TestCreateClusterRequestServingIsolation/EnsureHostedCluster/EnsureHCPPodsAffinitiesAndTolerations
0s
TestCreateClusterRequestServingIsolation/EnsureHostedCluster/EnsureNetworkPolicies
0s
TestCreateClusterRequestServingIsolation/EnsureHostedCluster/EnsureNetworkPolicies/EnsureComponentsHaveNeedManagementKASAccessLabel
0s
TestCreateClusterRequestServingIsolation/EnsureHostedCluster/EnsureNetworkPolicies/EnsureLimitedEgressTrafficToManagementKAS
0s
util.go:1181: Connecting to kubernetes endpoint on: https://172.20.0.1:6443
TestCreateClusterRequestServingIsolation/EnsureHostedCluster/EnsureNoPodsWithTooHighPriority
0s
TestCreateClusterRequestServingIsolation/EnsureHostedCluster/EnsureNoRapidDeploymentRollouts
0s
TestCreateClusterRequestServingIsolation/EnsureHostedCluster/EnsurePayloadArchSetCorrectly
0s
util.go:3224: Successfully waited for HostedCluster e2e-clusters-2fvtt/request-serving-isolation-vfb5z to have valid Status.Payload in 0s
TestCreateClusterRequestServingIsolation/EnsureHostedCluster/EnsurePodsWithEmptyDirPVsHaveSafeToEvictAnnotations
0s
TestCreateClusterRequestServingIsolation/EnsureHostedCluster/EnsureReadOnlyRootFilesystem
0s
TestCreateClusterRequestServingIsolation/EnsureHostedCluster/EnsureReadOnlyRootFilesystemTmpDirMount
0s
TestCreateClusterRequestServingIsolation/EnsureHostedCluster/EnsureSATokenNotMountedUnlessNecessary
0s
TestCreateClusterRequestServingIsolation/EnsureHostedCluster/EnsureValidatingAdmissionPoliciesCheckDeniedRequests
0s
util.go:2553: Checking Denied KAS Requests for ValidatingAdmissionPolicies
TestCreateClusterRequestServingIsolation/EnsureHostedCluster/EnsureValidatingAdmissionPoliciesDontBlockStatusModifications
0s
util.go:2569: Checking ClusterOperator status modifications are allowed
TestCreateClusterRequestServingIsolation/EnsureHostedCluster/EnsureValidatingAdmissionPoliciesExists
0s
util.go:2527: Checking that all ValidatingAdmissionPolicies are present
TestCreateClusterRequestServingIsolation/EnsureHostedCluster/NoticePreemptionOrFailedScheduling
0s
util.go:822: error: non-fatal, observed FailedScheduling or Preempted event: 0/8 nodes are available: 2 node(s) didn't match pod anti-affinity rules, 6 node(s) didn't match Pod's node affinity/selector. preemption: not eligible due to preemptionPolicy=Never. util.go:822: error: non-fatal, observed FailedScheduling or Preempted event: 0/8 nodes are available: 2 node(s) didn't match pod anti-affinity rules, 6 node(s) didn't match Pod's node affinity/selector. preemption: not eligible due to preemptionPolicy=Never. util.go:822: error: non-fatal, observed FailedScheduling or Preempted event: 0/8 nodes are available: 2 node(s) didn't match pod anti-affinity rules, 6 node(s) didn't match Pod's node affinity/selector. preemption: not eligible due to preemptionPolicy=Never. util.go:822: error: non-fatal, observed FailedScheduling or Preempted event: 0/8 nodes are available: 2 node(s) had untolerated taint {hypershift.openshift.io/request-serving-component: true}, 3 node(s) didn't match Pod's node affinity/selector, 3 node(s) didn't match pod anti-affinity rules. preemption: not eligible due to preemptionPolicy=Never. util.go:822: error: non-fatal, observed FailedScheduling or Preempted event: 0/8 nodes are available: 2 node(s) had untolerated taint {hypershift.openshift.io/request-serving-component: true}, 3 node(s) didn't match Pod's node affinity/selector, 3 node(s) didn't match pod anti-affinity rules. preemption: not eligible due to preemptionPolicy=Never. util.go:822: error: non-fatal, observed FailedScheduling or Preempted event: 0/8 nodes are available: 2 node(s) didn't match pod anti-affinity rules, 6 node(s) didn't match Pod's node affinity/selector. preemption: not eligible due to preemptionPolicy=Never. util.go:822: error: non-fatal, observed FailedScheduling or Preempted event: 0/8 nodes are available: 2 node(s) didn't match pod anti-affinity rules, 6 node(s) didn't match Pod's node affinity/selector. preemption: not eligible due to preemptionPolicy=Never. util.go:822: error: non-fatal, observed FailedScheduling or Preempted event: 0/8 nodes are available: 2 node(s) didn't match pod anti-affinity rules, 6 node(s) didn't match Pod's node affinity/selector. preemption: not eligible due to preemptionPolicy=Never. util.go:822: error: non-fatal, observed FailedScheduling or Preempted event: 0/8 nodes are available: 2 node(s) had untolerated taint {hypershift.openshift.io/request-serving-component: true}, 3 node(s) didn't match Pod's node affinity/selector, 3 node(s) didn't match pod anti-affinity rules. preemption: not eligible due to preemptionPolicy=Never. util.go:822: error: non-fatal, observed FailedScheduling or Preempted event: 0/8 nodes are available: 2 node(s) had untolerated taint {hypershift.openshift.io/request-serving-component: true}, 3 node(s) didn't match Pod's node affinity/selector, 3 node(s) didn't match pod anti-affinity rules. preemption: not eligible due to preemptionPolicy=Never. util.go:822: error: non-fatal, observed FailedScheduling or Preempted event: 0/8 nodes are available: 2 node(s) had untolerated taint {hypershift.openshift.io/request-serving-component: true}, 3 node(s) didn't match Pod's node affinity/selector, 3 node(s) didn't match pod anti-affinity rules. preemption: not eligible due to preemptionPolicy=Never.
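Note: NoticePreemptionOrFailedScheduling deliberately reports these events as non-fatal: with dedicated, tainted node pools, some scheduling churn is expected while pods find their assigned nodes. A minimal sketch of an event scan in that spirit, assuming a client-go clientset:

```go
package e2esketch

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// noticeSchedulingEvents logs FailedScheduling and Preempted events in a
// namespace without failing the caller, mirroring util.go:822's behavior.
func noticeSchedulingEvents(ctx context.Context, cs kubernetes.Interface, ns string) error {
	events, err := cs.CoreV1().Events(ns).List(ctx, metav1.ListOptions{})
	if err != nil {
		return err
	}
	for _, ev := range events.Items {
		if ev.Reason == "FailedScheduling" || ev.Reason == "Preempted" {
			fmt.Printf("error: non-fatal, observed %s event: %s\n", ev.Reason, ev.Message)
		}
	}
	return nil
}
```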
TestCreateClusterRequestServingIsolation/EnsureHostedCluster/ValidateMetricsAreExposed
0s
TestCreateClusterRequestServingIsolation/Main
0s
util.go:284: Successfully waited for kubeconfig to be published for HostedCluster e2e-clusters-2fvtt/request-serving-isolation-vfb5z in 0s util.go:301: Successfully waited for kubeconfig secret to have data in 0s util.go:363: Successfully waited for a successful connection to the guest API server in 0s
TestCreateClusterRequestServingIsolation/Main/EnsurePSANotPrivileged
0s
TestCreateClusterRequestServingIsolation/ValidateHostedCluster
0s
util.go:284: Successfully waited for kubeconfig to be published for HostedCluster e2e-clusters-2fvtt/request-serving-isolation-vfb5z in 1m24s util.go:301: Successfully waited for kubeconfig secret to have data in 0s eventually.go:104: Failed to get *v1.SelfSubjectReview: Post "https://api-request-serving-isolation-vfb5z.service.ci.hypershift.devcluster.openshift.com:443/apis/authentication.k8s.io/v1/selfsubjectreviews": dial tcp: lookup api-request-serving-isolation-vfb5z.service.ci.hypershift.devcluster.openshift.com on 172.30.0.10:53: no such host eventually.go:104: Failed to get *v1.SelfSubjectReview: Post "https://api-request-serving-isolation-vfb5z.service.ci.hypershift.devcluster.openshift.com:443/apis/authentication.k8s.io/v1/selfsubjectreviews": dial tcp 98.80.33.228:443: i/o timeout util.go:363: Successfully waited for a successful connection to the guest API server in 1m29.025s util.go:565: Successfully waited for 3 nodes to become ready in 8m24s util.go:598: Successfully waited for HostedCluster e2e-clusters-2fvtt/request-serving-isolation-vfb5z to rollout in 4m54s util.go:2949: Successfully waited for HostedCluster e2e-clusters-2fvtt/request-serving-isolation-vfb5z to have valid conditions in 57s
TestCreateClusterRequestServingIsolation/ValidateHostedCluster/EnsureGuestWebhooksValidated
0s
TestCreateClusterRequestServingIsolation/ValidateHostedCluster/EnsureNoCrashingPods
0s
util.go:284: Successfully waited for kubeconfig to be published for HostedCluster e2e-clusters-2fvtt/request-serving-isolation-vfb5z in 0s util.go:301: Successfully waited for kubeconfig secret to have data in 0s
TestCreateClusterRequestServingIsolation/ValidateHostedCluster/EnsureNodeCommunication
0s
util.go:284: Successfully waited for kubeconfig to be published for HostedCluster e2e-clusters-2fvtt/request-serving-isolation-vfb5z in 0s util.go:301: Successfully waited for kubeconfig secret to have data in 0s
TestCreateClusterRequestServingIsolation/ValidateHostedCluster/EnsureNodeCountMatchesNodePoolReplicas
0s
util.go:565: Successfully waited for 1 nodes to become ready for NodePool e2e-clusters-2fvtt/request-serving-isolation-vfb5z-us-east-1a in 25ms util.go:565: Successfully waited for 1 nodes to become ready for NodePool e2e-clusters-2fvtt/request-serving-isolation-vfb5z-us-east-1b in 0s util.go:565: Successfully waited for 1 nodes to become ready for NodePool e2e-clusters-2fvtt/request-serving-isolation-vfb5z-us-east-1c in 0s
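Note: the per-NodePool readiness checks count Ready guest nodes that belong to each pool. A minimal sketch, assuming nodes are labeled with their owning NodePool (the hypershift.openshift.io/nodePool label key is an assumption):

```go
package e2esketch

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// readyNodesForNodePool counts guest nodes of one NodePool in Ready state.
func readyNodesForNodePool(ctx context.Context, cs kubernetes.Interface, pool string) (int, error) {
	nodes, err := cs.CoreV1().Nodes().List(ctx, metav1.ListOptions{
		LabelSelector: "hypershift.openshift.io/nodePool=" + pool, // assumed label key
	})
	if err != nil {
		return 0, err
	}
	ready := 0
	for _, n := range nodes.Items {
		for _, c := range n.Status.Conditions {
			if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
				ready++
			}
		}
	}
	return ready, nil
}
```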
TestCreateClusterRequestServingIsolation/ValidateHostedCluster/EnsureOAPIMountsTrustBundle
0s
TestCreateClusterRequestServingIsolation/ValidateHostedCluster/ValidateConfigurationStatus
0s
util.go:4095: Successfully validated configuration authentication status consistency across HCP, HC, and guest cluster
TestNodePool
0s
hypershift_framework.go:430: Successfully created hostedcluster e2e-clusters-hbvxj/node-pool-x745j in 20s
nodepool_test.go:150: tests only supported on platform KubeVirt
hypershift_framework.go:430: Successfully created hostedcluster e2e-clusters-j8mz9/node-pool-7hngb in 33s
util.go:284: Successfully waited for kubeconfig to be published for HostedCluster e2e-clusters-hbvxj/node-pool-x745j in 1m36s
util.go:301: Successfully waited for kubeconfig secret to have data in 0s
eventually.go:104: Failed to get *v1.SelfSubjectReview: Post "https://api-node-pool-x745j.service.ci.hypershift.devcluster.openshift.com:443/apis/authentication.k8s.io/v1/selfsubjectreviews": dial tcp: lookup api-node-pool-x745j.service.ci.hypershift.devcluster.openshift.com on 172.30.0.10:53: no such host
eventually.go:104: Failed to get *v1.SelfSubjectReview: Post "https://api-node-pool-x745j.service.ci.hypershift.devcluster.openshift.com:443/apis/authentication.k8s.io/v1/selfsubjectreviews": dial tcp 54.157.109.219:443: i/o timeout
util.go:363: Successfully waited for a successful connection to the guest API server in 1m39.125s
util.go:565: Successfully waited for 0 nodes to become ready in 0s
util.go:2949: Successfully waited for HostedCluster e2e-clusters-hbvxj/node-pool-x745j to have valid conditions in 2m24s
util.go:284: Successfully waited for kubeconfig to be published for HostedCluster e2e-clusters-j8mz9/node-pool-7hngb in 1m33s
util.go:301: Successfully waited for kubeconfig secret to have data in 0s
eventually.go:104: Failed to get *v1.SelfSubjectReview: Post "https://api-node-pool-7hngb.service.ci.hypershift.devcluster.openshift.com:443/apis/authentication.k8s.io/v1/selfsubjectreviews": dial tcp: lookup api-node-pool-7hngb.service.ci.hypershift.devcluster.openshift.com on 172.30.0.10:53: no such host
eventually.go:104: Failed to get *v1.SelfSubjectReview: Post "https://api-node-pool-7hngb.service.ci.hypershift.devcluster.openshift.com:443/apis/authentication.k8s.io/v1/selfsubjectreviews": dial tcp 98.90.51.7:443: i/o timeout
util.go:363: Successfully waited for a successful connection to the guest API server in 1m57.025s
util.go:565: Successfully waited for 0 nodes to become ready in 0s
util.go:2949: Successfully waited for HostedCluster e2e-clusters-j8mz9/node-pool-7hngb to have valid conditions in 2m12s
util.go:565: Successfully waited for 0 nodes to become ready for NodePool e2e-clusters-hbvxj/node-pool-x745j-us-east-1b in 25ms
util.go:284: Successfully waited for kubeconfig to be published for HostedCluster e2e-clusters-hbvxj/node-pool-x745j in 0s
util.go:301: Successfully waited for kubeconfig secret to have data in 0s
util.go:4095: Successfully validated configuration authentication status consistency across HCP, HC, and guest cluster
util.go:284: Successfully waited for kubeconfig to be published for HostedCluster e2e-clusters-hbvxj/node-pool-x745j in 0s
util.go:301: Successfully waited for kubeconfig secret to have data in 0s
util.go:363: Successfully waited for a successful connection to the guest API server in 0s
nodepool_kms_root_volume_test.go:42: Starting test KMSRootVolumeTest
nodepool_kms_root_volume_test.go:54: retrieved KMS ARN: arn:aws:kms:us-east-1:820196288204:key/d3cdd9e0-3fd1-47a4-a559-72ae3672c5a6
util.go:565: Successfully waited for 1 nodes to become ready for NodePool e2e-clusters-hbvxj/node-pool-x745j-test-kms-root-volume in 6m54s
nodepool_test.go:395: Successfully waited for NodePool e2e-clusters-hbvxj/node-pool-x745j-test-kms-root-volume to have correct status in 3s
nodepool_kms_root_volume_test.go:85: instanceID: i-06e377a2deb77cc57
nodepool_test.go:395: Successfully waited for NodePool e2e-clusters-hbvxj/node-pool-x745j-test-kms-root-volume to have correct status in 0s
nodepool_machineconfig_test.go:54: Starting test NodePoolMachineconfigRolloutTest
nodepool_nto_machineconfig_test.go:67: Starting test NTOMachineConfigRolloutTest
nodepool_nto_machineconfig_test.go:67: Starting test NTOMachineConfigRolloutTest
nodepool_upgrade_test.go:99: starting test NodePoolUpgradeTest
nodepool_upgrade_test.go:99: starting test NodePoolUpgradeTest
util.go:565: Successfully waited for 1 nodes to become ready for NodePool e2e-clusters-hbvxj/node-pool-x745j-test-inplaceupgrade in 10m27s
nodepool_test.go:395: Successfully waited for NodePool e2e-clusters-hbvxj/node-pool-x745j-test-inplaceupgrade to have correct status in 0s
nodepool_upgrade_test.go:177: Validating all Nodes have the synced labels and taints
nodepool_upgrade_test.go:180: Successfully waited for NodePool e2e-clusters-hbvxj/node-pool-x745j-test-inplaceupgrade to have version 4.22.0-0.ci-2026-01-09-005312 in 0s
nodepool_upgrade_test.go:197: Updating NodePool image. Image: registry.build01.ci.openshift.org/ci-op-3h29njck/release@sha256:d9262e3822e86ae5cd8dda21c94eaf92837e08c6551862951c81d82adab8fd96
nodepool_upgrade_test.go:204: Successfully waited for NodePool e2e-clusters-hbvxj/node-pool-x745j-test-inplaceupgrade to start the upgrade in 3s
nodepool_kv_cache_image_test.go:42: test only supported on platform KubeVirt
util.go:565: Successfully waited for 2 nodes to become ready for NodePool e2e-clusters-hbvxj/node-pool-x745j-test-rolling-upgrade in 10m30s
nodepool_test.go:395: Successfully waited for NodePool e2e-clusters-hbvxj/node-pool-x745j-test-rolling-upgrade to have correct status in 0s
nodepool_rolling_upgrade_test.go:106: Successfully waited for NodePool e2e-clusters-hbvxj/node-pool-x745j-test-rolling-upgrade to start the rolling upgrade in 3s
util.go:565: Successfully waited for 1 nodes to become ready for NodePool e2e-clusters-hbvxj/node-pool-x745j-test-day2-tags in 8m12s
nodepool_test.go:395: Successfully waited for NodePool e2e-clusters-hbvxj/node-pool-x745j-test-day2-tags to have correct status in 0s
nodepool_test.go:395: Successfully waited for NodePool e2e-clusters-hbvxj/node-pool-x745j-test-day2-tags to have correct status in 0s
nodepool_kv_qos_guaranteed_test.go:43: test only supported on platform KubeVirt
nodepool_kv_jsonpatch_test.go:42: test only supported on platform KubeVirt
nodepool_kv_nodeselector_test.go:48: test only supported on platform KubeVirt
nodepool_kv_multinet_test.go:36: test only supported on platform KubeVirt
nodepool_osp_advanced_test.go:53: Starting test OpenStackAdvancedTest
nodepool_osp_advanced_test.go:56: test only supported on platform OpenStack
nodepool_nto_performanceprofile_test.go:59: Starting test NTOPerformanceProfileTest
util.go:565: Successfully waited for 1 nodes to become ready for NodePool e2e-clusters-hbvxj/node-pool-x745j-test-ntoperformanceprofile in 16m0s
nodepool_test.go:395: Successfully waited for NodePool e2e-clusters-hbvxj/node-pool-x745j-test-ntoperformanceprofile to have correct status in 0s
nodepool_nto_performanceprofile_test.go:80: Entering NTO PerformanceProfile test
nodepool_nto_performanceprofile_test.go:110: Hosted control plane namespace is e2e-clusters-hbvxj-node-pool-x745j
nodepool_nto_performanceprofile_test.go:112: Successfully waited for performance profile ConfigMap to exist with correct name labels and annotations in 3s
nodepool_nto_performanceprofile_test.go:159: Successfully waited for performance profile status ConfigMap to exist in 0s
nodepool_nto_performanceprofile_test.go:201: Successfully waited for performance profile status to be reflected under the NodePool status in 0s
nodepool_nto_performanceprofile_test.go:254: Deleting configmap reference from nodepool ...
nodepool_nto_performanceprofile_test.go:261: Successfully waited for performance profile ConfigMap to be deleted in 3s
nodepool_nto_performanceprofile_test.go:280: Ending NTO PerformanceProfile test: OK
nodepool_test.go:395: Successfully waited for NodePool e2e-clusters-hbvxj/node-pool-x745j-test-ntoperformanceprofile to have correct status in 30s
nodepool_prev_release_test.go:33: Starting NodePoolPrevReleaseCreateTest.
util.go:565: Successfully waited for 1 nodes to become ready for NodePool e2e-clusters-hbvxj/node-pool-x745j-tlcz2 in 12m57s
nodepool_test.go:395: Successfully waited for NodePool e2e-clusters-hbvxj/node-pool-x745j-tlcz2 to have correct status in 0s
nodepool_prev_release_test.go:57: NodePoolPrevReleaseCreateTest tests the creation of a NodePool with previous OCP release.
nodepool_prev_release_test.go:59: Validating all Nodes have the synced labels and taints
nodepool_test.go:395: Successfully waited for NodePool e2e-clusters-hbvxj/node-pool-x745j-tlcz2 to have correct status in 0s
nodepool_prev_release_test.go:33: Starting NodePoolPrevReleaseCreateTest.
util.go:565: Successfully waited for 1 nodes to become ready for NodePool e2e-clusters-hbvxj/node-pool-x745j-xfvrj in 16m3s
nodepool_test.go:395: Successfully waited for NodePool e2e-clusters-hbvxj/node-pool-x745j-xfvrj to have correct status in 0s
nodepool_prev_release_test.go:57: NodePoolPrevReleaseCreateTest tests the creation of a NodePool with previous OCP release.
nodepool_prev_release_test.go:59: Validating all Nodes have the synced labels and taints
nodepool_test.go:395: Successfully waited for NodePool e2e-clusters-hbvxj/node-pool-x745j-xfvrj to have correct status in 0s
nodepool_prev_release_test.go:33: Starting NodePoolPrevReleaseCreateTest.
nodepool_prev_release_test.go:33: Starting NodePoolPrevReleaseCreateTest.
nodepool_test.go:348: NodePool version is outside supported skew, validating condition only (skipping node readiness check)
nodepool_test.go:395: Successfully waited for NodePool e2e-clusters-hbvxj/node-pool-x745j-vhdwd to have correct status in 3s
nodepool_mirrorconfigs_test.go:60: Starting test MirrorConfigsTest
util.go:565: Successfully waited for 1 nodes to become ready for NodePool e2e-clusters-hbvxj/node-pool-x745j-test-mirrorconfigs in 12m54s
nodepool_test.go:395: Successfully waited for NodePool e2e-clusters-hbvxj/node-pool-x745j-test-mirrorconfigs to have correct status in 0s
nodepool_mirrorconfigs_test.go:81: Entering MirrorConfigs test
nodepool_mirrorconfigs_test.go:111: Hosted control plane namespace is e2e-clusters-hbvxj-node-pool-x745j
nodepool_mirrorconfigs_test.go:113: Successfully waited for kubeletConfig should be mirrored and present in the hosted cluster in 3s
nodepool_mirrorconfigs_test.go:157: Deleting KubeletConfig configmap reference from nodepool ...
nodepool_mirrorconfigs_test.go:163: Successfully waited for KubeletConfig configmap to be deleted in 3s
nodepool_mirrorconfigs_test.go:101: Exiting MirrorConfigs test: OK
nodepool_test.go:395: Successfully waited for NodePool e2e-clusters-hbvxj/node-pool-x745j-test-mirrorconfigs to have correct status in 27s
nodepool_imagetype_test.go:50: Starting test NodePoolImageTypeTest
util.go:565: Successfully waited for 0 nodes to become ready for NodePool e2e-clusters-j8mz9/node-pool-7hngb-us-east-1c in 25ms
util.go:284: Successfully waited for kubeconfig to be published for HostedCluster e2e-clusters-j8mz9/node-pool-7hngb in 0s
util.go:301: Successfully waited for kubeconfig secret to have data in 0s
util.go:4095: Successfully validated configuration authentication status consistency across HCP, HC, and guest cluster
util.go:284: Successfully waited for kubeconfig to be published for HostedCluster e2e-clusters-j8mz9/node-pool-7hngb in 0s
util.go:301: Successfully waited for kubeconfig secret to have data in 0s
util.go:363: Successfully waited for a successful connection to the guest API server in 0s
nodepool_additionalTrustBundlePropagation_test.go:38: Starting AdditionalTrustBundlePropagationTest.
util.go:565: Successfully waited for 1 nodes to become ready for NodePool e2e-clusters-j8mz9/node-pool-7hngb-test-additional-trust-bundle-propagation in 5m33s
nodepool_test.go:395: Successfully waited for NodePool e2e-clusters-j8mz9/node-pool-7hngb-test-additional-trust-bundle-propagation to have correct status in 0s
nodepool_additionalTrustBundlePropagation_test.go:72: Updating hosted cluster with additional trust bundle. Bundle: additional-trust-bundle
nodepool_additionalTrustBundlePropagation_test.go:80: Successfully waited for Waiting for NodePool e2e-clusters-j8mz9/node-pool-7hngb-test-additional-trust-bundle-propagation to begin updating in 10s
TestNodePool/HostedCluster0
0s
hypershift_framework.go:430: Successfully created hostedcluster e2e-clusters-hbvxj/node-pool-x745j in 20s
TestNodePool/HostedCluster0/Main
0s
util.go:284: Successfully waited for kubeconfig to be published for HostedCluster e2e-clusters-hbvxj/node-pool-x745j in 0s util.go:301: Successfully waited for kubeconfig secret to have data in 0s util.go:363: Successfully waited for a successful connection to the guest API server in 0s
TestNodePool/HostedCluster0/Main/KubeVirtCacheTest
0s
nodepool_kv_cache_image_test.go:42: test only supported on platform KubeVirt
TestNodePool/HostedCluster0/Main/KubeVirtJsonPatchTest
0s
nodepool_kv_jsonpatch_test.go:42: test only supported on platform KubeVirt
TestNodePool/HostedCluster0/Main/KubeVirtNodeMultinetTest
0s
nodepool_kv_multinet_test.go:36: test only supported on platform KubeVirt
TestNodePool/HostedCluster0/Main/KubeVirtNodeSelectorTest
0s
nodepool_kv_nodeselector_test.go:48: test only supported on platform KubeVirt
TestNodePool/HostedCluster0/Main/KubeVirtQoSClassGuaranteedTest
0s
nodepool_kv_qos_guaranteed_test.go:43: test only supported on platform KubeVirt
TestNodePool/HostedCluster0/Main/OpenStackAdvancedTest
0s
nodepool_osp_advanced_test.go:53: Starting test OpenStackAdvancedTest nodepool_osp_advanced_test.go:56: test only supported on platform OpenStack
TestNodePool/HostedCluster0/Main/TestImageTypes
0s
nodepool_imagetype_test.go:50: Starting test NodePoolImageTypeTest
TestNodePool/HostedCluster0/Main/TestKMSRootVolumeEncryption
0s
nodepool_kms_root_volume_test.go:42: Starting test KMSRootVolumeTest
nodepool_kms_root_volume_test.go:54: retrieved KMS ARN: arn:aws:kms:us-east-1:820196288204:key/d3cdd9e0-3fd1-47a4-a559-72ae3672c5a6
util.go:565: Successfully waited for 1 nodes to become ready for NodePool e2e-clusters-hbvxj/node-pool-x745j-test-kms-root-volume in 6m54s
nodepool_test.go:395: Successfully waited for NodePool e2e-clusters-hbvxj/node-pool-x745j-test-kms-root-volume to have correct status in 3s
nodepool_kms_root_volume_test.go:85: instanceID: i-06e377a2deb77cc57
nodepool_test.go:395: Successfully waited for NodePool e2e-clusters-hbvxj/node-pool-x745j-test-kms-root-volume to have correct status in 0s
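The check behind these lines: the test reads the KMS key ARN, boots a node, and then confirms the node's root EBS volume is encrypted with that key. Below is a minimal sketch of that verification with aws-sdk-go-v2; the helper name is hypothetical (not the test's actual code), and the instance ID and ARN are the ones from this run.

    package main

    import (
    	"context"
    	"fmt"

    	"github.com/aws/aws-sdk-go-v2/aws"
    	"github.com/aws/aws-sdk-go-v2/config"
    	"github.com/aws/aws-sdk-go-v2/service/ec2"
    	"github.com/aws/aws-sdk-go-v2/service/ec2/types"
    )

    // verifyRootVolumeKMS (hypothetical helper): list the volumes attached to
    // the instance and confirm each is encrypted with the expected KMS key.
    func verifyRootVolumeKMS(ctx context.Context, client *ec2.Client, instanceID, kmsARN string) error {
    	out, err := client.DescribeVolumes(ctx, &ec2.DescribeVolumesInput{
    		Filters: []types.Filter{{
    			Name:   aws.String("attachment.instance-id"),
    			Values: []string{instanceID},
    		}},
    	})
    	if err != nil {
    		return err
    	}
    	for _, vol := range out.Volumes {
    		if vol.Encrypted == nil || !*vol.Encrypted {
    			return fmt.Errorf("volume %s is not encrypted", aws.ToString(vol.VolumeId))
    		}
    		if aws.ToString(vol.KmsKeyId) != kmsARN {
    			return fmt.Errorf("volume %s encrypted with %s, want %s",
    				aws.ToString(vol.VolumeId), aws.ToString(vol.KmsKeyId), kmsARN)
    		}
    	}
    	return nil
    }

    func main() {
    	ctx := context.Background()
    	cfg, err := config.LoadDefaultConfig(ctx)
    	if err != nil {
    		panic(err)
    	}
    	fmt.Println(verifyRootVolumeKMS(ctx, ec2.NewFromConfig(cfg),
    		"i-06e377a2deb77cc57",
    		"arn:aws:kms:us-east-1:820196288204:key/d3cdd9e0-3fd1-47a4-a559-72ae3672c5a6"))
    }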
TestNodePool/HostedCluster0/Main/TestMirrorConfigs
0s
nodepool_mirrorconfigs_test.go:60: Starting test MirrorConfigsTest
util.go:565: Successfully waited for 1 nodes to become ready for NodePool e2e-clusters-hbvxj/node-pool-x745j-test-mirrorconfigs in 12m54s
nodepool_test.go:395: Successfully waited for NodePool e2e-clusters-hbvxj/node-pool-x745j-test-mirrorconfigs to have correct status in 0s
nodepool_mirrorconfigs_test.go:81: Entering MirrorConfigs test
nodepool_mirrorconfigs_test.go:111: Hosted control plane namespace is e2e-clusters-hbvxj-node-pool-x745j
nodepool_mirrorconfigs_test.go:113: Successfully waited for the KubeletConfig to be mirrored and present in the hosted cluster in 3s
nodepool_mirrorconfigs_test.go:157: Deleting KubeletConfig configmap reference from nodepool ...
nodepool_mirrorconfigs_test.go:163: Successfully waited for KubeletConfig configmap to be deleted in 3s
nodepool_mirrorconfigs_test.go:101: Exiting MirrorConfigs test: OK
nodepool_test.go:395: Successfully waited for NodePool e2e-clusters-hbvxj/node-pool-x745j-test-mirrorconfigs to have correct status in 27s
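Mechanically, MirrorConfigs (and the NTO tests below) deliver per-NodePool configuration by referencing ConfigMaps from nodePool.spec.config; the "mirrored" wait is for HyperShift to copy the wrapped KubeletConfig into the hosted cluster. A rough sketch of attaching such a reference with controller-runtime; the import path is assumed from current HyperShift releases and the ConfigMap name is hypothetical.

    package example

    import (
    	"context"

    	corev1 "k8s.io/api/core/v1"
    	"sigs.k8s.io/controller-runtime/pkg/client"

    	// Import path assumed from current HyperShift releases.
    	hyperv1 "github.com/openshift/hypershift/api/hypershift/v1beta1"
    )

    // attachConfig (hypothetical helper): append a ConfigMap reference to
    // nodePool.spec.config; "kubeletconfig-test" would be an assumed name.
    func attachConfig(ctx context.Context, cl client.Client, key client.ObjectKey, configMapName string) error {
    	np := &hyperv1.NodePool{}
    	if err := cl.Get(ctx, key, np); err != nil {
    		return err
    	}
    	np.Spec.Config = append(np.Spec.Config, corev1.LocalObjectReference{Name: configMapName})
    	return cl.Update(ctx, np)
    }

Deleting that same reference (the step at line 157) is the reverse edit, after which the mirrored copy is expected to be garbage-collected.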
TestNodePool/HostedCluster0/Main/TestNTOMachineConfigAppliedInPlace
0s
nodepool_nto_machineconfig_test.go:67: Starting test NTOMachineConfigRolloutTest
TestNodePool/HostedCluster0/Main/TestNTOMachineConfigGetsRolledOut
0s
nodepool_nto_machineconfig_test.go:67: Starting test NTOMachineConfigRolloutTest
TestNodePool/HostedCluster0/Main/TestNTOPerformanceProfile
0s
nodepool_nto_performanceprofile_test.go:59: Starting test NTOPerformanceProfileTest
util.go:565: Successfully waited for 1 nodes to become ready for NodePool e2e-clusters-hbvxj/node-pool-x745j-test-ntoperformanceprofile in 16m0s
nodepool_test.go:395: Successfully waited for NodePool e2e-clusters-hbvxj/node-pool-x745j-test-ntoperformanceprofile to have correct status in 0s
nodepool_nto_performanceprofile_test.go:80: Entering NTO PerformanceProfile test
nodepool_nto_performanceprofile_test.go:110: Hosted control plane namespace is e2e-clusters-hbvxj-node-pool-x745j
nodepool_nto_performanceprofile_test.go:112: Successfully waited for performance profile ConfigMap to exist with the correct name, labels, and annotations in 3s
nodepool_nto_performanceprofile_test.go:159: Successfully waited for performance profile status ConfigMap to exist in 0s
nodepool_nto_performanceprofile_test.go:201: Successfully waited for performance profile status to be reflected under the NodePool status in 0s
nodepool_nto_performanceprofile_test.go:254: Deleting configmap reference from nodepool ...
nodepool_nto_performanceprofile_test.go:261: Successfully waited for performance profile ConfigMap to be deleted in 3s
nodepool_nto_performanceprofile_test.go:280: Ending NTO PerformanceProfile test: OK
nodepool_test.go:395: Successfully waited for NodePool e2e-clusters-hbvxj/node-pool-x745j-test-ntoperformanceprofile to have correct status in 30s
TestNodePool/HostedCluster0/Main/TestNodePoolAutoRepair
0s
TestNodePool/HostedCluster0/Main/TestNodePoolDay2Tags
0s
util.go:565: Successfully waited for 1 nodes to become ready for NodePool e2e-clusters-hbvxj/node-pool-x745j-test-day2-tags in 8m12s nodepool_test.go:395: Successfully waited for NodePool e2e-clusters-hbvxj/node-pool-x745j-test-day2-tags to have correct status in 0s nodepool_test.go:395: Successfully waited for NodePool e2e-clusters-hbvxj/node-pool-x745j-test-day2-tags to have correct status in 0s
TestNodePool/HostedCluster0/Main/TestNodePoolInPlaceUpgrade
0s
nodepool_upgrade_test.go:99: starting test NodePoolUpgradeTest
util.go:565: Successfully waited for 1 nodes to become ready for NodePool e2e-clusters-hbvxj/node-pool-x745j-test-inplaceupgrade in 10m27s
nodepool_test.go:395: Successfully waited for NodePool e2e-clusters-hbvxj/node-pool-x745j-test-inplaceupgrade to have correct status in 0s
nodepool_upgrade_test.go:177: Validating all Nodes have the synced labels and taints
nodepool_upgrade_test.go:180: Successfully waited for NodePool e2e-clusters-hbvxj/node-pool-x745j-test-inplaceupgrade to have version 4.22.0-0.ci-2026-01-09-005312 in 0s
nodepool_upgrade_test.go:197: Updating NodePool image. Image: registry.build01.ci.openshift.org/ci-op-3h29njck/release@sha256:d9262e3822e86ae5cd8dda21c94eaf92837e08c6551862951c81d82adab8fd96
nodepool_upgrade_test.go:204: Successfully waited for NodePool e2e-clusters-hbvxj/node-pool-x745j-test-inplaceupgrade to start the upgrade in 3s
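The upgrade assertion here reduces to bumping the NodePool's release image (line 197) and then polling until status.version reports the target release (lines 180/204). A sketch of that wait, assuming a controller-runtime client and the HyperShift v1beta1 API; the helper name and intervals are illustrative, not the test's own.

    package example

    import (
    	"context"
    	"time"

    	"k8s.io/apimachinery/pkg/util/wait"
    	"sigs.k8s.io/controller-runtime/pkg/client"

    	// Import path assumed from current HyperShift releases.
    	hyperv1 "github.com/openshift/hypershift/api/hypershift/v1beta1"
    )

    // waitForNodePoolVersion (hypothetical helper): poll the NodePool until
    // status.version reports the wanted release.
    func waitForNodePoolVersion(ctx context.Context, cl client.Client, key client.ObjectKey, want string) error {
    	return wait.PollUntilContextTimeout(ctx, 10*time.Second, 30*time.Minute, true,
    		func(ctx context.Context) (bool, error) {
    			np := &hyperv1.NodePool{}
    			if err := cl.Get(ctx, key, np); err != nil {
    				return false, err
    			}
    			return np.Status.Version == want, nil
    		})
    }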
TestNodePool/HostedCluster0/Main/TestNodePoolPrevReleaseN1
0s
nodepool_prev_release_test.go:33: Starting NodePoolPrevReleaseCreateTest.
util.go:565: Successfully waited for 1 nodes to become ready for NodePool e2e-clusters-hbvxj/node-pool-x745j-tlcz2 in 12m57s
nodepool_test.go:395: Successfully waited for NodePool e2e-clusters-hbvxj/node-pool-x745j-tlcz2 to have correct status in 0s
nodepool_prev_release_test.go:57: NodePoolPrevReleaseCreateTest tests the creation of a NodePool with a previous OCP release.
nodepool_prev_release_test.go:59: Validating all Nodes have the synced labels and taints
nodepool_test.go:395: Successfully waited for NodePool e2e-clusters-hbvxj/node-pool-x745j-tlcz2 to have correct status in 0s
TestNodePool/HostedCluster0/Main/TestNodePoolPrevReleaseN2
0s
nodepool_prev_release_test.go:33: Starting NodePoolPrevReleaseCreateTest.
util.go:565: Successfully waited for 1 nodes to become ready for NodePool e2e-clusters-hbvxj/node-pool-x745j-xfvrj in 16m3s
nodepool_test.go:395: Successfully waited for NodePool e2e-clusters-hbvxj/node-pool-x745j-xfvrj to have correct status in 0s
nodepool_prev_release_test.go:57: NodePoolPrevReleaseCreateTest tests the creation of a NodePool with a previous OCP release.
nodepool_prev_release_test.go:59: Validating all Nodes have the synced labels and taints
nodepool_test.go:395: Successfully waited for NodePool e2e-clusters-hbvxj/node-pool-x745j-xfvrj to have correct status in 0s
TestNodePool/HostedCluster0/Main/TestNodePoolPrevReleaseN3
0s
nodepool_prev_release_test.go:33: Starting NodePoolPrevReleaseCreateTest.
TestNodePool/HostedCluster0/Main/TestNodePoolPrevReleaseN4
0s
nodepool_prev_release_test.go:33: Starting NodePoolPrevReleaseCreateTest. nodepool_test.go:348: NodePool version is outside supported skew, validating condition only (skipping node readiness check) nodepool_test.go:395: Successfully waited for NodePool e2e-clusters-hbvxj/node-pool-x745j-vhdwd to have correct status in 3s
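The gate at nodepool_test.go:348 compares the NodePool's minor version against the control plane's and downgrades the check when the gap is too large. A toy version of that comparison follows; the maximum skew of two minors is an inference from N-1/N-2 getting the full node-readiness check in this run while N-3/N-4 do not, not a confirmed constant.

    package example

    import (
    	"fmt"
    	"strconv"
    	"strings"
    )

    // maxNodePoolSkew is an assumption inferred from this run's behavior.
    const maxNodePoolSkew = 2

    // minorSkew returns how many minor versions npVersion trails cpVersion.
    func minorSkew(cpVersion, npVersion string) (int, error) {
    	minor := func(v string) (int, error) {
    		parts := strings.SplitN(v, ".", 3)
    		if len(parts) < 2 {
    			return 0, fmt.Errorf("unexpected version %q", v)
    		}
    		return strconv.Atoi(parts[1])
    	}
    	cp, err := minor(cpVersion)
    	if err != nil {
    		return 0, err
    	}
    	np, err := minor(npVersion)
    	if err != nil {
    		return 0, err
    	}
    	return cp - np, nil
    }

    // Usage: skew, _ := minorSkew("4.22.0", "4.18.5") // 4 > maxNodePoolSkew,
    // so only the NodePool condition is validated, as at nodepool_test.go:348.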
TestNodePool/HostedCluster0/Main/TestNodePoolReplaceUpgrade
0s
nodepool_upgrade_test.go:99: starting test NodePoolUpgradeTest
TestNodePool/HostedCluster0/Main/TestNodepoolMachineconfigGetsRolledout
0s
nodepool_machineconfig_test.go:54: Starting test NodePoolMachineconfigRolloutTest
TestNodePool/HostedCluster0/Main/TestRollingUpgrade
0s
util.go:565: Successfully waited for 2 nodes to become ready for NodePool e2e-clusters-hbvxj/node-pool-x745j-test-rolling-upgrade in 10m30s nodepool_test.go:395: Successfully waited for NodePool e2e-clusters-hbvxj/node-pool-x745j-test-rolling-upgrade to have correct status in 0s nodepool_rolling_upgrade_test.go:106: Successfully waited for NodePool e2e-clusters-hbvxj/node-pool-x745j-test-rolling-upgrade to start the rolling upgrade in 3s
TestNodePool/HostedCluster0/ValidateHostedCluster
0s
util.go:284: Successfully waited for kubeconfig to be published for HostedCluster e2e-clusters-hbvxj/node-pool-x745j in 1m36s
util.go:301: Successfully waited for kubeconfig secret to have data in 0s
eventually.go:104: Failed to get *v1.SelfSubjectReview: Post "https://api-node-pool-x745j.service.ci.hypershift.devcluster.openshift.com:443/apis/authentication.k8s.io/v1/selfsubjectreviews": dial tcp: lookup api-node-pool-x745j.service.ci.hypershift.devcluster.openshift.com on 172.30.0.10:53: no such host
eventually.go:104: Failed to get *v1.SelfSubjectReview: Post "https://api-node-pool-x745j.service.ci.hypershift.devcluster.openshift.com:443/apis/authentication.k8s.io/v1/selfsubjectreviews": dial tcp 54.157.109.219:443: i/o timeout
util.go:363: Successfully waited for a successful connection to the guest API server in 1m39.125s
util.go:565: Successfully waited for 0 nodes to become ready in 0s
util.go:2949: Successfully waited for HostedCluster e2e-clusters-hbvxj/node-pool-x745j to have valid conditions in 2m24s
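The eventually.go:104 lines here (and in the other ValidateHostedCluster blocks) are not themselves failures: they come from a readiness probe that repeatedly POSTs a SelfSubjectReview at the guest API server, and the no-such-host, then i/o timeout, then connection-refused progression is DNS and load-balancer bring-up as seen by that probe. A minimal sketch of such a probe with client-go, assuming a guest *rest.Config; intervals and the helper name are illustrative.

    package example

    import (
    	"context"
    	"time"

    	authenticationv1 "k8s.io/api/authentication/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/apimachinery/pkg/util/wait"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/rest"
    )

    // waitForGuestAPI (hypothetical helper): keep creating SelfSubjectReviews
    // against the guest API server until one succeeds; transient DNS and dial
    // errors are swallowed and retried, matching the noise logged above.
    func waitForGuestAPI(ctx context.Context, guestCfg *rest.Config) error {
    	cs, err := kubernetes.NewForConfig(guestCfg)
    	if err != nil {
    		return err
    	}
    	return wait.PollUntilContextTimeout(ctx, 5*time.Second, 10*time.Minute, true,
    		func(ctx context.Context) (bool, error) {
    			_, err := cs.AuthenticationV1().SelfSubjectReviews().Create(ctx,
    				&authenticationv1.SelfSubjectReview{}, metav1.CreateOptions{})
    			return err == nil, nil // retry through bring-up errors
    		})
    }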
TestNodePool/HostedCluster0/ValidateHostedCluster/EnsureGuestWebhooksValidated
0s
TestNodePool/HostedCluster0/ValidateHostedCluster/EnsureNoCrashingPods
0s
util.go:284: Successfully waited for kubeconfig to be published for HostedCluster e2e-clusters-hbvxj/node-pool-x745j in 0s util.go:301: Successfully waited for kubeconfig secret to have data in 0s
TestNodePool/HostedCluster0/ValidateHostedCluster/EnsureNodeCountMatchesNodePoolReplicas
0s
util.go:565: Successfully waited for 0 nodes to become ready for NodePool e2e-clusters-hbvxj/node-pool-x745j-us-east-1b in 25ms
TestNodePool/HostedCluster0/ValidateHostedCluster/EnsureOAPIMountsTrustBundle
0s
TestNodePool/HostedCluster0/ValidateHostedCluster/ValidateConfigurationStatus
0s
util.go:4095: Successfully validated configuration authentication status consistency across HCP, HC, and guest cluster
TestNodePool/HostedCluster1
0s
nodepool_test.go:150: tests only supported on platform KubeVirt
TestNodePool/HostedCluster2
0s
hypershift_framework.go:430: Successfully created hostedcluster e2e-clusters-j8mz9/node-pool-7hngb in 33s
TestNodePool/HostedCluster2/Main
0s
util.go:284: Successfully waited for kubeconfig to be published for HostedCluster e2e-clusters-j8mz9/node-pool-7hngb in 0s util.go:301: Successfully waited for kubeconfig secret to have data in 0s util.go:363: Successfully waited for a successful connection to the guest API server in 0s
TestNodePool/HostedCluster2/Main/TestAdditionalTrustBundlePropagation
0s
nodepool_additionalTrustBundlePropagation_test.go:38: Starting AdditionalTrustBundlePropagationTest. util.go:565: Successfully waited for 1 nodes to become ready for NodePool e2e-clusters-j8mz9/node-pool-7hngb-test-additional-trust-bundle-propagation in 5m33s nodepool_test.go:395: Successfully waited for NodePool e2e-clusters-j8mz9/node-pool-7hngb-test-additional-trust-bundle-propagation to have correct status in 0s
TestNodePool/HostedCluster2/Main/TestAdditionalTrustBundlePropagation/AdditionalTrustBundlePropagationTest
0s
nodepool_additionalTrustBundlePropagation_test.go:72: Updating hosted cluster with additional trust bundle. Bundle: additional-trust-bundle
nodepool_additionalTrustBundlePropagation_test.go:80: Successfully waited for NodePool e2e-clusters-j8mz9/node-pool-7hngb-test-additional-trust-bundle-propagation to begin updating in 10s
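The update at line 72 points the HostedCluster's additionalTrustBundle at a ConfigMap holding the extra CA bundle; the wait at line 80 is for the NodePool to begin rolling that bundle out to nodes. A sketch, assuming the HyperShift v1beta1 API and a controller-runtime client; the helper is hypothetical, with names taken from the log above.

    package example

    import (
    	"context"

    	corev1 "k8s.io/api/core/v1"
    	"sigs.k8s.io/controller-runtime/pkg/client"

    	// Import path assumed from current HyperShift releases.
    	hyperv1 "github.com/openshift/hypershift/api/hypershift/v1beta1"
    )

    // setAdditionalTrustBundle (hypothetical helper): reference a ConfigMap
    // containing the extra CA bundle from the HostedCluster spec.
    func setAdditionalTrustBundle(ctx context.Context, cl client.Client, key client.ObjectKey, cmName string) error {
    	hc := &hyperv1.HostedCluster{}
    	if err := cl.Get(ctx, key, hc); err != nil {
    		return err
    	}
    	hc.Spec.AdditionalTrustBundle = &corev1.LocalObjectReference{Name: cmName}
    	return cl.Update(ctx, hc)
    }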
TestNodePool/HostedCluster2/ValidateHostedCluster
0s
util.go:284: Successfully waited for kubeconfig to be published for HostedCluster e2e-clusters-j8mz9/node-pool-7hngb in 1m33s
util.go:301: Successfully waited for kubeconfig secret to have data in 0s
eventually.go:104: Failed to get *v1.SelfSubjectReview: Post "https://api-node-pool-7hngb.service.ci.hypershift.devcluster.openshift.com:443/apis/authentication.k8s.io/v1/selfsubjectreviews": dial tcp: lookup api-node-pool-7hngb.service.ci.hypershift.devcluster.openshift.com on 172.30.0.10:53: no such host
eventually.go:104: Failed to get *v1.SelfSubjectReview: Post "https://api-node-pool-7hngb.service.ci.hypershift.devcluster.openshift.com:443/apis/authentication.k8s.io/v1/selfsubjectreviews": dial tcp 98.90.51.7:443: i/o timeout
util.go:363: Successfully waited for a successful connection to the guest API server in 1m57.025s
util.go:565: Successfully waited for 0 nodes to become ready in 0s
util.go:2949: Successfully waited for HostedCluster e2e-clusters-j8mz9/node-pool-7hngb to have valid conditions in 2m12s
TestNodePool/HostedCluster2/ValidateHostedCluster/EnsureGuestWebhooksValidated
0s
TestNodePool/HostedCluster2/ValidateHostedCluster/EnsureNoCrashingPods
0s
util.go:284: Successfully waited for kubeconfig to be published for HostedCluster e2e-clusters-j8mz9/node-pool-7hngb in 0s util.go:301: Successfully waited for kubeconfig secret to have data in 0s
TestNodePool/HostedCluster2/ValidateHostedCluster/EnsureNodeCountMatchesNodePoolReplicas
0s
util.go:565: Successfully waited for 0 nodes to become ready for NodePool e2e-clusters-j8mz9/node-pool-7hngb-us-east-1c in 25ms
TestNodePool/HostedCluster2/ValidateHostedCluster/EnsureOAPIMountsTrustBundle
0s
TestNodePool/HostedCluster2/ValidateHostedCluster/ValidateConfigurationStatus
0s
util.go:4095: Successfully validated configuration authentication status consistency across HCP, HC, and guest cluster
TestUpgradeControlPlane
0s
control_plane_upgrade_test.go:25: Starting control plane upgrade test. FromImage: registry.build01.ci.openshift.org/ci-op-3h29njck/release@sha256:51100f0e7a6c69f210772cfeb63281be86f29af18a520a7b139846380ff5a4aa, toImage: registry.build01.ci.openshift.org/ci-op-3h29njck/release@sha256:d9262e3822e86ae5cd8dda21c94eaf92837e08c6551862951c81d82adab8fd96
hypershift_framework.go:430: Successfully created hostedcluster e2e-clusters-8dfwz/control-plane-upgrade-2wxhh in 48s
util.go:284: Successfully waited for kubeconfig to be published for HostedCluster e2e-clusters-8dfwz/control-plane-upgrade-2wxhh in 1m51s
util.go:301: Successfully waited for kubeconfig secret to have data in 0s
eventually.go:104: Failed to get *v1.SelfSubjectReview: Post "https://api-control-plane-upgrade-2wxhh.service.ci.hypershift.devcluster.openshift.com:443/apis/authentication.k8s.io/v1/selfsubjectreviews": dial tcp: lookup api-control-plane-upgrade-2wxhh.service.ci.hypershift.devcluster.openshift.com on 172.30.0.10:53: no such host
eventually.go:104: Failed to get *v1.SelfSubjectReview: Post "https://api-control-plane-upgrade-2wxhh.service.ci.hypershift.devcluster.openshift.com:443/apis/authentication.k8s.io/v1/selfsubjectreviews": dial tcp 54.81.101.117:443: i/o timeout
eventually.go:104: Failed to get *v1.SelfSubjectReview: Post "https://api-control-plane-upgrade-2wxhh.service.ci.hypershift.devcluster.openshift.com:443/apis/authentication.k8s.io/v1/selfsubjectreviews": dial tcp 18.207.25.199:443: i/o timeout
eventually.go:104: Failed to get *v1.SelfSubjectReview: Post "https://api-control-plane-upgrade-2wxhh.service.ci.hypershift.devcluster.openshift.com:443/apis/authentication.k8s.io/v1/selfsubjectreviews": dial tcp 54.81.101.117:443: i/o timeout
eventually.go:104: Failed to get *v1.SelfSubjectReview: Post "https://api-control-plane-upgrade-2wxhh.service.ci.hypershift.devcluster.openshift.com:443/apis/authentication.k8s.io/v1/selfsubjectreviews": dial tcp 18.207.25.199:443: connect: connection refused
eventually.go:104: Failed to get *v1.SelfSubjectReview: Post "https://api-control-plane-upgrade-2wxhh.service.ci.hypershift.devcluster.openshift.com:443/apis/authentication.k8s.io/v1/selfsubjectreviews": dial tcp 54.81.101.117:443: connect: connection refused
eventually.go:104: Failed to get *v1.SelfSubjectReview: Post "https://api-control-plane-upgrade-2wxhh.service.ci.hypershift.devcluster.openshift.com:443/apis/authentication.k8s.io/v1/selfsubjectreviews": dial tcp 18.207.25.199:443: connect: connection refused
eventually.go:104: Failed to get *v1.SelfSubjectReview: Post "https://api-control-plane-upgrade-2wxhh.service.ci.hypershift.devcluster.openshift.com:443/apis/authentication.k8s.io/v1/selfsubjectreviews": dial tcp 54.81.101.117:443: connect: connection refused
eventually.go:104: Failed to get *v1.SelfSubjectReview: Post "https://api-control-plane-upgrade-2wxhh.service.ci.hypershift.devcluster.openshift.com:443/apis/authentication.k8s.io/v1/selfsubjectreviews": dial tcp 18.207.25.199:443: connect: connection refused
eventually.go:104: Failed to get *v1.SelfSubjectReview: Post "https://api-control-plane-upgrade-2wxhh.service.ci.hypershift.devcluster.openshift.com:443/apis/authentication.k8s.io/v1/selfsubjectreviews": dial tcp 54.81.101.117:443: connect: connection refused
eventually.go:104: Failed to get *v1.SelfSubjectReview: Post "https://api-control-plane-upgrade-2wxhh.service.ci.hypershift.devcluster.openshift.com:443/apis/authentication.k8s.io/v1/selfsubjectreviews": dial tcp 100.50.134.213:443: connect: connection refused
eventually.go:104: Failed to get *v1.SelfSubjectReview: Post "https://api-control-plane-upgrade-2wxhh.service.ci.hypershift.devcluster.openshift.com:443/apis/authentication.k8s.io/v1/selfsubjectreviews": dial tcp 54.81.101.117:443: connect: connection refused
eventually.go:104: Failed to get *v1.SelfSubjectReview: Post "https://api-control-plane-upgrade-2wxhh.service.ci.hypershift.devcluster.openshift.com:443/apis/authentication.k8s.io/v1/selfsubjectreviews": dial tcp 100.50.134.213:443: connect: connection refused
eventually.go:104: Failed to get *v1.SelfSubjectReview: Post "https://api-control-plane-upgrade-2wxhh.service.ci.hypershift.devcluster.openshift.com:443/apis/authentication.k8s.io/v1/selfsubjectreviews": dial tcp 18.207.25.199:443: connect: connection refused
eventually.go:104: Failed to get *v1.SelfSubjectReview: Post "https://api-control-plane-upgrade-2wxhh.service.ci.hypershift.devcluster.openshift.com:443/apis/authentication.k8s.io/v1/selfsubjectreviews": dial tcp 54.81.101.117:443: connect: connection refused
eventually.go:104: Failed to get *v1.SelfSubjectReview: Post "https://api-control-plane-upgrade-2wxhh.service.ci.hypershift.devcluster.openshift.com:443/apis/authentication.k8s.io/v1/selfsubjectreviews": dial tcp 18.207.25.199:443: connect: connection refused
eventually.go:104: Failed to get *v1.SelfSubjectReview: Post "https://api-control-plane-upgrade-2wxhh.service.ci.hypershift.devcluster.openshift.com:443/apis/authentication.k8s.io/v1/selfsubjectreviews": dial tcp 100.50.134.213:443: connect: connection refused
eventually.go:104: Failed to get *v1.SelfSubjectReview: Post "https://api-control-plane-upgrade-2wxhh.service.ci.hypershift.devcluster.openshift.com:443/apis/authentication.k8s.io/v1/selfsubjectreviews": dial tcp 54.81.101.117:443: connect: connection refused
eventually.go:104: Failed to get *v1.SelfSubjectReview: Post "https://api-control-plane-upgrade-2wxhh.service.ci.hypershift.devcluster.openshift.com:443/apis/authentication.k8s.io/v1/selfsubjectreviews": dial tcp 100.50.134.213:443: connect: connection refused
eventually.go:104: Failed to get *v1.SelfSubjectReview: Post "https://api-control-plane-upgrade-2wxhh.service.ci.hypershift.devcluster.openshift.com:443/apis/authentication.k8s.io/v1/selfsubjectreviews": dial tcp 18.207.25.199:443: connect: connection refused
util.go:363: Successfully waited for a successful connection to the guest API server in 3m3.025s
util.go:565: Successfully waited for 2 nodes to become ready in 7m42s
util.go:598: Successfully waited for HostedCluster e2e-clusters-8dfwz/control-plane-upgrade-2wxhh to rollout in 4m3s
util.go:2949: Successfully waited for HostedCluster e2e-clusters-8dfwz/control-plane-upgrade-2wxhh to have valid conditions in 33s
util.go:565: Successfully waited for 2 nodes to become ready for NodePool e2e-clusters-8dfwz/control-plane-upgrade-2wxhh-us-east-1c in 25ms
util.go:284: Successfully waited for kubeconfig to be published for HostedCluster e2e-clusters-8dfwz/control-plane-upgrade-2wxhh in 0s
util.go:301: Successfully waited for kubeconfig secret to have data in 0s
util.go:284: Successfully waited for kubeconfig to be published for HostedCluster e2e-clusters-8dfwz/control-plane-upgrade-2wxhh in 0s
util.go:301: Successfully waited for kubeconfig secret to have data in 0s
util.go:4095: Successfully validated configuration authentication status consistency across HCP, HC, and guest cluster
util.go:284: Successfully waited for kubeconfig to be published for HostedCluster e2e-clusters-8dfwz/control-plane-upgrade-2wxhh in 0s
util.go:301: Successfully waited for kubeconfig secret to have data in 0s
util.go:363: Successfully waited for a successful connection to the guest API server in 0s
control_plane_upgrade_test.go:52: Updating cluster image. Image: registry.build01.ci.openshift.org/ci-op-3h29njck/release@sha256:d9262e3822e86ae5cd8dda21c94eaf92837e08c6551862951c81d82adab8fd96
TestUpgradeControlPlane/Main
0s
util.go:284: Successfully waited for kubeconfig to be published for HostedCluster e2e-clusters-8dfwz/control-plane-upgrade-2wxhh in 0s util.go:301: Successfully waited for kubeconfig secret to have data in 0s util.go:363: Successfully waited for a successful connection to the guest API server in 0s control_plane_upgrade_test.go:52: Updating cluster image. Image: registry.build01.ci.openshift.org/ci-op-3h29njck/release@sha256:d9262e3822e86ae5cd8dda21c94eaf92837e08c6551862951c81d82adab8fd96
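A control-plane-only upgrade is triggered by bumping the HostedCluster's release image while each NodePool keeps its own release field; that is what control_plane_upgrade_test.go:52 logs here. A sketch under the same assumptions as the earlier snippets (HyperShift v1beta1 API, controller-runtime client, hypothetical helper name):

    package example

    import (
    	"context"

    	"sigs.k8s.io/controller-runtime/pkg/client"

    	// Import path assumed from current HyperShift releases.
    	hyperv1 "github.com/openshift/hypershift/api/hypershift/v1beta1"
    )

    // upgradeControlPlane (hypothetical helper): set the HostedCluster release
    // image to start the control plane upgrade; NodePools are left untouched.
    func upgradeControlPlane(ctx context.Context, cl client.Client, key client.ObjectKey, toImage string) error {
    	hc := &hyperv1.HostedCluster{}
    	if err := cl.Get(ctx, key, hc); err != nil {
    		return err
    	}
    	hc.Spec.Release.Image = toImage
    	return cl.Update(ctx, hc)
    }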
TestUpgradeControlPlane/Main/Wait_for_control_plane_components_to_complete_rollout
0s
TestUpgradeControlPlane/ValidateHostedCluster
0s
util.go:284: Successfully waited for kubeconfig to be published for HostedCluster e2e-clusters-8dfwz/control-plane-upgrade-2wxhh in 1m51s
util.go:301: Successfully waited for kubeconfig secret to have data in 0s
eventually.go:104: Failed to get *v1.SelfSubjectReview: Post "https://api-control-plane-upgrade-2wxhh.service.ci.hypershift.devcluster.openshift.com:443/apis/authentication.k8s.io/v1/selfsubjectreviews": dial tcp: lookup api-control-plane-upgrade-2wxhh.service.ci.hypershift.devcluster.openshift.com on 172.30.0.10:53: no such host
eventually.go:104: Failed to get *v1.SelfSubjectReview: Post "https://api-control-plane-upgrade-2wxhh.service.ci.hypershift.devcluster.openshift.com:443/apis/authentication.k8s.io/v1/selfsubjectreviews": dial tcp 54.81.101.117:443: i/o timeout
eventually.go:104: Failed to get *v1.SelfSubjectReview: Post "https://api-control-plane-upgrade-2wxhh.service.ci.hypershift.devcluster.openshift.com:443/apis/authentication.k8s.io/v1/selfsubjectreviews": dial tcp 18.207.25.199:443: i/o timeout
eventually.go:104: Failed to get *v1.SelfSubjectReview: Post "https://api-control-plane-upgrade-2wxhh.service.ci.hypershift.devcluster.openshift.com:443/apis/authentication.k8s.io/v1/selfsubjectreviews": dial tcp 54.81.101.117:443: i/o timeout
eventually.go:104: Failed to get *v1.SelfSubjectReview: Post "https://api-control-plane-upgrade-2wxhh.service.ci.hypershift.devcluster.openshift.com:443/apis/authentication.k8s.io/v1/selfsubjectreviews": dial tcp 18.207.25.199:443: connect: connection refused
eventually.go:104: Failed to get *v1.SelfSubjectReview: Post "https://api-control-plane-upgrade-2wxhh.service.ci.hypershift.devcluster.openshift.com:443/apis/authentication.k8s.io/v1/selfsubjectreviews": dial tcp 54.81.101.117:443: connect: connection refused
eventually.go:104: Failed to get *v1.SelfSubjectReview: Post "https://api-control-plane-upgrade-2wxhh.service.ci.hypershift.devcluster.openshift.com:443/apis/authentication.k8s.io/v1/selfsubjectreviews": dial tcp 18.207.25.199:443: connect: connection refused
eventually.go:104: Failed to get *v1.SelfSubjectReview: Post "https://api-control-plane-upgrade-2wxhh.service.ci.hypershift.devcluster.openshift.com:443/apis/authentication.k8s.io/v1/selfsubjectreviews": dial tcp 54.81.101.117:443: connect: connection refused
eventually.go:104: Failed to get *v1.SelfSubjectReview: Post "https://api-control-plane-upgrade-2wxhh.service.ci.hypershift.devcluster.openshift.com:443/apis/authentication.k8s.io/v1/selfsubjectreviews": dial tcp 18.207.25.199:443: connect: connection refused
eventually.go:104: Failed to get *v1.SelfSubjectReview: Post "https://api-control-plane-upgrade-2wxhh.service.ci.hypershift.devcluster.openshift.com:443/apis/authentication.k8s.io/v1/selfsubjectreviews": dial tcp 54.81.101.117:443: connect: connection refused
eventually.go:104: Failed to get *v1.SelfSubjectReview: Post "https://api-control-plane-upgrade-2wxhh.service.ci.hypershift.devcluster.openshift.com:443/apis/authentication.k8s.io/v1/selfsubjectreviews": dial tcp 100.50.134.213:443: connect: connection refused
eventually.go:104: Failed to get *v1.SelfSubjectReview: Post "https://api-control-plane-upgrade-2wxhh.service.ci.hypershift.devcluster.openshift.com:443/apis/authentication.k8s.io/v1/selfsubjectreviews": dial tcp 54.81.101.117:443: connect: connection refused
eventually.go:104: Failed to get *v1.SelfSubjectReview: Post "https://api-control-plane-upgrade-2wxhh.service.ci.hypershift.devcluster.openshift.com:443/apis/authentication.k8s.io/v1/selfsubjectreviews": dial tcp 100.50.134.213:443: connect: connection refused
eventually.go:104: Failed to get *v1.SelfSubjectReview: Post "https://api-control-plane-upgrade-2wxhh.service.ci.hypershift.devcluster.openshift.com:443/apis/authentication.k8s.io/v1/selfsubjectreviews": dial tcp 18.207.25.199:443: connect: connection refused
eventually.go:104: Failed to get *v1.SelfSubjectReview: Post "https://api-control-plane-upgrade-2wxhh.service.ci.hypershift.devcluster.openshift.com:443/apis/authentication.k8s.io/v1/selfsubjectreviews": dial tcp 54.81.101.117:443: connect: connection refused
eventually.go:104: Failed to get *v1.SelfSubjectReview: Post "https://api-control-plane-upgrade-2wxhh.service.ci.hypershift.devcluster.openshift.com:443/apis/authentication.k8s.io/v1/selfsubjectreviews": dial tcp 18.207.25.199:443: connect: connection refused
eventually.go:104: Failed to get *v1.SelfSubjectReview: Post "https://api-control-plane-upgrade-2wxhh.service.ci.hypershift.devcluster.openshift.com:443/apis/authentication.k8s.io/v1/selfsubjectreviews": dial tcp 100.50.134.213:443: connect: connection refused
eventually.go:104: Failed to get *v1.SelfSubjectReview: Post "https://api-control-plane-upgrade-2wxhh.service.ci.hypershift.devcluster.openshift.com:443/apis/authentication.k8s.io/v1/selfsubjectreviews": dial tcp 54.81.101.117:443: connect: connection refused
eventually.go:104: Failed to get *v1.SelfSubjectReview: Post "https://api-control-plane-upgrade-2wxhh.service.ci.hypershift.devcluster.openshift.com:443/apis/authentication.k8s.io/v1/selfsubjectreviews": dial tcp 100.50.134.213:443: connect: connection refused
eventually.go:104: Failed to get *v1.SelfSubjectReview: Post "https://api-control-plane-upgrade-2wxhh.service.ci.hypershift.devcluster.openshift.com:443/apis/authentication.k8s.io/v1/selfsubjectreviews": dial tcp 18.207.25.199:443: connect: connection refused
util.go:363: Successfully waited for a successful connection to the guest API server in 3m3.025s
util.go:565: Successfully waited for 2 nodes to become ready in 7m42s
util.go:598: Successfully waited for HostedCluster e2e-clusters-8dfwz/control-plane-upgrade-2wxhh to rollout in 4m3s
util.go:2949: Successfully waited for HostedCluster e2e-clusters-8dfwz/control-plane-upgrade-2wxhh to have valid conditions in 33s
TestUpgradeControlPlane/ValidateHostedCluster/EnsureGuestWebhooksValidated
0s
TestUpgradeControlPlane/ValidateHostedCluster/EnsureNoCrashingPods
0s
util.go:284: Successfully waited for kubeconfig to be published for HostedCluster e2e-clusters-8dfwz/control-plane-upgrade-2wxhh in 0s util.go:301: Successfully waited for kubeconfig secret to have data in 0s
TestUpgradeControlPlane/ValidateHostedCluster/EnsureNodeCommunication
0s
util.go:284: Successfully waited for kubeconfig to be published for HostedCluster e2e-clusters-8dfwz/control-plane-upgrade-2wxhh in 0s util.go:301: Successfully waited for kubeconfig secret to have data in 0s
TestUpgradeControlPlane/ValidateHostedCluster/EnsureNodeCountMatchesNodePoolReplicas
0s
util.go:565: Successfully waited for 2 nodes to become ready for NodePool e2e-clusters-8dfwz/control-plane-upgrade-2wxhh-us-east-1c in 25ms
TestUpgradeControlPlane/ValidateHostedCluster/EnsureOAPIMountsTrustBundle
0s
TestUpgradeControlPlane/ValidateHostedCluster/ValidateConfigurationStatus
0s
util.go:4095: Successfully validated configuration authentication status consistency across HCP, HC, and guest cluster