Failed Tests
hypershift_framework.go:430: Successfully created hostedcluster e2e-clusters-qx86n/autoscaling-frn7s in 2m28s
util.go:284: Successfully waited for kubeconfig to be published for HostedCluster e2e-clusters-qx86n/autoscaling-frn7s in 3m30.05s
util.go:301: Successfully waited for kubeconfig secret to have data in 50ms
eventually.go:104: Failed to get *v1.SelfSubjectReview: Post "https://api-autoscaling-frn7s.aks-e2e.hypershift.azure.devcluster.openshift.com:443/apis/authentication.k8s.io/v1/selfsubjectreviews": net/http: TLS handshake timeout
util.go:363: Successfully waited for a successful connection to the guest API server in 20.225s
util.go:565: Successfully waited for 1 nodes to become ready in 10m21.05s
util.go:598: Successfully waited for HostedCluster e2e-clusters-qx86n/autoscaling-frn7s to rollout in 3m45.05s
util.go:2949: Successfully waited for HostedCluster e2e-clusters-qx86n/autoscaling-frn7s to have valid conditions in 50ms
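The "Failed to get *v1.SelfSubjectReview ... TLS handshake timeout" lines above are transient: the connectivity check keeps issuing SelfSubjectReview requests against the guest API server until one succeeds. A minimal sketch of such a probe, assuming client-go and a kubeconfig already read from the published secret (function name, interval, and timeout are illustrative, not the test's util.go code):

```go
// Sketch only: probe guest API server reachability with SelfSubjectReview.
package example

import (
	"context"
	"fmt"
	"time"

	authenticationv1 "k8s.io/api/authentication/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func waitForGuestAPIServer(ctx context.Context, kubeconfig []byte) error {
	cfg, err := clientcmd.RESTConfigFromKubeConfig(kubeconfig)
	if err != nil {
		return err
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		return err
	}
	// Keep retrying through transient failures such as "TLS handshake timeout"
	// or "EOF" while the KAS endpoint finishes coming up.
	return wait.PollUntilContextTimeout(ctx, 10*time.Second, 10*time.Minute, true,
		func(ctx context.Context) (bool, error) {
			_, err := client.AuthenticationV1().SelfSubjectReviews().Create(ctx,
				&authenticationv1.SelfSubjectReview{}, metav1.CreateOptions{})
			if err != nil {
				fmt.Printf("guest API server not reachable yet: %v\n", err)
				return false, nil
			}
			return true, nil
		})
}
```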
util.go:565: Successfully waited for 1 nodes to become ready for NodePool e2e-clusters-qx86n/autoscaling-frn7s in 175ms
util.go:284: Successfully waited for kubeconfig to be published for HostedCluster e2e-clusters-qx86n/autoscaling-frn7s in 50ms
util.go:301: Successfully waited for kubeconfig secret to have data in 50ms
util.go:284: Successfully waited for kubeconfig to be published for HostedCluster e2e-clusters-qx86n/autoscaling-frn7s in 50ms
util.go:301: Successfully waited for kubeconfig secret to have data in 50ms
util.go:4095: Successfully validated configuration authentication status consistency across HCP, HC, and guest cluster
util.go:284: Successfully waited for kubeconfig to be published for HostedCluster e2e-clusters-qx86n/autoscaling-frn7s in 50ms
util.go:301: Successfully waited for kubeconfig secret to have data in 50ms
util.go:363: Successfully waited for a successful connection to the guest API server in 25ms
util.go:565: Successfully waited for 1 nodes to become ready in 75ms
autoscaling_test.go:118: Enabled autoscaling. Namespace: e2e-clusters-qx86n, name: autoscaling-frn7s, min: 1, max: 3
autoscaling_test.go:137: Created workload. Node: autoscaling-frn7s-cf25b-hnffc, memcapacity: 15214984Ki
util.go:565: Successfully waited for 3 nodes to become ready in 5m18.075s
autoscaling_test.go:157: Deleted workload
util.go:284: Successfully waited for kubeconfig to be published for HostedCluster e2e-clusters-qx86n/autoscaling-frn7s in 50ms
util.go:301: Successfully waited for kubeconfig secret to have data in 50ms
util.go:363: Successfully waited for a successful connection to the guest API server in 25ms
util.go:565: Successfully waited for 1 nodes to become ready in 75ms
autoscaling_test.go:118: Enabled autoscaling. Namespace: e2e-clusters-qx86n, name: autoscaling-frn7s, min: 1, max: 3
autoscaling_test.go:137: Created workload. Node: autoscaling-frn7s-cf25b-hnffc, memcapacity: 15214984Ki
util.go:565: Successfully waited for 3 nodes to become ready in 5m18.075s
autoscaling_test.go:157: Deleted workload
util.go:284: Successfully waited for kubeconfig to be published for HostedCluster e2e-clusters-qx86n/autoscaling-frn7s in 3m30.05s
util.go:301: Successfully waited for kubeconfig secret to have data in 50ms
eventually.go:104: Failed to get *v1.SelfSubjectReview: Post "https://api-autoscaling-frn7s.aks-e2e.hypershift.azure.devcluster.openshift.com:443/apis/authentication.k8s.io/v1/selfsubjectreviews": net/http: TLS handshake timeout
util.go:363: Successfully waited for a successful connection to the guest API server in 20.225s
util.go:565: Successfully waited for 1 nodes to become ready in 10m21.05s
util.go:598: Successfully waited for HostedCluster e2e-clusters-qx86n/autoscaling-frn7s to rollout in 3m45.05s
util.go:2949: Successfully waited for HostedCluster e2e-clusters-qx86n/autoscaling-frn7s to have valid conditions in 50ms
util.go:284: Successfully waited for kubeconfig to be published for HostedCluster e2e-clusters-qx86n/autoscaling-frn7s in 50ms
util.go:301: Successfully waited for kubeconfig secret to have data in 50ms
util.go:284: Successfully waited for kubeconfig to be published for HostedCluster e2e-clusters-qx86n/autoscaling-frn7s in 50ms
util.go:301: Successfully waited for kubeconfig secret to have data in 50ms
util.go:565: Successfully waited for 1 nodes to become ready for NodePool e2e-clusters-qx86n/autoscaling-frn7s in 175ms
util.go:4095: Successfully validated configuration authentication status consistency across HCP, HC, and guest cluster
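For context on the autoscaling steps logged above: the test enables autoscaling on the NodePool (min 1, max 3) and then creates a workload whose memory requests exceed what the single existing node (memcapacity 15214984Ki) can hold, so the cluster autoscaler must add nodes until 3 are ready. A minimal sketch of such a scale-up workload, assuming client-go; the image, namespace, and request size are illustrative, not taken from autoscaling_test.go:

```go
// Sketch only: a memory-heavy Deployment that forces the autoscaler to add nodes.
package example

import (
	"context"

	appsv1 "k8s.io/api/apps/v1"
	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

func createScaleUpWorkload(ctx context.Context, client kubernetes.Interface) error {
	replicas := int32(3) // one replica per desired node; the NodePool max is 3
	labels := map[string]string{"app": "autoscaling-workload"}
	deployment := &appsv1.Deployment{
		ObjectMeta: metav1.ObjectMeta{Name: "autoscaling-workload", Namespace: "default"},
		Spec: appsv1.DeploymentSpec{
			Replicas: &replicas,
			Selector: &metav1.LabelSelector{MatchLabels: labels},
			Template: corev1.PodTemplateSpec{
				ObjectMeta: metav1.ObjectMeta{Labels: labels},
				Spec: corev1.PodSpec{
					Containers: []corev1.Container{{
						Name:    "sleep",
						Image:   "registry.access.redhat.com/ubi9/ubi-minimal:latest", // placeholder image
						Command: []string{"sleep", "infinity"},
						Resources: corev1.ResourceRequirements{
							Requests: corev1.ResourceList{
								// Large enough that each replica needs its own node,
								// given the logged node memory capacity.
								corev1.ResourceMemory: resource.MustParse("12Gi"),
							},
						},
					}},
				},
			},
		},
	}
	_, err := client.AppsV1().Deployments(deployment.Namespace).Create(ctx, deployment, metav1.CreateOptions{})
	return err
}
```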
hypershift_framework.go:430: Successfully created hostedcluster e2e-clusters-v24k6/azure-scheduler-thpbs in 2m15s
util.go:284: Successfully waited for kubeconfig to be published for HostedCluster e2e-clusters-v24k6/azure-scheduler-thpbs in 3m15.05s
util.go:301: Successfully waited for kubeconfig secret to have data in 50ms
eventually.go:104: Failed to get *v1.SelfSubjectReview: Post "https://api-azure-scheduler-thpbs.aks-e2e.hypershift.azure.devcluster.openshift.com:443/apis/authentication.k8s.io/v1/selfsubjectreviews": net/http: TLS handshake timeout
util.go:363: Successfully waited for a successful connection to the guest API server in 27.35s
util.go:565: Successfully waited for 2 nodes to become ready in 10m21.075s
util.go:598: Successfully waited for HostedCluster e2e-clusters-v24k6/azure-scheduler-thpbs to rollout in 3m0.05s
util.go:2949: Successfully waited for HostedCluster e2e-clusters-v24k6/azure-scheduler-thpbs to have valid conditions in 50ms
util.go:565: Successfully waited for 2 nodes to become ready for NodePool e2e-clusters-v24k6/azure-scheduler-thpbs in 150ms
util.go:284: Successfully waited for kubeconfig to be published for HostedCluster e2e-clusters-v24k6/azure-scheduler-thpbs in 50ms
util.go:301: Successfully waited for kubeconfig secret to have data in 50ms
util.go:284: Successfully waited for kubeconfig to be published for HostedCluster e2e-clusters-v24k6/azure-scheduler-thpbs in 50ms
util.go:301: Successfully waited for kubeconfig secret to have data in 50ms
util.go:4095: Successfully validated configuration authentication status consistency across HCP, HC, and guest cluster
util.go:284: Successfully waited for kubeconfig to be published for HostedCluster e2e-clusters-v24k6/azure-scheduler-thpbs in 50ms
util.go:301: Successfully waited for kubeconfig secret to have data in 50ms
util.go:363: Successfully waited for a successful connection to the guest API server in 25ms
util.go:565: Successfully waited for 2 nodes to become ready in 75ms
azure_scheduler_test.go:110: Updated clusterSizingConfig.
azure_scheduler_test.go:157: Successfully waited for HostedCluster size label and annotations to be updated in 50ms
azure_scheduler_test.go:149: Scaled Nodepool. Namespace: e2e-clusters-v24k6, name: azure-scheduler-thpbs, replicas: 0xc00300d7e0
util.go:565: Successfully waited for 3 nodes to become ready in 4m48.075s
azure_scheduler_test.go:157: Successfully waited for HostedCluster size label and annotations to be updated in 75ms
azure_scheduler_test.go:181: Successfully waited for control-plane-operator pod to be running with the expected resource request in 50ms
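The "Scaled Nodepool" line above (which logs the replica count as a pointer address, 0xc00300d7e0, rather than its value) corresponds to bumping spec.replicas on the NodePool so the scheduler's cluster-size label changes. A hedged sketch of that update with a dynamic client; the v1beta1 API version is assumed:

```go
// Sketch only: scale a HyperShift NodePool by patching spec.replicas.
package example

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/runtime/schema"
	"k8s.io/apimachinery/pkg/types"
	"k8s.io/client-go/dynamic"
)

func scaleNodePool(ctx context.Context, client dynamic.Interface, namespace, name string, replicas int32) error {
	nodePoolGVR := schema.GroupVersionResource{
		Group:    "hypershift.openshift.io",
		Version:  "v1beta1", // assumed API version
		Resource: "nodepools",
	}
	patch := []byte(fmt.Sprintf(`{"spec":{"replicas":%d}}`, replicas))
	_, err := client.Resource(nodePoolGVR).Namespace(namespace).
		Patch(ctx, name, types.MergePatchType, patch, metav1.PatchOptions{})
	return err
}
```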
util.go:284: Successfully waited for kubeconfig to be published for HostedCluster e2e-clusters-v24k6/azure-scheduler-thpbs in 50ms
util.go:301: Successfully waited for kubeconfig secret to have data in 50ms
util.go:363: Successfully waited for a successful connection to the guest API server in 25ms
util.go:3224: Successfully waited for HostedCluster e2e-clusters-v24k6/azure-scheduler-thpbs to have valid Status.Payload in 75ms
util.go:1156: test only supported on AWS platform, saw Azure
util.go:2527: Checking that all ValidatingAdmissionPolicies are present
util.go:2553: Checking Denied KAS Requests for ValidatingAdmissionPolicies
util.go:2569: Checking ClusterOperator status modifications are allowed
util.go:3917: All 46 pods in namespace e2e-clusters-v24k6-azure-scheduler-thpbs have the expected RunAsUser UID 1002
util.go:284: Successfully waited for kubeconfig to be published for HostedCluster e2e-clusters-v24k6/azure-scheduler-thpbs in 50ms
util.go:301: Successfully waited for kubeconfig secret to have data in 50ms
util.go:363: Successfully waited for a successful connection to the guest API server in 25ms
util.go:1156: test only supported on AWS platform, saw Azure
util.go:3224: Successfully waited for HostedCluster e2e-clusters-v24k6/azure-scheduler-thpbs to have valid Status.Payload in 75ms
util.go:3917: All 46 pods in namespace e2e-clusters-v24k6-azure-scheduler-thpbs have the expected RunAsUser UID 1002
util.go:2553: Checking Denied KAS Requests for ValidatingAdmissionPolicies
util.go:2569: Checking ClusterOperator status modifications are allowed
util.go:2527: Checking that all ValidatingAdmissionPolicies are present
util.go:284: Successfully waited for kubeconfig to be published for HostedCluster e2e-clusters-v24k6/azure-scheduler-thpbs in 50ms
util.go:301: Successfully waited for kubeconfig secret to have data in 50ms
util.go:363: Successfully waited for a successful connection to the guest API server in 25ms
util.go:565: Successfully waited for 2 nodes to become ready in 75ms
azure_scheduler_test.go:110: Updated clusterSizingConfig.
azure_scheduler_test.go:157: Successfully waited for HostedCluster size label and annotations to be updated in 50ms
azure_scheduler_test.go:149: Scaled Nodepool. Namespace: e2e-clusters-v24k6, name: azure-scheduler-thpbs, replicas: 0xc00300d7e0
util.go:565: Successfully waited for 3 nodes to become ready in 4m48.075s
azure_scheduler_test.go:157: Successfully waited for HostedCluster size label and annotations to be updated in 75ms
azure_scheduler_test.go:181: Successfully waited for control-plane-operator pod to be running with the expected resource request in 50ms
util.go:284: Successfully waited for kubeconfig to be published for HostedCluster e2e-clusters-v24k6/azure-scheduler-thpbs in 3m15.05s
util.go:301: Successfully waited for kubeconfig secret to have data in 50ms
eventually.go:104: Failed to get *v1.SelfSubjectReview: Post "https://api-azure-scheduler-thpbs.aks-e2e.hypershift.azure.devcluster.openshift.com:443/apis/authentication.k8s.io/v1/selfsubjectreviews": net/http: TLS handshake timeout
util.go:363: Successfully waited for a successful connection to the guest API server in 27.35s
util.go:565: Successfully waited for 2 nodes to become ready in 10m21.075s
util.go:598: Successfully waited for HostedCluster e2e-clusters-v24k6/azure-scheduler-thpbs to rollout in 3m0.05s
util.go:2949: Successfully waited for HostedCluster e2e-clusters-v24k6/azure-scheduler-thpbs to have valid conditions in 50ms
util.go:284: Successfully waited for kubeconfig to be published for HostedCluster e2e-clusters-v24k6/azure-scheduler-thpbs in 50ms
util.go:301: Successfully waited for kubeconfig secret to have data in 50ms
util.go:284: Successfully waited for kubeconfig to be published for HostedCluster e2e-clusters-v24k6/azure-scheduler-thpbs in 50ms
util.go:301: Successfully waited for kubeconfig secret to have data in 50ms
util.go:565: Successfully waited for 2 nodes to become ready for NodePool e2e-clusters-v24k6/azure-scheduler-thpbs in 150ms
util.go:4095: Successfully validated configuration authentication status consistency across HCP, HC, and guest cluster
hypershift_framework.go:430: Successfully created hostedcluster e2e-clusters-2ss8j/create-cluster-2zgr2 in 2m39s
util.go:284: Successfully waited for kubeconfig to be published for HostedCluster e2e-clusters-2ss8j/create-cluster-2zgr2 in 3m21.05s
util.go:301: Successfully waited for kubeconfig secret to have data in 50ms
eventually.go:104: Failed to get *v1.SelfSubjectReview: Post "https://api-create-cluster-2zgr2.aks-e2e.hypershift.azure.devcluster.openshift.com:443/apis/authentication.k8s.io/v1/selfsubjectreviews": EOF
util.go:363: Successfully waited for a successful connection to the guest API server in 18.6s
util.go:565: Successfully waited for 2 nodes to become ready in 11m9.075s
util.go:598: Successfully waited for HostedCluster e2e-clusters-2ss8j/create-cluster-2zgr2 to rollout in 4m6.075s
util.go:2949: Successfully waited for HostedCluster e2e-clusters-2ss8j/create-cluster-2zgr2 to have valid conditions in 50ms
util.go:565: Successfully waited for 2 nodes to become ready for NodePool e2e-clusters-2ss8j/create-cluster-2zgr2 in 175ms
util.go:284: Successfully waited for kubeconfig to be published for HostedCluster e2e-clusters-2ss8j/create-cluster-2zgr2 in 50ms
util.go:301: Successfully waited for kubeconfig secret to have data in 50ms
util.go:284: Successfully waited for kubeconfig to be published for HostedCluster e2e-clusters-2ss8j/create-cluster-2zgr2 in 50ms
util.go:301: Successfully waited for kubeconfig secret to have data in 50ms
util.go:4095: Successfully validated configuration authentication status consistency across HCP, HC, and guest cluster
util.go:284: Successfully waited for kubeconfig to be published for HostedCluster e2e-clusters-2ss8j/create-cluster-2zgr2 in 50ms
util.go:301: Successfully waited for kubeconfig secret to have data in 50ms
util.go:363: Successfully waited for a successful connection to the guest API server in 25ms
create_cluster_test.go:2532: fetching mgmt kubeconfig
util.go:284: Successfully waited for kubeconfig to be published for HostedCluster e2e-clusters-2ss8j/create-cluster-2zgr2 in 50ms
util.go:301: Successfully waited for kubeconfig secret to have data in 50ms
util.go:284: Successfully waited for kubeconfig to be published for HostedCluster e2e-clusters-2ss8j/create-cluster-2zgr2 in 75ms
util.go:301: Successfully waited for kubeconfig secret to have data in 50ms
util.go:363: Successfully waited for a successful connection to the guest API server in 25ms
util.go:1928: NodePool replicas: 2, Available nodes: 2
util.go:2021: Deleting the additional-pull-secret secret in the DataPlane
control_plane_pki_operator.go:95: generating new break-glass credentials for more than one signer
pki.go:76: loading certificate/key pair from disk for signer customer-break-glass, use $REGENERATE_PKI to generate new ones
control_plane_pki_operator.go:204: creating CSR "1gc5bvxn6b1d79pb8qehemx9k284lzzpz9rg3zao4nau" for signer "customer-break-glass", requesting client auth usages
control_plane_pki_operator.go:214: creating CSRA e2e-clusters-2ss8j-create-cluster-2zgr2/1gc5bvxn6b1d79pb8qehemx9k284lzzpz9rg3zao4nau to trigger automatic approval of the CSR
control_plane_pki_operator.go:221: Successfully waited for CSR "1gc5bvxn6b1d79pb8qehemx9k284lzzpz9rg3zao4nau" to be approved and signed in 50ms
control_plane_pki_operator.go:133: validating that the client certificate provides the appropriate access
control_plane_pki_operator.go:119: amending the existing kubeconfig to use break-glass client certificate credentials
control_plane_pki_operator.go:136: issuing SSR to identify the subject we are given using the client certificate
control_plane_pki_operator.go:156: ensuring that the SSR identifies the client certificate as having system:masters power and correct username
pki.go:76: loading certificate/key pair from disk for signer sre-break-glass, use $REGENERATE_PKI to generate new ones
control_plane_pki_operator.go:204: creating CSR "1th4r9m2uo5xnxg6oeel88gvptazc821adc0y3w9tfln" for signer "sre-break-glass", requesting client auth usages
control_plane_pki_operator.go:214: creating CSRA e2e-clusters-2ss8j-create-cluster-2zgr2/1th4r9m2uo5xnxg6oeel88gvptazc821adc0y3w9tfln to trigger automatic approval of the CSR
control_plane_pki_operator.go:221: Successfully waited for CSR "1th4r9m2uo5xnxg6oeel88gvptazc821adc0y3w9tfln" to be approved and signed in 50ms
control_plane_pki_operator.go:133: validating that the client certificate provides the appropriate access
control_plane_pki_operator.go:119: amending the existing kubeconfig to use break-glass client certificate credentials
control_plane_pki_operator.go:136: issuing SSR to identify the subject we are given using the client certificate
control_plane_pki_operator.go:156: ensuring that the SSR identifies the client certificate as having system:masters power and correct username
control_plane_pki_operator.go:99: revoking the "customer-break-glass" signer
pki.go:76: loading certificate/key pair from disk for signer customer-break-glass, use $REGENERATE_PKI to generate new ones
control_plane_pki_operator.go:256: creating CRR e2e-clusters-2ss8j-create-cluster-2zgr2/1gc5bvxn6b1d79pb8qehemx9k284lzzpz9rg3zao4nau to trigger signer certificate revocation
control_plane_pki_operator.go:263: Successfully waited for CRR e2e-clusters-2ss8j-create-cluster-2zgr2/1gc5bvxn6b1d79pb8qehemx9k284lzzpz9rg3zao4nau to complete in 2m9.05s
control_plane_pki_operator.go:276: creating a client using a certificate from the revoked signer
control_plane_pki_operator.go:119: amending the existing kubeconfig to use break-glass client certificate credentials
control_plane_pki_operator.go:279: issuing SSR to confirm that we're not authorized to contact the server
control_plane_pki_operator.go:102: ensuring the break-glass credentials from "sre-break-glass" signer still work
pki.go:76: loading certificate/key pair from disk for signer sre-break-glass, use $REGENERATE_PKI to generate new ones
control_plane_pki_operator.go:133: validating that the client certificate provides the appropriate access
control_plane_pki_operator.go:119: amending the existing kubeconfig to use break-glass client certificate credentials
control_plane_pki_operator.go:136: issuing SSR to identify the subject we are given using the client certificate
control_plane_pki_operator.go:156: ensuring that the SSR identifies the client certificate as having system:masters power and correct username
control_plane_pki_operator.go:66: Grabbing customer break-glass credentials from client certificate secret e2e-clusters-2ss8j-create-cluster-2zgr2/customer-system-admin-client-cert-key
control_plane_pki_operator.go:133: validating that the client certificate provides the appropriate access
control_plane_pki_operator.go:119: amending the existing kubeconfig to use break-glass client certificate credentials
control_plane_pki_operator.go:136: issuing SSR to identify the subject we are given using the client certificate
control_plane_pki_operator.go:156: ensuring that the SSR identifies the client certificate as having system:masters power and correct username
control_plane_pki_operator.go:66: Grabbing customer break-glass credentials from client certificate secret e2e-clusters-2ss8j-create-cluster-2zgr2/sre-system-admin-client-cert-key
control_plane_pki_operator.go:133: validating that the client certificate provides the appropriate access
control_plane_pki_operator.go:119: amending the existing kubeconfig to use break-glass client certificate credentials
control_plane_pki_operator.go:136: issuing SSR to identify the subject we are given using the client certificate
control_plane_pki_operator.go:156: ensuring that the SSR identifies the client certificate as having system:masters power and correct username
pki.go:76: loading certificate/key pair from disk for signer sre-break-glass, use $REGENERATE_PKI to generate new ones
control_plane_pki_operator.go:204: creating CSR "15580vrismq1lp4mnut5cehojudc40sjblistdrhr4js" for signer "sre-break-glass", requesting client auth usages
control_plane_pki_operator.go:214: creating CSRA e2e-clusters-2ss8j-create-cluster-2zgr2/15580vrismq1lp4mnut5cehojudc40sjblistdrhr4js to trigger automatic approval of the CSR
control_plane_pki_operator.go:221: Successfully waited for CSR "15580vrismq1lp4mnut5cehojudc40sjblistdrhr4js" to be approved and signed in 50ms
control_plane_pki_operator.go:133: validating that the client certificate provides the appropriate access
control_plane_pki_operator.go:119: amending the existing kubeconfig to use break-glass client certificate credentials
control_plane_pki_operator.go:136: issuing SSR to identify the subject we are given using the client certificate
control_plane_pki_operator.go:156: ensuring that the SSR identifies the client certificate as having system:masters power and correct username
pki.go:76: loading certificate/key pair from disk for signer sre-break-glass, use $REGENERATE_PKI to generate new ones
control_plane_pki_operator.go:168: creating invalid CSR "14m48ro06ijy60bhg59aqk68p0rxzex9nbv33d371rkj" for signer "hypershift.openshift.io/e2e-clusters-2ss8j-create-cluster-2zgr2.sre-break-glass", requesting client auth usages
control_plane_pki_operator.go:178: creating CSRA e2e-clusters-2ss8j-create-cluster-2zgr2/14m48ro06ijy60bhg59aqk68p0rxzex9nbv33d371rkj to trigger automatic approval of the CSR
control_plane_pki_operator.go:184: Successfully waited for CSR "14m48ro06ijy60bhg59aqk68p0rxzex9nbv33d371rkj" to have invalid CN exposed in status in 50ms
pki.go:76: loading certificate/key pair from disk for signer customer-break-glass, use $REGENERATE_PKI to generate new ones
control_plane_pki_operator.go:204: creating CSR "1cqyjuzvgfeoqoqnrnvtqjmlnt504j0rx1fh6ixjyrcy" for signer "customer-break-glass", requesting client auth usages
control_plane_pki_operator.go:214: creating CSRA e2e-clusters-2ss8j-create-cluster-2zgr2/1cqyjuzvgfeoqoqnrnvtqjmlnt504j0rx1fh6ixjyrcy to trigger automatic approval of the CSR
control_plane_pki_operator.go:221: Successfully waited for CSR "1cqyjuzvgfeoqoqnrnvtqjmlnt504j0rx1fh6ixjyrcy" to be approved and signed in 3.075s
control_plane_pki_operator.go:133: validating that the client certificate provides the appropriate access
control_plane_pki_operator.go:119: amending the existing kubeconfig to use break-glass client certificate credentials
control_plane_pki_operator.go:136: issuing SSR to identify the subject we are given using the client certificate
control_plane_pki_operator.go:156: ensuring that the SSR identifies the client certificate as having system:masters power and correct username
pki.go:76: loading certificate/key pair from disk for signer customer-break-glass, use $REGENERATE_PKI to generate new ones
control_plane_pki_operator.go:168: creating invalid CSR "1d8362mprhurw8rs0qlz747lxpm9am7yonde9j2a0f1v" for signer "hypershift.openshift.io/e2e-clusters-2ss8j-create-cluster-2zgr2.customer-break-glass", requesting client auth usages
control_plane_pki_operator.go:178: creating CSRA e2e-clusters-2ss8j-create-cluster-2zgr2/1d8362mprhurw8rs0qlz747lxpm9am7yonde9j2a0f1v to trigger automatic approval of the CSR
control_plane_pki_operator.go:184: Successfully waited for CSR "1d8362mprhurw8rs0qlz747lxpm9am7yonde9j2a0f1v" to have invalid CN exposed in status in 50ms
pki.go:76: loading certificate/key pair from disk for signer sre-break-glass, use $REGENERATE_PKI to generate new ones
control_plane_pki_operator.go:256: creating CRR e2e-clusters-2ss8j-create-cluster-2zgr2/2fx92sr3fkjmqr2qzg6c2sv9vpvurt2wlobj9lcco4d6 to trigger signer certificate revocation
control_plane_pki_operator.go:263: Successfully waited for CRR e2e-clusters-2ss8j-create-cluster-2zgr2/2fx92sr3fkjmqr2qzg6c2sv9vpvurt2wlobj9lcco4d6 to complete in 3m9.05s
control_plane_pki_operator.go:276: creating a client using a certificate from the revoked signer
control_plane_pki_operator.go:119: amending the existing kubeconfig to use break-glass client certificate credentials
control_plane_pki_operator.go:279: issuing SSR to confirm that we're not authorized to contact the server
pki.go:76: loading certificate/key pair from disk for signer customer-break-glass, use $REGENERATE_PKI to generate new ones
control_plane_pki_operator.go:256: creating CRR e2e-clusters-2ss8j-create-cluster-2zgr2/2rnkviviloblnc6h6ooxtt53ezi500t5ssob5g8vt8br to trigger signer certificate revocation
control_plane_pki_operator.go:263: Successfully waited for CRR e2e-clusters-2ss8j-create-cluster-2zgr2/2rnkviviloblnc6h6ooxtt53ezi500t5ssob5g8vt8br to complete in 2m21.05s
control_plane_pki_operator.go:276: creating a client using a certificate from the revoked signer
control_plane_pki_operator.go:119: amending the existing kubeconfig to use break-glass client certificate credentials
control_plane_pki_operator.go:279: issuing SSR to confirm that we're not authorized to contact the server
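The break-glass flow above generates a client key, submits a CertificateSigningRequest for the cluster-scoped signer (the signer name pattern hypershift.openshift.io/<control-plane-namespace>.customer-break-glass is visible in the invalid-CSR lines), waits for the CSRA-triggered approval, and later revokes the signer via a CRR. A rough sketch of the CSR half, assuming client-go; the subject and object name are illustrative, and the operator enforces its own CN policy (hence the "invalid CN" checks above):

```go
// Sketch only: submit a client-auth CSR to the cluster-specific break-glass signer.
package example

import (
	"context"
	"crypto/ecdsa"
	"crypto/elliptic"
	"crypto/rand"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"fmt"

	certificatesv1 "k8s.io/api/certificates/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

func requestBreakGlassCert(ctx context.Context, client kubernetes.Interface, controlPlaneNamespace string) error {
	key, err := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
	if err != nil {
		return err
	}
	// Illustrative subject only; the PKI operator validates the CN against its own policy.
	der, err := x509.CreateCertificateRequest(rand.Reader, &x509.CertificateRequest{
		Subject: pkix.Name{CommonName: "customer-break-glass-admin"},
	}, key)
	if err != nil {
		return err
	}
	csr := &certificatesv1.CertificateSigningRequest{
		ObjectMeta: metav1.ObjectMeta{Name: "customer-break-glass-admin"},
		Spec: certificatesv1.CertificateSigningRequestSpec{
			// Signer is scoped to the hosted control plane namespace, matching the
			// pattern seen in the log lines above.
			SignerName: fmt.Sprintf("hypershift.openshift.io/%s.customer-break-glass", controlPlaneNamespace),
			Request:    pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE REQUEST", Bytes: der}),
			Usages:     []certificatesv1.KeyUsage{certificatesv1.UsageClientAuth},
		},
	}
	_, err = client.CertificatesV1().CertificateSigningRequests().Create(ctx, csr, metav1.CreateOptions{})
	return err
}
```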
util.go:169: failed to patch object create-cluster-2zgr2, will retry: HostedCluster.hypershift.openshift.io "create-cluster-2zgr2" is invalid: [spec: Invalid value: "object": Services is immutable. Changes might result in unpredictable and disruptive behavior., spec.services[0].servicePublishingStrategy: Invalid value: "object": nodePort is required when type is NodePort, and forbidden otherwise, spec.services[0].servicePublishingStrategy: Invalid value: "object": only route is allowed when type is Route, and forbidden otherwise]
util.go:169: failed to patch object create-cluster-2zgr2, will retry: HostedCluster.hypershift.openshift.io "create-cluster-2zgr2" is invalid: spec.controllerAvailabilityPolicy: Invalid value: "string": ControllerAvailabilityPolicy is immutable
util.go:169: failed to patch object create-cluster-2zgr2, will retry: HostedCluster.hypershift.openshift.io "create-cluster-2zgr2" is invalid: spec.capabilities: Invalid value: "object": Capabilities is immutable. Changes might result in unpredictable and disruptive behavior.
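The three "failed to patch" messages above are API-server rejections from the HostedCluster CRD's immutability validation (Services, ControllerAvailabilityPolicy, Capabilities), not infrastructure errors. A hedged sketch of asserting such a rejection, assuming a dynamic client and the v1beta1 API version; this is illustrative, not the util.go patch helper:

```go
// Sketch only: verify that patching an immutable HostedCluster field is rejected.
package example

import (
	"context"
	"fmt"

	apierrors "k8s.io/apimachinery/pkg/api/errors"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/runtime/schema"
	"k8s.io/apimachinery/pkg/types"
	"k8s.io/client-go/dynamic"
)

func expectImmutableFieldRejected(ctx context.Context, client dynamic.Interface, namespace, name string) error {
	hostedClusterGVR := schema.GroupVersionResource{
		Group:    "hypershift.openshift.io",
		Version:  "v1beta1", // assumed API version
		Resource: "hostedclusters",
	}
	// ControllerAvailabilityPolicy is validated as immutable, so this patch
	// should come back as an Invalid error rather than being applied.
	patch := []byte(`{"spec":{"controllerAvailabilityPolicy":"SingleReplica"}}`)
	_, err := client.Resource(hostedClusterGVR).Namespace(namespace).
		Patch(ctx, name, types.MergePatchType, patch, metav1.PatchOptions{})
	if err == nil {
		return fmt.Errorf("expected patch to be rejected, but it was accepted")
	}
	if !apierrors.IsInvalid(err) {
		return fmt.Errorf("expected an Invalid error, got: %w", err)
	}
	return nil
}
```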
util.go:2184: Using Azure-specific retry strategy for DNS propagation race condition
util.go:2193: Generating custom certificate with DNS name api-custom-cert-create-cluster-2zgr2.aks-e2e.hypershift.azure.devcluster.openshift.com
util.go:2198: Creating custom certificate secret
util.go:2214: Updating hosted cluster with KubeAPIDNSName and KAS custom serving cert
util.go:2250: Getting custom kubeconfig client
util.go:247: Successfully waited for KAS custom kubeconfig to be published for HostedCluster e2e-clusters-2ss8j/create-cluster-2zgr2 in 3.05s
util.go:264: Successfully waited for KAS custom kubeconfig secret to have data in 50ms
util.go:2255: waiting for the KubeAPIDNSName to be reconciled
util.go:247: Successfully waited for KAS custom kubeconfig to be published for HostedCluster e2e-clusters-2ss8j/create-cluster-2zgr2 in 50ms
util.go:264: Successfully waited for KAS custom kubeconfig secret to have data in 50ms
util.go:2267: Finding the external name destination for the KAS Service
util.go:2292: service custom DNS name not found, using the control plane endpoint
util.go:2292: service custom DNS name not found, using the control plane endpoint
util.go:2292: service custom DNS name not found, using the control plane endpoint
util.go:2304: Creating a new KAS Service to be used by the external-dns deployment in CI with the custom DNS name api-custom-cert-create-cluster-2zgr2.aks-e2e.hypershift.azure.devcluster.openshift.com
util.go:2326: [2026-01-10T17:15:35Z] Waiting until the URL is resolvable: api-custom-cert-create-cluster-2zgr2.aks-e2e.hypershift.azure.devcluster.openshift.com
util.go:2326: [2026-01-10T17:15:45Z] Waiting until the URL is resolvable: api-custom-cert-create-cluster-2zgr2.aks-e2e.hypershift.azure.devcluster.openshift.com
util.go:2326: [2026-01-10T17:15:55Z] Waiting until the URL is resolvable: api-custom-cert-create-cluster-2zgr2.aks-e2e.hypershift.azure.devcluster.openshift.com
util.go:2326: [2026-01-10T17:16:05Z] Waiting until the URL is resolvable: api-custom-cert-create-cluster-2zgr2.aks-e2e.hypershift.azure.devcluster.openshift.com
util.go:2326: [2026-01-10T17:16:15Z] Waiting until the URL is resolvable: api-custom-cert-create-cluster-2zgr2.aks-e2e.hypershift.azure.devcluster.openshift.com
util.go:2331: resolved the custom DNS name after 40.091079185s
util.go:2336: Waiting until the KAS Deployment is ready
util.go:2431: Successfully waited for the KAS custom kubeconfig secret to be deleted from HC Namespace in 5.05s
util.go:2447: Successfully waited for the KAS custom kubeconfig secret to be deleted from HCP Namespace in 50ms
util.go:2477: Successfully waited for the KAS custom kubeconfig status to be removed in 50ms
util.go:2510: Deleting custom certificate secret
util.go:2351: Checking CustomAdminKubeconfigStatus are present
util.go:2359: Checking CustomAdminKubeconfigs are present
util.go:2372: Checking CustomAdminKubeconfig reaches the KAS
util.go:2374: Using extended retry timeout for Azure DNS propagation
util.go:2390: Successfully verified custom kubeconfig can reach KAS
util.go:2396: Checking CustomAdminKubeconfig Infrastructure status is updated
util.go:2397: Successfully waited for a successful connection to the custom DNS guest API server in 50ms
util.go:2465: Checking CustomAdminKubeconfigs are removed
util.go:2494: Checking CustomAdminKubeconfigStatus are removed
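The custom-certificate block above updates the HostedCluster with a custom KAS DNS name and serving certificate, waits for the custom admin kubeconfig to be published and usable, and then removes it again. A sketch of the update step with a dynamic client; the field names (kubeAPIServerDNSName and configuration.apiServer.servingCerts.namedCertificates) are assumptions inferred from the log, not verified against the HyperShift API:

```go
// Sketch only: set a custom KAS DNS name and named serving certificate on a HostedCluster.
package example

import (
	"context"
	"encoding/json"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/runtime/schema"
	"k8s.io/apimachinery/pkg/types"
	"k8s.io/client-go/dynamic"
)

func setCustomKASDNSName(ctx context.Context, client dynamic.Interface, namespace, name, dnsName, certSecret string) error {
	hostedClusterGVR := schema.GroupVersionResource{
		Group:    "hypershift.openshift.io",
		Version:  "v1beta1", // assumed API version
		Resource: "hostedclusters",
	}
	patch, err := json.Marshal(map[string]any{
		"spec": map[string]any{
			"kubeAPIServerDNSName": dnsName, // assumed field name ("KubeAPIDNSName" in the log)
			"configuration": map[string]any{
				"apiServer": map[string]any{
					"servingCerts": map[string]any{
						"namedCertificates": []any{
							map[string]any{
								"names":              []string{dnsName},
								"servingCertificate": map[string]string{"name": certSecret},
							},
						},
					},
				},
			},
		},
	})
	if err != nil {
		return err
	}
	_, err = client.Resource(hostedClusterGVR).Namespace(namespace).
		Patch(ctx, name, types.MergePatchType, patch, metav1.PatchOptions{})
	return err
}
```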
util.go:3352: This test is only applicable to the AWS platform
util.go:2097: Waiting for ovnkube-node DaemonSet to be ready with 2 nodes
util.go:2139: DaemonSet ovnkube-node ready: 2/2 pods
util.go:2147: ✓ ovnkube-node DaemonSet is ready
util.go:2097: Waiting for global-pull-secret-syncer DaemonSet to be ready with 2 nodes
util.go:2135: DaemonSet global-pull-secret-syncer not ready: 0/2 pods ready
util.go:2135: DaemonSet global-pull-secret-syncer not ready: 0/2 pods ready
util.go:2135: DaemonSet global-pull-secret-syncer not ready: 0/2 pods ready
util.go:2135: DaemonSet global-pull-secret-syncer not ready: 0/2 pods ready
util.go:2139: DaemonSet global-pull-secret-syncer ready: 2/2 pods
util.go:2147: ✓ global-pull-secret-syncer DaemonSet is ready
util.go:2097: Waiting for konnectivity-agent DaemonSet to be ready with 2 nodes
util.go:2135: DaemonSet konnectivity-agent not ready: 0/2 pods ready
util.go:2135: DaemonSet konnectivity-agent not ready: 0/2 pods ready
util.go:2135: DaemonSet konnectivity-agent not ready: 0/2 pods ready
util.go:2135: DaemonSet konnectivity-agent not ready: 0/2 pods ready
util.go:2135: DaemonSet konnectivity-agent not ready: 0/2 pods ready
util.go:2135: DaemonSet konnectivity-agent not ready: 0/2 pods ready
util.go:2135: DaemonSet konnectivity-agent not ready: 0/2 pods ready
util.go:2135: DaemonSet konnectivity-agent not ready: 0/2 pods ready
util.go:2135: DaemonSet konnectivity-agent not ready: 0/2 pods ready
util.go:2135: DaemonSet konnectivity-agent not ready: 0/2 pods ready
util.go:2135: DaemonSet konnectivity-agent not ready: 0/2 pods ready
util.go:2135: DaemonSet konnectivity-agent not ready: 0/2 pods ready
util.go:2135: DaemonSet konnectivity-agent not ready: 0/2 pods ready
util.go:2135: DaemonSet konnectivity-agent not ready: 0/2 pods ready
util.go:2135: DaemonSet konnectivity-agent not ready: 0/2 pods ready
util.go:2135: DaemonSet konnectivity-agent not ready: 0/2 pods ready
util.go:2135: DaemonSet konnectivity-agent not ready: 1/2 pods ready
util.go:2135: DaemonSet konnectivity-agent not ready: 1/2 pods ready
util.go:2135: DaemonSet konnectivity-agent not ready: 1/2 pods ready
util.go:2135: DaemonSet konnectivity-agent not ready: 1/2 pods ready
util.go:2135: DaemonSet konnectivity-agent not ready: 1/2 pods ready
util.go:2135: DaemonSet konnectivity-agent not ready: 1/2 pods ready
util.go:2135: DaemonSet konnectivity-agent not ready: 1/2 pods ready
util.go:2139: DaemonSet konnectivity-agent ready: 2/2 pods
util.go:2147: ✓ konnectivity-agent DaemonSet is ready
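The repeated "not ready: N/2 pods ready" lines above come from a poll loop that waits until every scheduled DaemonSet pod is ready and the rollout has converged. A generic sketch of that wait, assuming client-go; the interval and timeout are illustrative, not the values used by util.go:

```go
// Sketch only: wait for a DaemonSet to have all scheduled pods updated and ready.
package example

import (
	"context"
	"fmt"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

func waitForDaemonSetReady(ctx context.Context, client kubernetes.Interface, namespace, name string) error {
	return wait.PollUntilContextTimeout(ctx, 10*time.Second, 15*time.Minute, true,
		func(ctx context.Context) (bool, error) {
			ds, err := client.AppsV1().DaemonSets(namespace).Get(ctx, name, metav1.GetOptions{})
			if err != nil {
				return false, nil // tolerate transient API errors and keep polling
			}
			desired := ds.Status.DesiredNumberScheduled
			if ds.Status.UpdatedNumberScheduled < desired || ds.Status.NumberReady < desired {
				fmt.Printf("DaemonSet %s not ready: %d/%d pods ready\n", name, ds.Status.NumberReady, desired)
				return false, nil
			}
			fmt.Printf("DaemonSet %s ready: %d/%d pods\n", name, ds.Status.NumberReady, desired)
			return true, nil
		})
}
```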
util.go:3748: Creating a pod which uses the restricted image
util.go:3773: Attempt 1/3: Creating pod
util.go:3778: Successfully created pod global-pull-secret-fail-pod in namespace kube-system on attempt 1
util.go:3799: Created pod global-pull-secret-fail-pod in namespace kube-system
util.go:3825: Pod is in the desired state, deleting it now
util.go:3828: Deleted the pod
util.go:2097: Waiting for ovnkube-node DaemonSet to be ready with 2 nodes
util.go:2139: DaemonSet ovnkube-node ready: 2/2 pods
util.go:2147: ✓ ovnkube-node DaemonSet is ready
util.go:2097: Waiting for global-pull-secret-syncer DaemonSet to be ready with 2 nodes
util.go:2139: DaemonSet global-pull-secret-syncer ready: 2/2 pods
util.go:2147: ✓ global-pull-secret-syncer DaemonSet is ready
util.go:2097: Waiting for konnectivity-agent DaemonSet to be ready with 2 nodes
util.go:2139: DaemonSet konnectivity-agent ready: 2/2 pods
util.go:2147: ✓ konnectivity-agent DaemonSet is ready
util.go:3748: Creating a pod which uses the restricted image
util.go:3773: Attempt 1/3: Creating pod
util.go:3778: Successfully created pod global-pull-secret-success-pod in namespace kube-system on attempt 1
util.go:3799: Created pod global-pull-secret-success-pod in namespace kube-system
util.go:3816: Pod phase: Pending, shouldFail: false
util.go:3816: Pod phase: Pending, shouldFail: false
util.go:3816: Pod phase: Pending, shouldFail: false
util.go:3816: Pod phase: Running, shouldFail: false
util.go:3820: Pod is running! Continuing...
util.go:3825: Pod is in the desired state, deleting it now
util.go:3828: Deleted the pod
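The fail/success pod pair above verifies the global pull secret: a pod using a restricted image should fail to pull before the additional credentials are synced to the nodes, and run once they are. A sketch of the success-path check, assuming client-go; the image reference is a placeholder for whatever restricted repository the e2e actually uses:

```go
// Sketch only: run a pod from a restricted image and wait for it to reach Running.
package example

import (
	"context"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

func runRestrictedImagePod(ctx context.Context, client kubernetes.Interface) error {
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "global-pull-secret-success-pod", Namespace: "kube-system"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "check",
				Image:   "quay.io/example/private-image:latest", // placeholder restricted image
				Command: []string{"sleep", "60"},
			}},
		},
	}
	if _, err := client.CoreV1().Pods(pod.Namespace).Create(ctx, pod, metav1.CreateOptions{}); err != nil {
		return err
	}
	defer client.CoreV1().Pods(pod.Namespace).Delete(ctx, pod.Name, metav1.DeleteOptions{})
	// The image pull only succeeds once the global-pull-secret-syncer has merged
	// the additional credentials into the node's kubelet pull secret.
	return wait.PollUntilContextTimeout(ctx, 5*time.Second, 5*time.Minute, true,
		func(ctx context.Context) (bool, error) {
			p, err := client.CoreV1().Pods(pod.Namespace).Get(ctx, pod.Name, metav1.GetOptions{})
			if err != nil {
				return false, nil
			}
			return p.Status.Phase == corev1.PodRunning, nil
		})
}
```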
util.go:2041: Waiting for GlobalPullSecretDaemonSet to process the deletion and stabilize all nodes
util.go:2095: Waiting for global-pull-secret-syncer DaemonSet to be ready (using DesiredNumberScheduled)
util.go:2124: DaemonSet global-pull-secret-syncer update in flight: 0/2 pods updated
util.go:2130: DaemonSet global-pull-secret-syncer ready: 2/2 pods ready, rollout complete
util.go:2147: ✓ global-pull-secret-syncer DaemonSet is ready
globalps.go:209: Creating kubelet config verifier DaemonSet
globalps.go:214: Waiting for OVN, GlobalPullSecret, Konnectivity and kubelet config verifier DaemonSets to be ready
util.go:2097: Waiting for ovnkube-node DaemonSet to be ready with 2 nodes
util.go:2135: DaemonSet ovnkube-node not ready: 1/2 pods ready
util.go:2135: DaemonSet ovnkube-node not ready: 0/2 pods ready
util.go:2139: DaemonSet ovnkube-node ready: 2/2 pods
util.go:2147: ✓ ovnkube-node DaemonSet is ready
util.go:2097: Waiting for global-pull-secret-syncer DaemonSet to be ready with 2 nodes
util.go:2139: DaemonSet global-pull-secret-syncer ready: 2/2 pods
util.go:2147: ✓ global-pull-secret-syncer DaemonSet is ready
util.go:2097: Waiting for konnectivity-agent DaemonSet to be ready with 2 nodes
util.go:2139: DaemonSet konnectivity-agent ready: 2/2 pods
util.go:2147: ✓ konnectivity-agent DaemonSet is ready
util.go:2095: Waiting for kubelet-config-verifier DaemonSet to be ready (using DesiredNumberScheduled)
util.go:2130: DaemonSet kubelet-config-verifier ready: 0/2 pods ready, rollout complete
util.go:2147: ✓ kubelet-config-verifier DaemonSet is ready
globalps.go:229: Cleaning up kubelet config verifier DaemonSet
util_ingress_operator_configuration.go:28: Verifying HostedCluster e2e-clusters-2ss8j/create-cluster-2zgr2 has custom Ingress Operator endpointPublishingStrategy
util_ingress_operator_configuration.go:37: Validating IngressController in guest cluster reflects the custom endpointPublishingStrategy
util_ingress_operator_configuration.go:38: Successfully waited for IngressController default in guest cluster to reflect the custom endpointPublishingStrategy in 75ms
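The ingress check above reads the default IngressController in the guest cluster and verifies it reflects the endpointPublishingStrategy configured on the HostedCluster. A sketch of that lookup with a dynamic client; the expected strategy value is passed in by the caller, and this is illustrative rather than the helper's actual code:

```go
// Sketch only: compare the guest IngressController's endpointPublishingStrategy type.
package example

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/apis/meta/v1/unstructured"
	"k8s.io/apimachinery/pkg/runtime/schema"
	"k8s.io/client-go/dynamic"
)

func checkEndpointPublishingStrategy(ctx context.Context, guest dynamic.Interface, want string) error {
	ingressControllerGVR := schema.GroupVersionResource{
		Group:    "operator.openshift.io",
		Version:  "v1",
		Resource: "ingresscontrollers",
	}
	ic, err := guest.Resource(ingressControllerGVR).Namespace("openshift-ingress-operator").
		Get(ctx, "default", metav1.GetOptions{})
	if err != nil {
		return err
	}
	got, found, err := unstructured.NestedString(ic.Object, "spec", "endpointPublishingStrategy", "type")
	if err != nil || !found {
		return fmt.Errorf("endpointPublishingStrategy.type not set on IngressController default: %v", err)
	}
	if got != want {
		return fmt.Errorf("expected endpointPublishingStrategy %q, got %q", want, got)
	}
	return nil
}
```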
util.go:284: Successfully waited for kubeconfig to be published for HostedCluster e2e-clusters-2ss8j/create-cluster-2zgr2 in 50ms
util.go:301: Successfully waited for kubeconfig secret to have data in 50ms
util.go:363: Successfully waited for a successful connection to the guest API server in 25ms
util.go:3224: Successfully waited for HostedCluster e2e-clusters-2ss8j/create-cluster-2zgr2 to have valid Status.Payload in 125ms
util.go:1156: test only supported on AWS platform, saw Azure
util.go:2527: Checking that all ValidatingAdmissionPolicies are present
util.go:2553: Checking Denied KAS Requests for ValidatingAdmissionPolicies
util.go:2569: Checking ClusterOperator status modifications are allowed
util.go:3917: All 45 pods in namespace e2e-clusters-2ss8j-create-cluster-2zgr2 have the expected RunAsUser UID 1008
util.go:284: Successfully waited for kubeconfig to be published for HostedCluster e2e-clusters-2ss8j/create-cluster-2zgr2 in 50ms
util.go:301: Successfully waited for kubeconfig secret to have data in 50ms
util.go:363: Successfully waited for a successful connection to the guest API server in 25ms
util.go:1156: test only supported on AWS platform, saw Azure
util.go:3224: Successfully waited for HostedCluster e2e-clusters-2ss8j/create-cluster-2zgr2 to have valid Status.Payload in 125ms
util.go:3917: All 45 pods in namespace e2e-clusters-2ss8j-create-cluster-2zgr2 have the expected RunAsUser UID 1008
util.go:2553: Checking Denied KAS Requests for ValidatingAdmissionPolicies
util.go:2569: Checking ClusterOperator status modifications are allowed
util.go:2527: Checking that all ValidatingAdmissionPolicies are present
util.go:284: Successfully waited for kubeconfig to be published for HostedCluster e2e-clusters-2ss8j/create-cluster-2zgr2 in 50ms
util.go:301: Successfully waited for kubeconfig secret to have data in 50ms
util.go:363: Successfully waited for a successful connection to the guest API server in 25ms
create_cluster_test.go:2532: fetching mgmt kubeconfig
util.go:284: Successfully waited for kubeconfig to be published for HostedCluster e2e-clusters-2ss8j/create-cluster-2zgr2 in 50ms
util.go:301: Successfully waited for kubeconfig secret to have data in 50ms
util.go:284: Successfully waited for kubeconfig to be published for HostedCluster e2e-clusters-2ss8j/create-cluster-2zgr2 in 75ms
util.go:301: Successfully waited for kubeconfig secret to have data in 50ms
util.go:363: Successfully waited for a successful connection to the guest API server in 25ms
util.go:1928: NodePool replicas: 2, Available nodes: 2
util.go:2021: Deleting the additional-pull-secret secret in the DataPlane
globalps.go:209: Creating kubelet config verifier DaemonSet
globalps.go:214: Waiting for OVN, GlobalPullSecret, Konnectivity and kubelet config verifier DaemonSets to be ready
util.go:2097: Waiting for ovnkube-node DaemonSet to be ready with 2 nodes
util.go:2135: DaemonSet ovnkube-node not ready: 1/2 pods ready
util.go:2135: DaemonSet ovnkube-node not ready: 0/2 pods ready
util.go:2139: DaemonSet ovnkube-node ready: 2/2 pods
util.go:2147: ✓ ovnkube-node DaemonSet is ready
util.go:2097: Waiting for global-pull-secret-syncer DaemonSet to be ready with 2 nodes
util.go:2139: DaemonSet global-pull-secret-syncer ready: 2/2 pods
util.go:2147: ✓ global-pull-secret-syncer DaemonSet is ready
util.go:2097: Waiting for konnectivity-agent DaemonSet to be ready with 2 nodes
util.go:2139: DaemonSet konnectivity-agent ready: 2/2 pods
util.go:2147: ✓ konnectivity-agent DaemonSet is ready
util.go:2095: Waiting for kubelet-config-verifier DaemonSet to be ready (using DesiredNumberScheduled)
util.go:2130: DaemonSet kubelet-config-verifier ready: 0/2 pods ready, rollout complete
util.go:2147: ✓ kubelet-config-verifier DaemonSet is ready
globalps.go:229: Cleaning up kubelet config verifier DaemonSet
util.go:3748: Creating a pod which uses the restricted image
util.go:3773: Attempt 1/3: Creating pod
util.go:3778: Successfully created pod global-pull-secret-fail-pod in namespace kube-system on attempt 1
util.go:3799: Created pod global-pull-secret-fail-pod in namespace kube-system
util.go:3825: Pod is in the desired state, deleting it now
util.go:3828: Deleted the pod
util.go:3748: Creating a pod which uses the restricted image
util.go:3773: Attempt 1/3: Creating pod
util.go:3778: Successfully created pod global-pull-secret-success-pod in namespace kube-system on attempt 1
util.go:3799: Created pod global-pull-secret-success-pod in namespace kube-system
util.go:3816: Pod phase: Pending, shouldFail: false
util.go:3816: Pod phase: Pending, shouldFail: false
util.go:3816: Pod phase: Pending, shouldFail: false
util.go:3816: Pod phase: Running, shouldFail: false
util.go:3820: Pod is running! Continuing...
util.go:3825: Pod is in the desired state, deleting it now
util.go:3828: Deleted the pod
util.go:3352: This test is only applicable to the AWS platform
util.go:169: failed to patch object create-cluster-2zgr2, will retry: HostedCluster.hypershift.openshift.io "create-cluster-2zgr2" is invalid: spec.capabilities: Invalid value: "object": Capabilities is immutable. Changes might result in unpredictable and disruptive behavior.
util.go:169: failed to patch object create-cluster-2zgr2, will retry: HostedCluster.hypershift.openshift.io "create-cluster-2zgr2" is invalid: [spec: Invalid value: "object": Services is immutable. Changes might result in unpredictable and disruptive behavior., spec.services[0].servicePublishingStrategy: Invalid value: "object": nodePort is required when type is NodePort, and forbidden otherwise, spec.services[0].servicePublishingStrategy: Invalid value: "object": only route is allowed when type is Route, and forbidden otherwise]
util.go:169: failed to patch object create-cluster-2zgr2, will retry: HostedCluster.hypershift.openshift.io "create-cluster-2zgr2" is invalid: spec.controllerAvailabilityPolicy: Invalid value: "string": ControllerAvailabilityPolicy is immutable
util_ingress_operator_configuration.go:28: Verifying HostedCluster e2e-clusters-2ss8j/create-cluster-2zgr2 has custom Ingress Operator endpointPublishingStrategy
util_ingress_operator_configuration.go:37: Validating IngressController in guest cluster reflects the custom endpointPublishingStrategy
util_ingress_operator_configuration.go:38: Successfully waited for IngressController default in guest cluster to reflect the custom endpointPublishingStrategy in 75ms
util.go:2184: Using Azure-specific retry strategy for DNS propagation race condition
util.go:2193: Generating custom certificate with DNS name api-custom-cert-create-cluster-2zgr2.aks-e2e.hypershift.azure.devcluster.openshift.com
util.go:2198: Creating custom certificate secret
util.go:2214: Updating hosted cluster with KubeAPIDNSName and KAS custom serving cert
util.go:2250: Getting custom kubeconfig client
util.go:247: Successfully waited for KAS custom kubeconfig to be published for HostedCluster e2e-clusters-2ss8j/create-cluster-2zgr2 in 3.05s
util.go:264: Successfully waited for KAS custom kubeconfig secret to have data in 50ms
util.go:2255: waiting for the KubeAPIDNSName to be reconciled
util.go:247: Successfully waited for KAS custom kubeconfig to be published for HostedCluster e2e-clusters-2ss8j/create-cluster-2zgr2 in 50ms
util.go:264: Successfully waited for KAS custom kubeconfig secret to have data in 50ms
util.go:2267: Finding the external name destination for the KAS Service
util.go:2292: service custom DNS name not found, using the control plane endpoint
util.go:2292: service custom DNS name not found, using the control plane endpoint
util.go:2292: service custom DNS name not found, using the control plane endpoint
util.go:2304: Creating a new KAS Service to be used by the external-dns deployment in CI with the custom DNS name api-custom-cert-create-cluster-2zgr2.aks-e2e.hypershift.azure.devcluster.openshift.com
util.go:2326: [2026-01-10T17:15:35Z] Waiting until the URL is resolvable: api-custom-cert-create-cluster-2zgr2.aks-e2e.hypershift.azure.devcluster.openshift.com
util.go:2326: [2026-01-10T17:15:45Z] Waiting until the URL is resolvable: api-custom-cert-create-cluster-2zgr2.aks-e2e.hypershift.azure.devcluster.openshift.com
util.go:2326: [2026-01-10T17:15:55Z] Waiting until the URL is resolvable: api-custom-cert-create-cluster-2zgr2.aks-e2e.hypershift.azure.devcluster.openshift.com
util.go:2326: [2026-01-10T17:16:05Z] Waiting until the URL is resolvable: api-custom-cert-create-cluster-2zgr2.aks-e2e.hypershift.azure.devcluster.openshift.com
util.go:2326: [2026-01-10T17:16:15Z] Waiting until the URL is resolvable: api-custom-cert-create-cluster-2zgr2.aks-e2e.hypershift.azure.devcluster.openshift.com
util.go:2331: resolved the custom DNS name after 40.091079185s
util.go:2336: Waiting until the KAS Deployment is ready
util.go:2431: Successfully waited for the KAS custom kubeconfig secret to be deleted from HC Namespace in 5.05s
util.go:2447: Successfully waited for the KAS custom kubeconfig secret to be deleted from HCP Namespace in 50ms
util.go:2477: Successfully waited for the KAS custom kubeconfig status to be removed in 50ms
util.go:2510: Deleting custom certificate secret
util.go:2359: Checking CustomAdminKubeconfigs are present
util.go:2396: Checking CustomAdminKubeconfig Infrastructure status is updated
util.go:2397: Successfully waited for a successful connection to the custom DNS guest API server in 50ms
util.go:2465: Checking CustomAdminKubeconfigs are removed
util.go:2372: Checking CustomAdminKubeconfig reaches the KAS
util.go:2374: Using extended retry timeout for Azure DNS propagation
util.go:2390: Successfully verified custom kubeconfig can reach KAS
util.go:2351: Checking CustomAdminKubeconfigStatus are present
util.go:2494: Checking CustomAdminKubeconfigStatus are removed
util.go:2097: Waiting for ovnkube-node DaemonSet to be ready with 2 nodes
util.go:2139: DaemonSet ovnkube-node ready: 2/2 pods
util.go:2147: ✓ ovnkube-node DaemonSet is ready
util.go:2097: Waiting for global-pull-secret-syncer DaemonSet to be ready with 2 nodes
util.go:2135: DaemonSet global-pull-secret-syncer not ready: 0/2 pods ready
util.go:2135: DaemonSet global-pull-secret-syncer not ready: 0/2 pods ready
util.go:2135: DaemonSet global-pull-secret-syncer not ready: 0/2 pods ready
util.go:2135: DaemonSet global-pull-secret-syncer not ready: 0/2 pods ready
util.go:2139: DaemonSet global-pull-secret-syncer ready: 2/2 pods
util.go:2147: ✓ global-pull-secret-syncer DaemonSet is ready
util.go:2097: Waiting for konnectivity-agent DaemonSet to be ready with 2 nodes
util.go:2135: DaemonSet konnectivity-agent not ready: 0/2 pods ready
util.go:2135: DaemonSet konnectivity-agent not ready: 0/2 pods ready
util.go:2135: DaemonSet konnectivity-agent not ready: 0/2 pods ready
util.go:2135: DaemonSet konnectivity-agent not ready: 0/2 pods ready
util.go:2135: DaemonSet konnectivity-agent not ready: 0/2 pods ready
util.go:2135: DaemonSet konnectivity-agent not ready: 0/2 pods ready
util.go:2135: DaemonSet konnectivity-agent not ready: 0/2 pods ready
util.go:2135: DaemonSet konnectivity-agent not ready: 0/2 pods ready
util.go:2135: DaemonSet konnectivity-agent not ready: 0/2 pods ready
util.go:2135: DaemonSet konnectivity-agent not ready: 0/2 pods ready
util.go:2135: DaemonSet konnectivity-agent not ready: 0/2 pods ready
util.go:2135: DaemonSet konnectivity-agent not ready: 0/2 pods ready
util.go:2135: DaemonSet konnectivity-agent not ready: 0/2 pods ready
util.go:2135: DaemonSet konnectivity-agent not ready: 0/2 pods ready
util.go:2135: DaemonSet konnectivity-agent not ready: 0/2 pods ready
util.go:2135: DaemonSet konnectivity-agent not ready: 0/2 pods ready
util.go:2135: DaemonSet konnectivity-agent not ready: 1/2 pods ready
util.go:2135: DaemonSet konnectivity-agent not ready: 1/2 pods ready
util.go:2135: DaemonSet konnectivity-agent not ready: 1/2 pods ready
util.go:2135: DaemonSet konnectivity-agent not ready: 1/2 pods ready
util.go:2135: DaemonSet konnectivity-agent not ready: 1/2 pods ready
util.go:2135: DaemonSet konnectivity-agent not ready: 1/2 pods ready
util.go:2135: DaemonSet konnectivity-agent not ready: 1/2 pods ready
util.go:2139: DaemonSet konnectivity-agent ready: 2/2 pods
util.go:2147: ✓ konnectivity-agent DaemonSet is ready
util.go:2097: Waiting for ovnkube-node DaemonSet to be ready with 2 nodes
util.go:2139: DaemonSet ovnkube-node ready: 2/2 pods
util.go:2147: ✓ ovnkube-node DaemonSet is ready
util.go:2097: Waiting for global-pull-secret-syncer DaemonSet to be ready with 2 nodes
util.go:2139: DaemonSet global-pull-secret-syncer ready: 2/2 pods
util.go:2147: ✓ global-pull-secret-syncer DaemonSet is ready
util.go:2097: Waiting for konnectivity-agent DaemonSet to be ready with 2 nodes
util.go:2139: DaemonSet konnectivity-agent ready: 2/2 pods
util.go:2147: ✓ konnectivity-agent DaemonSet is ready
util.go:2041: Waiting for GlobalPullSecretDaemonSet to process the deletion and stabilize all nodes
util.go:2095: Waiting for global-pull-secret-syncer DaemonSet to be ready (using DesiredNumberScheduled)
util.go:2124: DaemonSet global-pull-secret-syncer update in flight: 0/2 pods updated
util.go:2130: DaemonSet global-pull-secret-syncer ready: 2/2 pods ready, rollout complete
util.go:2147: ✓ global-pull-secret-syncer DaemonSet is ready
pki.go:76: loading certificate/key pair from disk for signer customer-break-glass, use $REGENERATE_PKI to generate new ones
control_plane_pki_operator.go:204: creating CSR "1cqyjuzvgfeoqoqnrnvtqjmlnt504j0rx1fh6ixjyrcy" for signer "customer-break-glass", requesting client auth usages
control_plane_pki_operator.go:214: creating CSRA e2e-clusters-2ss8j-create-cluster-2zgr2/1cqyjuzvgfeoqoqnrnvtqjmlnt504j0rx1fh6ixjyrcy to trigger automatic approval of the CSR
control_plane_pki_operator.go:221: Successfully waited for CSR "1cqyjuzvgfeoqoqnrnvtqjmlnt504j0rx1fh6ixjyrcy" to be approved and signed in 3.075s
control_plane_pki_operator.go:133: validating that the client certificate provides the appropriate access
control_plane_pki_operator.go:119: amending the existing kubeconfig to use break-glass client certificate credentials
control_plane_pki_operator.go:136: issuing SSR to identify the subject we are given using the client certificate
control_plane_pki_operator.go:156: ensuring that the SSR identifies the client certificate as having system:masters power and correct username
pki.go:76: loading certificate/key pair from disk for signer customer-break-glass, use $REGENERATE_PKI to generate new ones
control_plane_pki_operator.go:168: creating invalid CSR "1d8362mprhurw8rs0qlz747lxpm9am7yonde9j2a0f1v" for signer "hypershift.openshift.io/e2e-clusters-2ss8j-create-cluster-2zgr2.customer-break-glass", requesting client auth usages
control_plane_pki_operator.go:178: creating CSRA e2e-clusters-2ss8j-create-cluster-2zgr2/1d8362mprhurw8rs0qlz747lxpm9am7yonde9j2a0f1v to trigger automatic approval of the CSR
control_plane_pki_operator.go:184: Successfully waited for CSR "1d8362mprhurw8rs0qlz747lxpm9am7yonde9j2a0f1v" to have invalid CN exposed in status in 50ms
pki.go:76: loading certificate/key pair from disk for signer customer-break-glass, use $REGENERATE_PKI to generate new ones
control_plane_pki_operator.go:256: creating CRR e2e-clusters-2ss8j-create-cluster-2zgr2/2rnkviviloblnc6h6ooxtt53ezi500t5ssob5g8vt8br to trigger signer certificate revocation
control_plane_pki_operator.go:263: Successfully waited for CRR e2e-clusters-2ss8j-create-cluster-2zgr2/2rnkviviloblnc6h6ooxtt53ezi500t5ssob5g8vt8br to complete in 2m21.05s
control_plane_pki_operator.go:276: creating a client using a certificate from the revoked signer
control_plane_pki_operator.go:119: amending the existing kubeconfig to use break-glass client certificate credentials
control_plane_pki_operator.go:279: issuing SSR to confirm that we're not authorized to contact the server
control_plane_pki_operator.go:66: Grabbing customer break-glass credentials from client certificate secret e2e-clusters-2ss8j-create-cluster-2zgr2/customer-system-admin-client-cert-key
control_plane_pki_operator.go:133: validating that the client certificate provides the appropriate access
control_plane_pki_operator.go:119: amending the existing kubeconfig to use break-glass client certificate credentials
control_plane_pki_operator.go:136: issuing SSR to identify the subject we are given using the client certificate
control_plane_pki_operator.go:156: ensuring that the SSR identifies the client certificate as having system:masters power and correct username
control_plane_pki_operator.go:95: generating new break-glass credentials for more than one signer
pki.go:76: loading certificate/key pair from disk for signer customer-break-glass, use $REGENERATE_PKI to generate new ones
control_plane_pki_operator.go:204: creating CSR "1gc5bvxn6b1d79pb8qehemx9k284lzzpz9rg3zao4nau" for signer "customer-break-glass", requesting client auth usages
control_plane_pki_operator.go:214: creating CSRA e2e-clusters-2ss8j-create-cluster-2zgr2/1gc5bvxn6b1d79pb8qehemx9k284lzzpz9rg3zao4nau to trigger automatic approval of the CSR
control_plane_pki_operator.go:221: Successfully waited for CSR "1gc5bvxn6b1d79pb8qehemx9k284lzzpz9rg3zao4nau" to be approved and signed in 50ms
control_plane_pki_operator.go:133: validating that the client certificate provides the appropriate access
control_plane_pki_operator.go:119: amending the existing kubeconfig to use break-glass client certificate credentials
control_plane_pki_operator.go:136: issuing SSR to identify the subject we are given using the client certificate
control_plane_pki_operator.go:156: ensuring that the SSR identifies the client certificate as having system:masters power and correct username
pki.go:76: loading certificate/key pair from disk for signer sre-break-glass, use $REGENERATE_PKI to generate new ones
control_plane_pki_operator.go:204: creating CSR "1th4r9m2uo5xnxg6oeel88gvptazc821adc0y3w9tfln" for signer "sre-break-glass", requesting client auth usages
control_plane_pki_operator.go:214: creating CSRA e2e-clusters-2ss8j-create-cluster-2zgr2/1th4r9m2uo5xnxg6oeel88gvptazc821adc0y3w9tfln to trigger automatic approval of the CSR
control_plane_pki_operator.go:221: Successfully waited for CSR "1th4r9m2uo5xnxg6oeel88gvptazc821adc0y3w9tfln" to be approved and signed in 50ms
control_plane_pki_operator.go:133: validating that the client certificate provides the appropriate access
control_plane_pki_operator.go:119: amending the existing kubeconfig to use break-glass client certificate credentials
control_plane_pki_operator.go:136: issuing SSR to identify the subject we are given using the client certificate
control_plane_pki_operator.go:156: ensuring that the SSR identifies the client certificate as having system:masters power and correct username
control_plane_pki_operator.go:99: revoking the "customer-break-glass" signer
pki.go:76: loading certificate/key pair from disk for signer customer-break-glass, use $REGENERATE_PKI to generate new ones
control_plane_pki_operator.go:256: creating CRR e2e-clusters-2ss8j-create-cluster-2zgr2/1gc5bvxn6b1d79pb8qehemx9k284lzzpz9rg3zao4nau to trigger signer certificate revocation
control_plane_pki_operator.go:263: Successfully waited for CRR e2e-clusters-2ss8j-create-cluster-2zgr2/1gc5bvxn6b1d79pb8qehemx9k284lzzpz9rg3zao4nau to complete in 2m9.05s
control_plane_pki_operator.go:276: creating a client using a certificate from the revoked signer
control_plane_pki_operator.go:119: amending the existing kubeconfig to use break-glass client certificate credentials
control_plane_pki_operator.go:279: issuing SSR to confirm that we're not authorized to contact the server
control_plane_pki_operator.go:102: ensuring the break-glass credentials from "sre-break-glass" signer still work
pki.go:76: loading certificate/key pair from disk for signer sre-break-glass, use $REGENERATE_PKI to generate new ones
control_plane_pki_operator.go:133: validating that the client certificate provides the appropriate access
control_plane_pki_operator.go:119: amending the existing kubeconfig to use break-glass client certificate credentials
control_plane_pki_operator.go:136: issuing SSR to identify the subject we are given using the client certificate
control_plane_pki_operator.go:156: ensuring that the SSR identifies the client certificate as having system:masters power and correct username
pki.go:76: loading certificate/key pair from disk for signer sre-break-glass, use $REGENERATE_PKI to generate new ones
control_plane_pki_operator.go:204: creating CSR "15580vrismq1lp4mnut5cehojudc40sjblistdrhr4js" for signer "sre-break-glass", requesting client auth usages
control_plane_pki_operator.go:214: creating CSRA e2e-clusters-2ss8j-create-cluster-2zgr2/15580vrismq1lp4mnut5cehojudc40sjblistdrhr4js to trigger automatic approval of the CSR
control_plane_pki_operator.go:221: Successfully waited for CSR "15580vrismq1lp4mnut5cehojudc40sjblistdrhr4js" to be approved and signed in 50ms
control_plane_pki_operator.go:133: validating that the client certificate provides the appropriate access
control_plane_pki_operator.go:119: amending the existing kubeconfig to use break-glass client certificate credentials
control_plane_pki_operator.go:136: issuing SSR to identify the subject we are given using the client certificate
control_plane_pki_operator.go:156: ensuring that the SSR identifies the client certificate as having system:masters power and correct username
pki.go:76: loading certificate/key pair from disk for signer sre-break-glass, use $REGENERATE_PKI to generate new ones
control_plane_pki_operator.go:168: creating invalid CSR "14m48ro06ijy60bhg59aqk68p0rxzex9nbv33d371rkj" for signer "hypershift.openshift.io/e2e-clusters-2ss8j-create-cluster-2zgr2.sre-break-glass", requesting client auth usages
control_plane_pki_operator.go:178: creating CSRA e2e-clusters-2ss8j-create-cluster-2zgr2/14m48ro06ijy60bhg59aqk68p0rxzex9nbv33d371rkj to trigger automatic approval of the CSR
control_plane_pki_operator.go:184: Successfully waited for CSR "14m48ro06ijy60bhg59aqk68p0rxzex9nbv33d371rkj" to have invalid CN exposed in status in 50ms
pki.go:76: loading certificate/key pair from disk for signer sre-break-glass, use $REGENERATE_PKI to generate new ones
control_plane_pki_operator.go:256: creating CRR e2e-clusters-2ss8j-create-cluster-2zgr2/2fx92sr3fkjmqr2qzg6c2sv9vpvurt2wlobj9lcco4d6 to trigger signer certificate revocation
control_plane_pki_operator.go:263: Successfully waited for CRR e2e-clusters-2ss8j-create-cluster-2zgr2/2fx92sr3fkjmqr2qzg6c2sv9vpvurt2wlobj9lcco4d6 to complete in 3m9.05s
control_plane_pki_operator.go:276: creating a client using a certificate from the revoked signer
control_plane_pki_operator.go:119: amending the existing kubeconfig to use break-glass client certificate credentials
control_plane_pki_operator.go:279: issuing SSR to confirm that we're not authorized to contact the server
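The CRR objects used above are HyperShift custom resources, not core Kubernetes API types. Purely as a sketch, assuming a CertificateRevocationRequest CRD in the certificates.hypershift.openshift.io group with a spec.signerClass field (the group, version, plural, and field names here are assumptions; check the CRDs installed in the control plane namespace), a revocation could be triggered through the dynamic client:

    package pkidemo

    import (
        "context"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/apis/meta/v1/unstructured"
        "k8s.io/apimachinery/pkg/runtime/schema"
        "k8s.io/client-go/dynamic"
    )

    // createCRR creates a CertificateRevocationRequest-like object via the
    // dynamic client. Group, version, resource, and field names are assumptions;
    // verify them against the CRDs in the hosted control plane namespace.
    func createCRR(ctx context.Context, dyn dynamic.Interface, namespace, name, signerClass string) error {
        gvr := schema.GroupVersionResource{
            Group:    "certificates.hypershift.openshift.io", // assumed group
            Version:  "v1alpha1",                             // assumed version
            Resource: "certificaterevocationrequests",        // assumed plural
        }
        obj := &unstructured.Unstructured{Object: map[string]interface{}{
            "apiVersion": gvr.Group + "/" + gvr.Version,
            "kind":       "CertificateRevocationRequest",
            "metadata":   map[string]interface{}{"name": name, "namespace": namespace},
            "spec":       map[string]interface{}{"signerClass": signerClass}, // e.g. "sre-break-glass"
        }}
        _, err := dyn.Resource(gvr).Namespace(namespace).Create(ctx, obj, metav1.CreateOptions{})
        return err
    }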
control_plane_pki_operator.go:66: Grabbing SRE break-glass credentials from client certificate secret e2e-clusters-2ss8j-create-cluster-2zgr2/sre-system-admin-client-cert-key
control_plane_pki_operator.go:133: validating that the client certificate provides the appropriate access
control_plane_pki_operator.go:119: amending the existing kubeconfig to use break-glass client certificate credentials
control_plane_pki_operator.go:136: issuing SSR to identify the subject we are given using the client certificate
control_plane_pki_operator.go:156: ensuring that the SSR identifies the client certificate as having system:masters power and the correct username
util.go:284: Successfully waited for kubeconfig to be published for HostedCluster e2e-clusters-2ss8j/create-cluster-2zgr2 in 3m21.05s
util.go:301: Successfully waited for kubeconfig secret to have data in 50ms
eventually.go:104: Failed to get *v1.SelfSubjectReview: Post "https://api-create-cluster-2zgr2.aks-e2e.hypershift.azure.devcluster.openshift.com:443/apis/authentication.k8s.io/v1/selfsubjectreviews": EOF
util.go:363: Successfully waited for a successful connection to the guest API server in 18.6s
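The transient EOF and TLS handshake timeout errors while the hosted API server is still coming up are expected; the connection wait simply keeps retrying a SelfSubjectReview until one succeeds or an overall timeout expires. A minimal equivalent of that polling behaviour using apimachinery's wait package (the interval and timeout values are illustrative, not the framework's actual settings):

    package waitdemo

    import (
        "context"
        "time"

        authenticationv1 "k8s.io/api/authentication/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/util/wait"
        "k8s.io/client-go/kubernetes"
    )

    // waitForGuestAPIServer retries a SelfSubjectReview until the guest API
    // server answers, tolerating transient dial errors such as EOF and TLS
    // handshake timeouts along the way.
    func waitForGuestAPIServer(ctx context.Context, client kubernetes.Interface) error {
        return wait.PollUntilContextTimeout(ctx, 5*time.Second, 15*time.Minute, true, func(ctx context.Context) (bool, error) {
            _, err := client.AuthenticationV1().SelfSubjectReviews().Create(ctx, &authenticationv1.SelfSubjectReview{}, metav1.CreateOptions{})
            // Any error is treated as "not ready yet" rather than fatal.
            return err == nil, nil
        })
    }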
util.go:565: Successfully waited for 2 nodes to become ready in 11m9.075s
util.go:598: Successfully waited for HostedCluster e2e-clusters-2ss8j/create-cluster-2zgr2 to rollout in 4m6.075s
util.go:2949: Successfully waited for HostedCluster e2e-clusters-2ss8j/create-cluster-2zgr2 to have valid conditions in 50ms
util.go:284: Successfully waited for kubeconfig to be published for HostedCluster e2e-clusters-2ss8j/create-cluster-2zgr2 in 50ms
util.go:301: Successfully waited for kubeconfig secret to have data in 50ms
util.go:284: Successfully waited for kubeconfig to be published for HostedCluster e2e-clusters-2ss8j/create-cluster-2zgr2 in 50ms
util.go:301: Successfully waited for kubeconfig secret to have data in 50ms
util.go:565: Successfully waited for 2 nodes to become ready for NodePool e2e-clusters-2ss8j/create-cluster-2zgr2 in 175ms
util.go:4095: Successfully validated configuration authentication status consistency across HCP, HC, and guest cluster
hypershift_framework.go:430: Successfully created hostedcluster e2e-clusters-ff2bt/custom-config-tql9c in 2m19s
hypershift_framework.go:491: Destroyed cluster. Namespace: e2e-clusters-ff2bt, name: custom-config-tql9c
hypershift_framework.go:446: archiving /logs/artifacts/TestCreateClusterCustomConfig/hostedcluster-custom-config-tql9c to /logs/artifacts/TestCreateClusterCustomConfig/hostedcluster.tar.gz
util.go:284: Successfully waited for kubeconfig to be published for HostedCluster e2e-clusters-ff2bt/custom-config-tql9c in 3m9.05s
util.go:301: Successfully waited for kubeconfig secret to have data in 50ms
eventually.go:104: Failed to get *v1.SelfSubjectReview: Post "https://api-custom-config-tql9c.aks-e2e.hypershift.azure.devcluster.openshift.com:443/apis/authentication.k8s.io/v1/selfsubjectreviews": EOF
eventually.go:104: Failed to get *v1.SelfSubjectReview: Post "https://api-custom-config-tql9c.aks-e2e.hypershift.azure.devcluster.openshift.com:443/apis/authentication.k8s.io/v1/selfsubjectreviews": net/http: TLS handshake timeout
util.go:363: Successfully waited for a successful connection to the guest API server in 1m13.775s
util.go:565: Successfully waited for 2 nodes to become ready in 10m42.1s
util.go:598: Successfully waited for HostedCluster e2e-clusters-ff2bt/custom-config-tql9c to rollout in 7m33.05s
util.go:2949: Successfully waited for HostedCluster e2e-clusters-ff2bt/custom-config-tql9c to have valid conditions in 50ms
util.go:284: Successfully waited for kubeconfig to be published for HostedCluster e2e-clusters-ff2bt/custom-config-tql9c in 50ms
util.go:301: Successfully waited for kubeconfig secret to have data in 50ms
util.go:363: Successfully waited for a successful connection to the guest API server in 25ms
hypershift_framework.go:430: Successfully created hostedcluster e2e-clusters-h2cwl/node-pool-lqbc7 in 2m16s
nodepool_test.go:150: tests only supported on platform KubeVirt
hypershift_framework.go:430: Successfully created hostedcluster e2e-clusters-zczv6/node-pool-x6jh8 in 2m20s
util.go:284: Successfully waited for kubeconfig to be published for HostedCluster e2e-clusters-h2cwl/node-pool-lqbc7 in 3m27.05s
util.go:301: Successfully waited for kubeconfig secret to have data in 50ms
eventually.go:104: Failed to get *v1.SelfSubjectReview: Post "https://api-node-pool-lqbc7.aks-e2e.hypershift.azure.devcluster.openshift.com:443/apis/authentication.k8s.io/v1/selfsubjectreviews": net/http: TLS handshake timeout
util.go:363: Successfully waited for a successful connection to the guest API server in 17.45s
util.go:565: Successfully waited for 0 nodes to become ready in 75ms
util.go:2949: Successfully waited for HostedCluster e2e-clusters-h2cwl/node-pool-lqbc7 to have valid conditions in 10m6.1s
util.go:284: Successfully waited for kubeconfig to be published for HostedCluster e2e-clusters-zczv6/node-pool-x6jh8 in 3m21.05s
util.go:301: Successfully waited for kubeconfig secret to have data in 50ms
eventually.go:104: Failed to get *v1.SelfSubjectReview: Post "https://api-node-pool-x6jh8.aks-e2e.hypershift.azure.devcluster.openshift.com:443/apis/authentication.k8s.io/v1/selfsubjectreviews": net/http: TLS handshake timeout
util.go:363: Successfully waited for a successful connection to the guest API server in 20.2s
util.go:565: Successfully waited for 0 nodes to become ready in 50ms
util.go:2949: Successfully waited for HostedCluster e2e-clusters-zczv6/node-pool-x6jh8 to have valid conditions in 10m6.05s
util.go:565: Successfully waited for 0 nodes to become ready for NodePool e2e-clusters-h2cwl/node-pool-lqbc7 in 125ms
util.go:565: Successfully waited for 0 nodes to become ready for NodePool e2e-clusters-zczv6/node-pool-x6jh8 in 125ms
util.go:284: Successfully waited for kubeconfig to be published for HostedCluster e2e-clusters-h2cwl/node-pool-lqbc7 in 50ms
util.go:301: Successfully waited for kubeconfig secret to have data in 50ms
util.go:284: Successfully waited for kubeconfig to be published for HostedCluster e2e-clusters-zczv6/node-pool-x6jh8 in 50ms
util.go:301: Successfully waited for kubeconfig secret to have data in 50ms
util.go:4095: Successfully validated configuration authentication status consistency across HCP, HC, and guest cluster
util.go:284: Successfully waited for kubeconfig to be published for HostedCluster e2e-clusters-h2cwl/node-pool-lqbc7 in 50ms
util.go:301: Successfully waited for kubeconfig secret to have data in 50ms
util.go:363: Successfully waited for a successful connection to the guest API server in 50ms
nodepool_kms_root_volume_test.go:39: test only supported on platform AWS
nodepool_autorepair_test.go:42: test only supported on platform AWS
nodepool_machineconfig_test.go:54: Starting test NodePoolMachineconfigRolloutTest
nodepool_nto_machineconfig_test.go:67: Starting test NTOMachineConfigRolloutTest
util.go:565: Successfully waited for 2 nodes to become ready for NodePool e2e-clusters-h2cwl/node-pool-lqbc7-test-ntomachineconfig-replace in 8m15.175s
nodepool_test.go:395: Successfully waited for NodePool e2e-clusters-h2cwl/node-pool-lqbc7-test-ntomachineconfig-replace to have correct status in 50ms
util.go:474: Successfully waited for NodePool e2e-clusters-h2cwl/node-pool-lqbc7-test-ntomachineconfig-replace to start config update in 15.05s
nodepool_nto_machineconfig_test.go:67: Starting test NTOMachineConfigRolloutTest
util.go:565: Successfully waited for 2 nodes to become ready for NodePool e2e-clusters-h2cwl/node-pool-lqbc7-test-ntomachineconfig-inplace in 6m21.075s
nodepool_test.go:395: Successfully waited for NodePool e2e-clusters-h2cwl/node-pool-lqbc7-test-ntomachineconfig-inplace to have correct status in 50ms
util.go:474: Successfully waited for NodePool e2e-clusters-h2cwl/node-pool-lqbc7-test-ntomachineconfig-inplace to start config update in 15.05s
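Both MachineConfig rollout variants above (replace and in-place) drive the update the same way: a MachineConfig manifest is wrapped in a ConfigMap in the HostedCluster namespace, and the NodePool references that ConfigMap by name from its spec.config list, which starts the config update seen in the log. A sketch of the ConfigMap half of that flow (the manifest content is illustrative, and the "config" data key plus the spec.config contract follow the HyperShift NodePool API as generally documented; verify against your version):

    package npconfigdemo

    import (
        "context"
        "strings"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
    )

    // createNodePoolConfigMap wraps an illustrative MachineConfig manifest in a
    // ConfigMap in the HostedCluster namespace; the NodePool then references it
    // by name in spec.config (a list of LocalObjectReference entries) to start
    // a config update rollout.
    func createNodePoolConfigMap(ctx context.Context, client kubernetes.Interface, namespace, name string) error {
        machineConfig := strings.Join([]string{
            "apiVersion: machineconfiguration.openshift.io/v1",
            "kind: MachineConfig",
            "metadata:",
            "  name: 99-example",
            "  labels:",
            "    machineconfiguration.openshift.io/role: worker",
            "spec:",
            "  config:",
            "    ignition:",
            "      version: 3.2.0",
        }, "\n")
        cm := &corev1.ConfigMap{
            ObjectMeta: metav1.ObjectMeta{Name: name, Namespace: namespace},
            Data:       map[string]string{"config": machineConfig},
        }
        _, err := client.CoreV1().ConfigMaps(namespace).Create(ctx, cm, metav1.CreateOptions{})
        return err
    }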
nodepool_upgrade_test.go:99: starting test NodePoolUpgradeTest
util.go:565: Successfully waited for 1 nodes to become ready for NodePool e2e-clusters-h2cwl/node-pool-lqbc7-test-replaceupgrade in 15m9.2s
nodepool_test.go:395: Successfully waited for NodePool e2e-clusters-h2cwl/node-pool-lqbc7-test-replaceupgrade to have correct status in 100ms
nodepool_upgrade_test.go:177: Validating all Nodes have the synced labels and taints
nodepool_upgrade_test.go:180: Successfully waited for NodePool e2e-clusters-h2cwl/node-pool-lqbc7-test-replaceupgrade to have version 4.22.0-0.ci-2026-01-09-005312 in 50ms
nodepool_upgrade_test.go:197: Updating NodePool image. Image: registry.build01.ci.openshift.org/ci-op-3h29njck/release@sha256:d9262e3822e86ae5cd8dda21c94eaf92837e08c6551862951c81d82adab8fd96
nodepool_upgrade_test.go:204: Successfully waited for NodePool e2e-clusters-h2cwl/node-pool-lqbc7-test-replaceupgrade to start the upgrade in 3.05s
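The replace-upgrade above is driven by pointing the NodePool at a new release payload. A sketch of that update using the dynamic client (the hypershift.openshift.io/v1beta1 GVR and the spec.release.image path are stated per the public NodePool API; confirm against the installed CRD, and the helper name is illustrative):

    package npupgradedemo

    import (
        "context"
        "fmt"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/runtime/schema"
        "k8s.io/apimachinery/pkg/types"
        "k8s.io/client-go/dynamic"
    )

    // updateNodePoolImage merge-patches spec.release.image on a NodePool so the
    // nodes are replaced (or updated in place) with the new OCP payload.
    func updateNodePoolImage(ctx context.Context, dyn dynamic.Interface, namespace, name, releaseImage string) error {
        gvr := schema.GroupVersionResource{
            Group:    "hypershift.openshift.io",
            Version:  "v1beta1",
            Resource: "nodepools",
        }
        patch := []byte(fmt.Sprintf(`{"spec":{"release":{"image":%q}}}`, releaseImage))
        _, err := dyn.Resource(gvr).Namespace(namespace).Patch(ctx, name, types.MergePatchType, patch, metav1.PatchOptions{})
        return err
    }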
nodepool_upgrade_test.go:99: starting test NodePoolUpgradeTest
nodepool_kv_cache_image_test.go:42: test only supported on platform KubeVirt
nodepool_day2_tags_test.go:43: test only supported on platform AWS
nodepool_kv_qos_guaranteed_test.go:43: test only supported on platform KubeVirt
nodepool_kv_jsonpatch_test.go:42: test only supported on platform KubeVirt
nodepool_kv_nodeselector_test.go:48: test only supported on platform KubeVirt
nodepool_kv_multinet_test.go:36: test only supported on platform KubeVirt
nodepool_osp_advanced_test.go:53: Starting test OpenStackAdvancedTest
nodepool_osp_advanced_test.go:56: test only supported on platform OpenStack
nodepool_nto_performanceprofile_test.go:59: Starting test NTOPerformanceProfileTest
nodepool_prev_release_test.go:33: Starting NodePoolPrevReleaseCreateTest.
util.go:565: Successfully waited for 1 nodes to become ready for NodePool e2e-clusters-h2cwl/node-pool-lqbc7-f7gjz in 11m24.075s
nodepool_test.go:395: Successfully waited for NodePool e2e-clusters-h2cwl/node-pool-lqbc7-f7gjz to have correct status in 50ms
nodepool_prev_release_test.go:57: NodePoolPrevReleaseCreateTest tests the creation of a NodePool with a previous OCP release.
nodepool_prev_release_test.go:59: Validating all Nodes have the synced labels and taints
nodepool_test.go:395: Successfully waited for NodePool e2e-clusters-h2cwl/node-pool-lqbc7-f7gjz to have correct status in 50ms
nodepool_prev_release_test.go:33: Starting NodePoolPrevReleaseCreateTest.
nodepool_prev_release_test.go:33: Starting NodePoolPrevReleaseCreateTest.
util.go:565: Successfully waited for 1 nodes to become ready for NodePool e2e-clusters-h2cwl/node-pool-lqbc7-fr9d7 in 17m48.05s
nodepool_test.go:395: Successfully waited for NodePool e2e-clusters-h2cwl/node-pool-lqbc7-fr9d7 to have correct status in 50ms
nodepool_prev_release_test.go:57: NodePoolPrevReleaseCreateTest tests the creation of a NodePool with a previous OCP release.
nodepool_prev_release_test.go:59: Validating all Nodes have the synced labels and taints
nodepool_test.go:395: Successfully waited for NodePool e2e-clusters-h2cwl/node-pool-lqbc7-fr9d7 to have correct status in 50ms
nodepool_prev_release_test.go:33: Starting NodePoolPrevReleaseCreateTest.
nodepool_test.go:348: NodePool version is outside supported skew, validating condition only (skipping node readiness check)
nodepool_test.go:395: Successfully waited for NodePool e2e-clusters-h2cwl/node-pool-lqbc7-nv82v to have correct status in 6.05s
nodepool_mirrorconfigs_test.go:60: Starting test MirrorConfigsTest
nodepool_imagetype_test.go:45: test is only supported for AWS platform
util.go:4095: Successfully validated configuration authentication status consistency across HCP, HC, and guest cluster
util.go:284: Successfully waited for kubeconfig to be published for HostedCluster e2e-clusters-zczv6/node-pool-x6jh8 in 100ms
util.go:301: Successfully waited for kubeconfig secret to have data in 50ms
util.go:363: Successfully waited for a successful connection to the guest API server in 50ms
nodepool_additionalTrustBundlePropagation_test.go:38: Starting AdditionalTrustBundlePropagationTest.
util.go:565: Successfully waited for 1 nodes to become ready for NodePool e2e-clusters-zczv6/node-pool-x6jh8-test-additional-trust-bundle-propagation in 6m0.05s
nodepool_test.go:395: Successfully waited for NodePool e2e-clusters-zczv6/node-pool-x6jh8-test-additional-trust-bundle-propagation to have correct status in 50ms
nodepool_additionalTrustBundlePropagation_test.go:72: Updating hosted cluster with additional trust bundle. Bundle: additional-trust-bundle
nodepool_additionalTrustBundlePropagation_test.go:80: Successfully waited for NodePool e2e-clusters-zczv6/node-pool-x6jh8-test-additional-trust-bundle-propagation to begin updating in 10.05s
nodepool_additionalTrustBundlePropagation_test.go:94: Successfully waited for NodePool e2e-clusters-zczv6/node-pool-x6jh8-test-additional-trust-bundle-propagation to stop updating in 10m30.05s
nodepool_additionalTrustBundlePropagation_test.go:112: Updating hosted cluster by removing additional trust bundle.
nodepool_additionalTrustBundlePropagation_test.go:126: Successfully waited for control plane operator deployment to be updated in 75ms
nodepool_additionalTrustBundlePropagation_test.go:147: Successfully waited for NodePool e2e-clusters-zczv6/node-pool-x6jh8-test-additional-trust-bundle-propagation to begin updating in 10.05s
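The trust-bundle propagation test above toggles a single field on the HostedCluster: spec.additionalTrustBundle, a reference to a ConfigMap carrying the extra CA bundle. A sketch of setting it with the dynamic client (the field path follows the published HostedCluster API; the helper name is illustrative, so verify against the installed CRD):

    package trustbundledemo

    import (
        "context"
        "fmt"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/runtime/schema"
        "k8s.io/apimachinery/pkg/types"
        "k8s.io/client-go/dynamic"
    )

    var hostedClusterGVR = schema.GroupVersionResource{
        Group:    "hypershift.openshift.io",
        Version:  "v1beta1",
        Resource: "hostedclusters",
    }

    // setAdditionalTrustBundle points spec.additionalTrustBundle at a ConfigMap
    // in the HostedCluster's namespace that holds the additional CA bundle.
    func setAdditionalTrustBundle(ctx context.Context, dyn dynamic.Interface, namespace, name, configMapName string) error {
        patch := []byte(fmt.Sprintf(`{"spec":{"additionalTrustBundle":{"name":%q}}}`, configMapName))
        _, err := dyn.Resource(hostedClusterGVR).Namespace(namespace).Patch(ctx, name, types.MergePatchType, patch, metav1.PatchOptions{})
        return err
    }

Clearing the reference again (for example by merge-patching the field to null) is what triggers the second rollout logged above.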
control_plane_upgrade_test.go:25: Starting control plane upgrade test. FromImage: registry.build01.ci.openshift.org/ci-op-3h29njck/release@sha256:51100f0e7a6c69f210772cfeb63281be86f29af18a520a7b139846380ff5a4aa, toImage: registry.build01.ci.openshift.org/ci-op-3h29njck/release@sha256:d9262e3822e86ae5cd8dda21c94eaf92837e08c6551862951c81d82adab8fd96
hypershift_framework.go:430: Successfully created hostedcluster e2e-clusters-h4mcv/control-plane-upgrade-4gmcn in 2m28s
util.go:284: Successfully waited for kubeconfig to be published for HostedCluster e2e-clusters-h4mcv/control-plane-upgrade-4gmcn in 4m9.05s
util.go:301: Successfully waited for kubeconfig secret to have data in 50ms
eventually.go:104: Failed to get *v1.SelfSubjectReview: Post "https://api-control-plane-upgrade-4gmcn.aks-e2e.hypershift.azure.devcluster.openshift.com:443/apis/authentication.k8s.io/v1/selfsubjectreviews": EOF
eventually.go:104: Failed to get *v1.SelfSubjectReview: Post "https://api-control-plane-upgrade-4gmcn.aks-e2e.hypershift.azure.devcluster.openshift.com:443/apis/authentication.k8s.io/v1/selfsubjectreviews": net/http: TLS handshake timeout
eventually.go:104: Failed to get *v1.SelfSubjectReview: Post "https://api-control-plane-upgrade-4gmcn.aks-e2e.hypershift.azure.devcluster.openshift.com:443/apis/authentication.k8s.io/v1/selfsubjectreviews": EOF
util.go:363: Successfully waited for a successful connection to the guest API server in 37.9s
util.go:565: Successfully waited for 2 nodes to become ready in 8m36.225s
util.go:598: Successfully waited for HostedCluster e2e-clusters-h4mcv/control-plane-upgrade-4gmcn to rollout in 7m39.075s
util.go:2949: Successfully waited for HostedCluster e2e-clusters-h4mcv/control-plane-upgrade-4gmcn to have valid conditions in 50ms
util.go:565: Successfully waited for 2 nodes to become ready for NodePool e2e-clusters-h4mcv/control-plane-upgrade-4gmcn in 225ms
util.go:284: Successfully waited for kubeconfig to be published for HostedCluster e2e-clusters-h4mcv/control-plane-upgrade-4gmcn in 50ms
util.go:301: Successfully waited for kubeconfig secret to have data in 50ms
util.go:284: Successfully waited for kubeconfig to be published for HostedCluster e2e-clusters-h4mcv/control-plane-upgrade-4gmcn in 75ms
util.go:301: Successfully waited for kubeconfig secret to have data in 50ms
util.go:4095: Successfully validated configuration authentication status consistency across HCP, HC, and guest cluster
util.go:284: Successfully waited for kubeconfig to be published for HostedCluster e2e-clusters-h4mcv/control-plane-upgrade-4gmcn in 50ms
util.go:301: Successfully waited for kubeconfig secret to have data in 50ms
util.go:363: Successfully waited for a successful connection to the guest API server in 50ms
control_plane_upgrade_test.go:52: Updating cluster image. Image: registry.build01.ci.openshift.org/ci-op-3h29njck/release@sha256:d9262e3822e86ae5cd8dda21c94eaf92837e08c6551862951c81d82adab8fd96
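The control plane upgrade itself is the analogous one-field change on the HostedCluster: spec.release.image is moved to the target payload, and the hosted control plane rolls out to it while NodePools are upgraded separately. A sketch under the same assumptions as the NodePool patch earlier (GVR and field path per the hypershift.openshift.io/v1beta1 API; confirm against the installed CRD):

    package cpupgradedemo

    import (
        "context"
        "fmt"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/runtime/schema"
        "k8s.io/apimachinery/pkg/types"
        "k8s.io/client-go/dynamic"
    )

    // upgradeControlPlane merge-patches spec.release.image on the HostedCluster,
    // which drives the hosted control plane to the new release payload.
    func upgradeControlPlane(ctx context.Context, dyn dynamic.Interface, namespace, name, toImage string) error {
        gvr := schema.GroupVersionResource{Group: "hypershift.openshift.io", Version: "v1beta1", Resource: "hostedclusters"}
        patch := []byte(fmt.Sprintf(`{"spec":{"release":{"image":%q}}}`, toImage))
        _, err := dyn.Resource(gvr).Namespace(namespace).Patch(ctx, name, types.MergePatchType, patch, metav1.PatchOptions{})
        return err
    }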