Failed Tests
hypershift_framework.go:430: Successfully created hostedcluster e2e-clusters-v8zlz/autoscaling-nprz2 in 2m36s
util.go:284: Successfully waited for kubeconfig to be published for HostedCluster e2e-clusters-v8zlz/autoscaling-nprz2 in 3m3.075s
util.go:301: Successfully waited for kubeconfig secret to have data in 25ms
eventually.go:104: Failed to get *v1.SelfSubjectReview: Post "https://api-autoscaling-nprz2.aks-e2e.hypershift.azure.devcluster.openshift.com:443/apis/authentication.k8s.io/v1/selfsubjectreviews": EOF
eventually.go:104: Failed to get *v1.SelfSubjectReview: Post "https://api-autoscaling-nprz2.aks-e2e.hypershift.azure.devcluster.openshift.com:443/apis/authentication.k8s.io/v1/selfsubjectreviews": net/http: TLS handshake timeout
eventually.go:104: Failed to get *v1.SelfSubjectReview: Post "https://api-autoscaling-nprz2.aks-e2e.hypershift.azure.devcluster.openshift.com:443/apis/authentication.k8s.io/v1/selfsubjectreviews": EOF
eventually.go:104: Failed to get *v1.SelfSubjectReview: Post "https://api-autoscaling-nprz2.aks-e2e.hypershift.azure.devcluster.openshift.com:443/apis/authentication.k8s.io/v1/selfsubjectreviews": net/http: TLS handshake timeout
util.go:363: Successfully waited for a successful connection to the guest API server in 34.525s
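The EOF and TLS-handshake-timeout retries above come from probing the guest API server until it answers. A minimal sketch of that probe, assuming a kubeconfig path and retry interval that are not taken from the test: post an empty SelfSubjectReview and keep retrying until the kube-apiserver responds.

```go
package main

import (
	"context"
	"fmt"
	"time"

	authenticationv1 "k8s.io/api/authentication/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/tmp/guest.kubeconfig") // assumed path
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Minute)
	defer cancel()
	for {
		// An empty SelfSubjectReview is enough; the server fills in status.userInfo.
		ssr, err := client.AuthenticationV1().SelfSubjectReviews().Create(ctx, &authenticationv1.SelfSubjectReview{}, metav1.CreateOptions{})
		if err == nil {
			fmt.Printf("connected as %q\n", ssr.Status.UserInfo.Username)
			return
		}
		// While the KAS is still coming up this fails with EOF or a TLS handshake timeout, as logged above.
		fmt.Printf("still waiting for the guest API server: %v\n", err)
		time.Sleep(10 * time.Second)
	}
}
```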
util.go:565: Successfully waited for 1 nodes to become ready in 12m48.025s
util.go:598: Successfully waited for HostedCluster e2e-clusters-v8zlz/autoscaling-nprz2 to rollout in 7m18.05s
util.go:2949: Successfully waited for HostedCluster e2e-clusters-v8zlz/autoscaling-nprz2 to have valid conditions in 50ms
util.go:565: Successfully waited for 1 nodes to become ready for NodePool e2e-clusters-v8zlz/autoscaling-nprz2 in 175ms
util.go:284: Successfully waited for kubeconfig to be published for HostedCluster e2e-clusters-v8zlz/autoscaling-nprz2 in 50ms
util.go:301: Successfully waited for kubeconfig secret to have data in 25ms
util.go:284: Successfully waited for kubeconfig to be published for HostedCluster e2e-clusters-v8zlz/autoscaling-nprz2 in 75ms
util.go:301: Successfully waited for kubeconfig secret to have data in 25ms
util.go:4095: Successfully validated configuration authentication status consistency across HCP, HC, and guest cluster
util.go:284: Successfully waited for kubeconfig to be published for HostedCluster e2e-clusters-v8zlz/autoscaling-nprz2 in 50ms
util.go:301: Successfully waited for kubeconfig secret to have data in 25ms
util.go:363: Successfully waited for a successful connection to the guest API server in 25ms
util.go:565: Successfully waited for 1 nodes to become ready in 75ms
autoscaling_test.go:118: Enabled autoscaling. Namespace: e2e-clusters-v8zlz, name: autoscaling-nprz2, min: 1, max: 3
autoscaling_test.go:137: Created workload. Node: autoscaling-nprz2-xzmqh-z8pr9, memcapacity: 15214976Ki
util.go:565: Successfully waited for 3 nodes to become ready in 5m42.075s
autoscaling_test.go:157: Deleted workload
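The scale-up above (min 1, max 3, then 3 ready nodes once a workload lands) is driven by a workload too large to fit on the single existing node. A hedged sketch of that kind of workload, where the namespace, image, replica count, and 4Gi memory request are illustrative assumptions rather than the test's actual values:

```go
package main

import (
	"context"

	appsv1 "k8s.io/api/apps/v1"
	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func int32Ptr(i int32) *int32 { return &i }

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/tmp/guest.kubeconfig") // assumed path
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	labels := map[string]string{"app": "autoscaling-workload"}
	d := &appsv1.Deployment{
		ObjectMeta: metav1.ObjectMeta{Name: "autoscaling-workload", Namespace: "default"},
		Spec: appsv1.DeploymentSpec{
			// Enough replicas that their combined memory requests exceed one node's capacity,
			// forcing the cluster autoscaler to grow the NodePool toward its max.
			Replicas: int32Ptr(6),
			Selector: &metav1.LabelSelector{MatchLabels: labels},
			Template: corev1.PodTemplateSpec{
				ObjectMeta: metav1.ObjectMeta{Labels: labels},
				Spec: corev1.PodSpec{
					Containers: []corev1.Container{{
						Name:    "sleep",
						Image:   "registry.access.redhat.com/ubi9/ubi-minimal", // illustrative image
						Command: []string{"sleep", "infinity"},
						Resources: corev1.ResourceRequirements{
							Requests: corev1.ResourceList{
								corev1.ResourceMemory: resource.MustParse("4Gi"), // illustrative request
							},
						},
					}},
				},
			},
		},
	}
	if _, err := client.AppsV1().Deployments(d.Namespace).Create(context.Background(), d, metav1.CreateOptions{}); err != nil {
		panic(err)
	}
}
```

Deleting the workload afterwards (as logged above) lets the autoscaler scale the NodePool back down toward its minimum.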
hypershift_framework.go:430: Successfully created hostedcluster e2e-clusters-lrpxg/azure-scheduler-rmdms in 2m24s
util.go:284: Successfully waited for kubeconfig to be published for HostedCluster e2e-clusters-lrpxg/azure-scheduler-rmdms in 3m0.05s
util.go:301: Successfully waited for kubeconfig secret to have data in 25ms
eventually.go:104: Failed to get *v1.SelfSubjectReview: Post "https://api-azure-scheduler-rmdms.aks-e2e.hypershift.azure.devcluster.openshift.com:443/apis/authentication.k8s.io/v1/selfsubjectreviews": net/http: TLS handshake timeout
eventually.go:104: Failed to get *v1.SelfSubjectReview: Post "https://api-azure-scheduler-rmdms.aks-e2e.hypershift.azure.devcluster.openshift.com:443/apis/authentication.k8s.io/v1/selfsubjectreviews": EOF
util.go:363: Successfully waited for a successful connection to the guest API server in 19.3s
util.go:565: Successfully waited for 2 nodes to become ready in 11m48.075s
util.go:598: Successfully waited for HostedCluster e2e-clusters-lrpxg/azure-scheduler-rmdms to rollout in 3m48.075s
util.go:2949: Successfully waited for HostedCluster e2e-clusters-lrpxg/azure-scheduler-rmdms to have valid conditions in 50ms
util.go:565: Successfully waited for 2 nodes to become ready for NodePool e2e-clusters-lrpxg/azure-scheduler-rmdms in 175ms
util.go:284: Successfully waited for kubeconfig to be published for HostedCluster e2e-clusters-lrpxg/azure-scheduler-rmdms in 50ms
util.go:301: Successfully waited for kubeconfig secret to have data in 25ms
util.go:284: Successfully waited for kubeconfig to be published for HostedCluster e2e-clusters-lrpxg/azure-scheduler-rmdms in 50ms
util.go:301: Successfully waited for kubeconfig secret to have data in 25ms
util.go:4095: Successfully validated configuration authentication status consistency across HCP, HC, and guest cluster
util.go:284: Successfully waited for kubeconfig to be published for HostedCluster e2e-clusters-lrpxg/azure-scheduler-rmdms in 25ms
util.go:301: Successfully waited for kubeconfig secret to have data in 25ms
util.go:363: Successfully waited for a successful connection to the guest API server in 250ms
util.go:565: Successfully waited for 2 nodes to become ready in 75ms
azure_scheduler_test.go:110: Updated clusterSizingConfig.
azure_scheduler_test.go:157: Successfully waited for HostedCluster size label and annotations updated in 50ms
azure_scheduler_test.go:149: Scaled Nodepool. Namespace: e2e-clusters-lrpxg, name: azure-scheduler-rmdms, replicas: 0xc003d11eb0
util.go:565: Successfully waited for 3 nodes to become ready in 5m30.1s
azure_scheduler_test.go:157: Successfully waited for HostedCluster size label and annotations updated in 50ms
azure_scheduler_test.go:181: Successfully waited for control-plane-operator pod is running with expected resource request in 25ms
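A hedged sketch of the check at azure_scheduler_test.go:181: read the control-plane-operator Deployment in the hosted control plane namespace (taken from the log) and print its container resource requests, which should match the new size class after the NodePool scale-up. The management kubeconfig path is an assumption.

```go
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/tmp/mgmt.kubeconfig") // assumed path
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	// Hosted control plane namespace as it appears later in this log.
	ns := "e2e-clusters-lrpxg-azure-scheduler-rmdms"
	dep, err := client.AppsV1().Deployments(ns).Get(context.Background(), "control-plane-operator", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	for _, c := range dep.Spec.Template.Spec.Containers {
		fmt.Printf("%s requests: cpu=%s memory=%s\n", c.Name,
			c.Resources.Requests.Cpu().String(), c.Resources.Requests.Memory().String())
	}
}
```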
util.go:284: Successfully waited for kubeconfig to be published for HostedCluster e2e-clusters-lrpxg/azure-scheduler-rmdms in 25ms
util.go:301: Successfully waited for kubeconfig secret to have data in 25ms
util.go:363: Successfully waited for a successful connection to the guest API server in 25ms
util.go:3224: Successfully waited for HostedCluster e2e-clusters-lrpxg/azure-scheduler-rmdms to have valid Status.Payload in 75ms
util.go:1156: test only supported on AWS platform, saw Azure
util.go:2527: Checking that all ValidatingAdmissionPolicies are present
util.go:2553: Checking Denied KAS Requests for ValidatingAdmissionPolicies
util.go:2569: Checking ClusterOperator status modifications are allowed
util.go:3917: All 46 pods in namespace e2e-clusters-lrpxg-azure-scheduler-rmdms have the expected RunAsUser UID 1005
hypershift_framework.go:430: Successfully created hostedcluster e2e-clusters-9hj9b/create-cluster-rc9fk in 2m18s
util.go:284: Successfully waited for kubeconfig to be published for HostedCluster e2e-clusters-9hj9b/create-cluster-rc9fk in 2m57.05s
util.go:301: Successfully waited for kubeconfig secret to have data in 25ms
eventually.go:104: Failed to get *v1.SelfSubjectReview: Post "https://api-create-cluster-rc9fk.aks-e2e.hypershift.azure.devcluster.openshift.com:443/apis/authentication.k8s.io/v1/selfsubjectreviews": net/http: TLS handshake timeout
util.go:363: Successfully waited for a successful connection to the guest API server in 11.15s
util.go:565: Successfully waited for 2 nodes to become ready in 9m54.075s
util.go:598: Successfully waited for HostedCluster e2e-clusters-9hj9b/create-cluster-rc9fk to rollout in 3m42.05s
util.go:2949: Successfully waited for HostedCluster e2e-clusters-9hj9b/create-cluster-rc9fk to have valid conditions in 50ms
util.go:565: Successfully waited for 2 nodes to become ready for NodePool e2e-clusters-9hj9b/create-cluster-rc9fk in 150ms
util.go:284: Successfully waited for kubeconfig to be published for HostedCluster e2e-clusters-9hj9b/create-cluster-rc9fk in 50ms
util.go:301: Successfully waited for kubeconfig secret to have data in 25ms
util.go:284: Successfully waited for kubeconfig to be published for HostedCluster e2e-clusters-9hj9b/create-cluster-rc9fk in 50ms
util.go:301: Successfully waited for kubeconfig secret to have data in 50ms
util.go:4095: Successfully validated configuration authentication status consistency across HCP, HC, and guest cluster
util.go:284: Successfully waited for kubeconfig to be published for HostedCluster e2e-clusters-9hj9b/create-cluster-rc9fk in 50ms
util.go:301: Successfully waited for kubeconfig secret to have data in 50ms
util.go:363: Successfully waited for a successful connection to the guest API server in 25ms
create_cluster_test.go:2532: fetching mgmt kubeconfig
util.go:284: Successfully waited for kubeconfig to be published for HostedCluster e2e-clusters-9hj9b/create-cluster-rc9fk in 50ms
util.go:301: Successfully waited for kubeconfig secret to have data in 25ms
util.go:284: Successfully waited for kubeconfig to be published for HostedCluster e2e-clusters-9hj9b/create-cluster-rc9fk in 50ms
util.go:301: Successfully waited for kubeconfig secret to have data in 25ms
util.go:363: Successfully waited for a successful connection to the guest API server in 25ms
util.go:1928: NodePool replicas: 2, Available nodes: 2
util.go:2021: Deleting the additional-pull-secret secret in the DataPlane
control_plane_pki_operator.go:95: generating new break-glass credentials for more than one signer
pki.go:76: loading certificate/key pair from disk for signer customer-break-glass, use $REGENERATE_PKI to generate new ones
control_plane_pki_operator.go:204: creating CSR "100iq3e5i5nxpt25dgemjumeqsghyaxrro4rmjnef8t8" for signer "customer-break-glass", requesting client auth usages
control_plane_pki_operator.go:214: creating CSRA e2e-clusters-9hj9b-create-cluster-rc9fk/100iq3e5i5nxpt25dgemjumeqsghyaxrro4rmjnef8t8 to trigger automatic approval of the CSR
control_plane_pki_operator.go:221: Successfully waited for CSR "100iq3e5i5nxpt25dgemjumeqsghyaxrro4rmjnef8t8" to be approved and signed in 25ms
control_plane_pki_operator.go:133: validating that the client certificate provides the appropriate access
control_plane_pki_operator.go:119: amending the existing kubeconfig to use break-glass client certificate credentials
control_plane_pki_operator.go:136: issuing SSR to identify the subject we are given using the client certificate
control_plane_pki_operator.go:156: ensuring that the SSR identifies the client certificate as having system:masters power and correct username
pki.go:76: loading certificate/key pair from disk for signer sre-break-glass, use $REGENERATE_PKI to generate new ones
control_plane_pki_operator.go:204: creating CSR "2wzwcmt69niwu2ug1i5hj3c2ohemy6bde8jpwai7uo54" for signer "sre-break-glass", requesting client auth usages
control_plane_pki_operator.go:214: creating CSRA e2e-clusters-9hj9b-create-cluster-rc9fk/2wzwcmt69niwu2ug1i5hj3c2ohemy6bde8jpwai7uo54 to trigger automatic approval of the CSR
control_plane_pki_operator.go:221: Successfully waited for CSR "2wzwcmt69niwu2ug1i5hj3c2ohemy6bde8jpwai7uo54" to be approved and signed in 25ms
control_plane_pki_operator.go:133: validating that the client certificate provides the appropriate access
control_plane_pki_operator.go:119: amending the existing kubeconfig to use break-glass client certificate credentials
control_plane_pki_operator.go:136: issuing SSR to identify the subject we are given using the client certificate
control_plane_pki_operator.go:156: ensuring that the SSR identifies the client certificate as having system:masters power and correct username
control_plane_pki_operator.go:99: revoking the "customer-break-glass" signer
pki.go:76: loading certificate/key pair from disk for signer customer-break-glass, use $REGENERATE_PKI to generate new ones
control_plane_pki_operator.go:256: creating CRR e2e-clusters-9hj9b-create-cluster-rc9fk/100iq3e5i5nxpt25dgemjumeqsghyaxrro4rmjnef8t8 to trigger signer certificate revocation
control_plane_pki_operator.go:263: Successfully waited for CRR e2e-clusters-9hj9b-create-cluster-rc9fk/100iq3e5i5nxpt25dgemjumeqsghyaxrro4rmjnef8t8 to complete in 3m45.025s
control_plane_pki_operator.go:276: creating a client using a certificate from the revoked signer
control_plane_pki_operator.go:119: amending the existing kubeconfig to use break-glass client certificate credentials
control_plane_pki_operator.go:279: issuing SSR to confirm that we're not authorized to contact the server
control_plane_pki_operator.go:102: ensuring the break-glass credentials from "sre-break-glass" signer still work
pki.go:76: loading certificate/key pair from disk for signer sre-break-glass, use $REGENERATE_PKI to generate new ones
control_plane_pki_operator.go:133: validating that the client certificate provides the appropriate access
control_plane_pki_operator.go:119: amending the existing kubeconfig to use break-glass client certificate credentials
control_plane_pki_operator.go:136: issuing SSR to identify the subject we are given using the client certificate
control_plane_pki_operator.go:156: ensuring that the SSR identifies the client certificate as having system:masters power and correct username
control_plane_pki_operator.go:66: Grabbing customer break-glass credentials from client certificate secret e2e-clusters-9hj9b-create-cluster-rc9fk/customer-system-admin-client-cert-key
control_plane_pki_operator.go:133: validating that the client certificate provides the appropriate access
control_plane_pki_operator.go:119: amending the existing kubeconfig to use break-glass client certificate credentials
control_plane_pki_operator.go:136: issuing SSR to identify the subject we are given using the client certificate
control_plane_pki_operator.go:156: ensuring that the SSR identifies the client certificate as having system:masters power and correct username
control_plane_pki_operator.go:66: Grabbing customer break-glass credentials from client certificate secret e2e-clusters-9hj9b-create-cluster-rc9fk/sre-system-admin-client-cert-key
control_plane_pki_operator.go:133: validating that the client certificate provides the appropriate access
control_plane_pki_operator.go:119: amending the existing kubeconfig to use break-glass client certificate credentials
control_plane_pki_operator.go:136: issuing SSR to identify the subject we are given using the client certificate
control_plane_pki_operator.go:156: ensuring that the SSR identifies the client certificate as having system:masters power and correct username
pki.go:76: loading certificate/key pair from disk for signer customer-break-glass, use $REGENERATE_PKI to generate new ones
control_plane_pki_operator.go:204: creating CSR "2tfpmdk40r8dp2qan8do9aqkxz71crerqqjmkmqu0rai" for signer "customer-break-glass", requesting client auth usages
control_plane_pki_operator.go:214: creating CSRA e2e-clusters-9hj9b-create-cluster-rc9fk/2tfpmdk40r8dp2qan8do9aqkxz71crerqqjmkmqu0rai to trigger automatic approval of the CSR
control_plane_pki_operator.go:221: Successfully waited for CSR "2tfpmdk40r8dp2qan8do9aqkxz71crerqqjmkmqu0rai" to be approved and signed in 25ms
control_plane_pki_operator.go:133: validating that the client certificate provides the appropriate access
control_plane_pki_operator.go:119: amending the existing kubeconfig to use break-glass client certificate credentials
control_plane_pki_operator.go:136: issuing SSR to identify the subject we are given using the client certificate
control_plane_pki_operator.go:156: ensuring that the SSR identifies the client certificate as having system:masters power and correct username
pki.go:76: loading certificate/key pair from disk for signer customer-break-glass, use $REGENERATE_PKI to generate new ones
control_plane_pki_operator.go:168: creating invalid CSR "2gxe1ctdvxj4nxuvyp2rh7ud69tjyd44e34xekqymwgs" for signer "hypershift.openshift.io/e2e-clusters-9hj9b-create-cluster-rc9fk.customer-break-glass", requesting client auth usages
control_plane_pki_operator.go:178: creating CSRA e2e-clusters-9hj9b-create-cluster-rc9fk/2gxe1ctdvxj4nxuvyp2rh7ud69tjyd44e34xekqymwgs to trigger automatic approval of the CSR
control_plane_pki_operator.go:184: Successfully waited for CSR "2gxe1ctdvxj4nxuvyp2rh7ud69tjyd44e34xekqymwgs" to have invalid CN exposed in status in 25ms
pki.go:76: loading certificate/key pair from disk for signer sre-break-glass, use $REGENERATE_PKI to generate new ones
control_plane_pki_operator.go:204: creating CSR "d0h2xzadltqfkrhnd1g3krgb21f8l0ranklykbjbyq9" for signer "sre-break-glass", requesting client auth usages
control_plane_pki_operator.go:214: creating CSRA e2e-clusters-9hj9b-create-cluster-rc9fk/d0h2xzadltqfkrhnd1g3krgb21f8l0ranklykbjbyq9 to trigger automatic approval of the CSR
control_plane_pki_operator.go:221: Successfully waited for CSR "d0h2xzadltqfkrhnd1g3krgb21f8l0ranklykbjbyq9" to be approved and signed in 25ms
control_plane_pki_operator.go:133: validating that the client certificate provides the appropriate access
control_plane_pki_operator.go:119: amending the existing kubeconfig to use break-glass client certificate credentials
control_plane_pki_operator.go:136: issuing SSR to identify the subject we are given using the client certificate
control_plane_pki_operator.go:156: ensuring that the SSR identifies the client certificate as having system:masters power and correct username
pki.go:76: loading certificate/key pair from disk for signer sre-break-glass, use $REGENERATE_PKI to generate new ones
control_plane_pki_operator.go:168: creating invalid CSR "3xrjejwj5ypcticcrt1wm15dmpy0pakzfxghhtnpkdm" for signer "hypershift.openshift.io/e2e-clusters-9hj9b-create-cluster-rc9fk.sre-break-glass", requesting client auth usages
control_plane_pki_operator.go:178: creating CSRA e2e-clusters-9hj9b-create-cluster-rc9fk/3xrjejwj5ypcticcrt1wm15dmpy0pakzfxghhtnpkdm to trigger automatic approval of the CSR
control_plane_pki_operator.go:184: Successfully waited for CSR "3xrjejwj5ypcticcrt1wm15dmpy0pakzfxghhtnpkdm" to have invalid CN exposed in status in 25ms
pki.go:76: loading certificate/key pair from disk for signer customer-break-glass, use $REGENERATE_PKI to generate new ones
control_plane_pki_operator.go:256: creating CRR e2e-clusters-9hj9b-create-cluster-rc9fk/1re7hc207plyye2brvt3rad5cwlqrpkg8wk0bhhlyprf to trigger signer certificate revocation
control_plane_pki_operator.go:263: Successfully waited for CRR e2e-clusters-9hj9b-create-cluster-rc9fk/1re7hc207plyye2brvt3rad5cwlqrpkg8wk0bhhlyprf to complete in 2m45.025s
control_plane_pki_operator.go:276: creating a client using a certificate from the revoked signer
control_plane_pki_operator.go:119: amending the existing kubeconfig to use break-glass client certificate credentials
control_plane_pki_operator.go:279: issuing SSR to confirm that we're not authorized to contact the server
pki.go:76: loading certificate/key pair from disk for signer sre-break-glass, use $REGENERATE_PKI to generate new ones
control_plane_pki_operator.go:256: creating CRR e2e-clusters-9hj9b-create-cluster-rc9fk/5j2z3zo0x7wl8gzi4bobijpkgv7c1h1odz7ycm93zqz to trigger signer certificate revocation
control_plane_pki_operator.go:263: Successfully waited for CRR e2e-clusters-9hj9b-create-cluster-rc9fk/5j2z3zo0x7wl8gzi4bobijpkgv7c1h1odz7ycm93zqz to complete in 2m36.025s
control_plane_pki_operator.go:276: creating a client using a certificate from the revoked signer
control_plane_pki_operator.go:119: amending the existing kubeconfig to use break-glass client certificate credentials
control_plane_pki_operator.go:279: issuing SSR to confirm that we're not authorized to contact the server
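The break-glass flow above hinges on CertificateSigningRequests addressed to a per-cluster signer. A minimal sketch of the request half of that flow: generate a key and a PEM-encoded CSR, then create a certificates.k8s.io/v1 CertificateSigningRequest against the customer-break-glass signer name shown in the log. The CommonName, organization, CSR object name, and kubeconfig path are assumptions; the CSRA object that triggers auto-approval and the CRR that revokes the signer are HyperShift CRDs and are not shown here.

```go
package main

import (
	"context"
	"crypto/ecdsa"
	"crypto/elliptic"
	"crypto/rand"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"

	certificatesv1 "k8s.io/api/certificates/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	key, err := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
	if err != nil {
		panic(err)
	}
	der, err := x509.CreateCertificateRequest(rand.Reader, &x509.CertificateRequest{
		Subject: pkix.Name{
			CommonName:   "customer-break-glass-alice", // assumed subject
			Organization: []string{"system:masters"},   // assumed; the log's SSR check expects system:masters power
		},
	}, key)
	if err != nil {
		panic(err)
	}
	csrPEM := pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE REQUEST", Bytes: der})

	cfg, err := clientcmd.BuildConfigFromFlags("", "/tmp/mgmt.kubeconfig") // assumed path
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)
	csr := &certificatesv1.CertificateSigningRequest{
		ObjectMeta: metav1.ObjectMeta{Name: "customer-break-glass-alice"}, // assumed name
		Spec: certificatesv1.CertificateSigningRequestSpec{
			Request: csrPEM,
			// Per-cluster signer name, as it appears in the log above.
			SignerName: "hypershift.openshift.io/e2e-clusters-9hj9b-create-cluster-rc9fk.customer-break-glass",
			Usages:     []certificatesv1.KeyUsage{certificatesv1.UsageClientAuth},
		},
	}
	if _, err := client.CertificatesV1().CertificateSigningRequests().Create(context.Background(), csr, metav1.CreateOptions{}); err != nil {
		panic(err)
	}
}
```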
util.go:169: failed to patch object create-cluster-rc9fk, will retry: HostedCluster.hypershift.openshift.io "create-cluster-rc9fk" is invalid: [spec: Invalid value: "object": Services is immutable. Changes might result in unpredictable and disruptive behavior., spec.services[0].servicePublishingStrategy: Invalid value: "object": nodePort is required when type is NodePort, and forbidden otherwise, spec.services[0].servicePublishingStrategy: Invalid value: "object": only route is allowed when type is Route, and forbidden otherwise]
util.go:169: failed to patch object create-cluster-rc9fk, will retry: HostedCluster.hypershift.openshift.io "create-cluster-rc9fk" is invalid: spec.controllerAvailabilityPolicy: Invalid value: "string": ControllerAvailabilityPolicy is immutable
util.go:169: failed to patch object create-cluster-rc9fk, will retry: HostedCluster.hypershift.openshift.io "create-cluster-rc9fk" is invalid: spec.capabilities: Invalid value: "object": Capabilities is immutable. Changes might result in unpredictable and disruptive behavior.
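Those "failed to patch ... will retry" lines are expected: the patches deliberately touch immutable HostedCluster fields and the CRD's validation rejects them with an Invalid error. A hedged sketch of that behavior using the dynamic client, with the patched field and value chosen for illustration (the HostedCluster group/version/resource is real):

```go
package main

import (
	"context"
	"fmt"

	apierrors "k8s.io/apimachinery/pkg/api/errors"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/runtime/schema"
	"k8s.io/apimachinery/pkg/types"
	"k8s.io/client-go/dynamic"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/tmp/mgmt.kubeconfig") // assumed path
	if err != nil {
		panic(err)
	}
	client := dynamic.NewForConfigOrDie(cfg)
	gvr := schema.GroupVersionResource{Group: "hypershift.openshift.io", Version: "v1beta1", Resource: "hostedclusters"}

	// Illustrative change to an immutable field; the API rejects it as Invalid,
	// matching the "ControllerAvailabilityPolicy is immutable" message in the log.
	patch := []byte(`{"spec":{"controllerAvailabilityPolicy":"HighlyAvailable"}}`)
	_, err = client.Resource(gvr).Namespace("e2e-clusters-9hj9b").Patch(context.Background(),
		"create-cluster-rc9fk", types.MergePatchType, patch, metav1.PatchOptions{})
	fmt.Println("immutable-field patch rejected as Invalid:", apierrors.IsInvalid(err))
}
```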
util.go:2184: Using Azure-specific retry strategy for DNS propagation race condition
util.go:2193: Generating custom certificate with DNS name api-custom-cert-create-cluster-rc9fk.aks-e2e.hypershift.azure.devcluster.openshift.com
util.go:2198: Creating custom certificate secret
util.go:2214: Updating hosted cluster with KubeAPIDNSName and KAS custom serving cert
util.go:2250: Getting custom kubeconfig client
util.go:247: Successfully waited for KAS custom kubeconfig to be published for HostedCluster e2e-clusters-9hj9b/create-cluster-rc9fk in 3.075s
util.go:264: Successfully waited for KAS custom kubeconfig secret to have data in 25ms
util.go:2255: waiting for the KubeAPIDNSName to be reconciled
util.go:247: Successfully waited for KAS custom kubeconfig to be published for HostedCluster e2e-clusters-9hj9b/create-cluster-rc9fk in 50ms
util.go:264: Successfully waited for KAS custom kubeconfig secret to have data in 25ms
util.go:2267: Finding the external name destination for the KAS Service
util.go:2292: service custom DNS name not found, using the control plane endpoint
util.go:2292: service custom DNS name not found, using the control plane endpoint
util.go:2292: service custom DNS name not found, using the control plane endpoint
util.go:2304: Creating a new KAS Service to be used by the external-dns deployment in CI with the custom DNS name api-custom-cert-create-cluster-rc9fk.aks-e2e.hypershift.azure.devcluster.openshift.com
util.go:2326: [2026-01-09T02:18:41Z] Waiting until the URL is resolvable: api-custom-cert-create-cluster-rc9fk.aks-e2e.hypershift.azure.devcluster.openshift.com
util.go:2326: [2026-01-09T02:18:51Z] Waiting until the URL is resolvable: api-custom-cert-create-cluster-rc9fk.aks-e2e.hypershift.azure.devcluster.openshift.com
util.go:2326: [2026-01-09T02:19:01Z] Waiting until the URL is resolvable: api-custom-cert-create-cluster-rc9fk.aks-e2e.hypershift.azure.devcluster.openshift.com
util.go:2326: [2026-01-09T02:19:11Z] Waiting until the URL is resolvable: api-custom-cert-create-cluster-rc9fk.aks-e2e.hypershift.azure.devcluster.openshift.com
util.go:2326: [2026-01-09T02:19:21Z] Waiting until the URL is resolvable: api-custom-cert-create-cluster-rc9fk.aks-e2e.hypershift.azure.devcluster.openshift.com
util.go:2326: [2026-01-09T02:19:31Z] Waiting until the URL is resolvable: api-custom-cert-create-cluster-rc9fk.aks-e2e.hypershift.azure.devcluster.openshift.com
util.go:2331: resolved the custom DNS name after 50.104150296s
util.go:2336: Waiting until the KAS Deployment is ready
util.go:2431: Successfully waited for the KAS custom kubeconfig secret to be deleted from HC Namespace in 5.075s
util.go:2447: Successfully waited for the KAS custom kubeconfig secret to be deleted from HCP Namespace in 50ms
util.go:2477: Successfully waited for the KAS custom kubeconfig status to be removed in 50ms
util.go:2510: Deleting custom certificate secret
util.go:2351: Checking CustomAdminKubeconfigStatus are present
util.go:2359: Checking CustomAdminKubeconfigs are present
util.go:2372: Checking CustomAdminKubeconfig reaches the KAS
util.go:2374: Using extended retry timeout for Azure DNS propagation
util.go:2390: Successfully verified custom kubeconfig can reach KAS
util.go:2396: Checking CustomAdminKubeconfig Infrastructure status is updated
util.go:2397: Successfully waited for a successful connection to the custom DNS guest API server in 50ms
util.go:2465: Checking CustomAdminKubeconfig are removed
util.go:2494: Checking CustomAdminKubeconfigStatus are removed
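The custom-DNS step above polls every 10s until the new KAS hostname resolves (about 50s in this run). A small sketch of that wait using the standard library; the hostname is copied from the log and the overall timeout is an assumption.

```go
package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	host := "api-custom-cert-create-cluster-rc9fk.aks-e2e.hypershift.azure.devcluster.openshift.com"
	start := time.Now()
	deadline := start.Add(10 * time.Minute) // assumed timeout
	for {
		if addrs, err := net.LookupHost(host); err == nil && len(addrs) > 0 {
			fmt.Printf("resolved the custom DNS name after %s: %v\n", time.Since(start), addrs)
			return
		}
		if time.Now().After(deadline) {
			panic("custom DNS name never became resolvable")
		}
		fmt.Printf("[%s] Waiting until the URL is resolvable: %s\n", time.Now().UTC().Format(time.RFC3339), host)
		time.Sleep(10 * time.Second)
	}
}
```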
util.go:3352: This test is only applicable for AWS platform
util.go:2097: Waiting for ovnkube-node DaemonSet to be ready with 2 nodes
util.go:2139: DaemonSet ovnkube-node ready: 2/2 pods
util.go:2147: ✓ ovnkube-node DaemonSet is ready
util.go:2097: Waiting for global-pull-secret-syncer DaemonSet to be ready with 2 nodes
util.go:2135: DaemonSet global-pull-secret-syncer not ready: 0/2 pods ready
util.go:2135: DaemonSet global-pull-secret-syncer not ready: 0/2 pods ready
util.go:2135: DaemonSet global-pull-secret-syncer not ready: 0/2 pods ready
util.go:2135: DaemonSet global-pull-secret-syncer not ready: 0/2 pods ready
util.go:2135: DaemonSet global-pull-secret-syncer not ready: 0/2 pods ready
util.go:2135: DaemonSet global-pull-secret-syncer not ready: 0/2 pods ready
util.go:2135: DaemonSet global-pull-secret-syncer not ready: 0/2 pods ready
util.go:2135: DaemonSet global-pull-secret-syncer not ready: 1/2 pods ready
util.go:2135: DaemonSet global-pull-secret-syncer not ready: 1/2 pods ready
util.go:2135: DaemonSet global-pull-secret-syncer not ready: 1/2 pods ready
util.go:2135: DaemonSet global-pull-secret-syncer not ready: 1/2 pods ready
util.go:2135: DaemonSet global-pull-secret-syncer not ready: 1/2 pods ready
util.go:2135: DaemonSet global-pull-secret-syncer not ready: 1/2 pods ready
util.go:2135: DaemonSet global-pull-secret-syncer not ready: 1/2 pods ready
util.go:2135: DaemonSet global-pull-secret-syncer not ready: 1/2 pods ready
util.go:2135: DaemonSet global-pull-secret-syncer not ready: 1/2 pods ready
util.go:2139: DaemonSet global-pull-secret-syncer ready: 2/2 pods
util.go:2147: ✓ global-pull-secret-syncer DaemonSet is ready
util.go:2097: Waiting for konnectivity-agent DaemonSet to be ready with 2 nodes
util.go:2135: DaemonSet konnectivity-agent not ready: 0/2 pods ready
util.go:2135: DaemonSet konnectivity-agent not ready: 1/2 pods ready
util.go:2139: DaemonSet konnectivity-agent ready: 2/2 pods
util.go:2147: ✓ konnectivity-agent DaemonSet is ready
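The DaemonSet waits above compare ready pod counts against the expected node count. A hedged sketch of that check: poll each DaemonSet's status until NumberReady matches and the rollout is complete (the "using DesiredNumberScheduled" variant later in the log compares against DesiredNumberScheduled instead of a fixed node count). The kubeconfig path and the kube-system namespaces for the syncer and konnectivity agent are assumptions.

```go
package main

import (
	"context"
	"fmt"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func waitForDaemonSet(client kubernetes.Interface, ns, name string, nodes int32) {
	for {
		ds, err := client.AppsV1().DaemonSets(ns).Get(context.Background(), name, metav1.GetOptions{})
		if err == nil && ds.Status.NumberReady == nodes && ds.Status.UpdatedNumberScheduled == ds.Status.DesiredNumberScheduled {
			fmt.Printf("DaemonSet %s ready: %d/%d pods\n", name, ds.Status.NumberReady, nodes)
			return
		}
		if err == nil {
			fmt.Printf("DaemonSet %s not ready: %d/%d pods ready\n", name, ds.Status.NumberReady, nodes)
		}
		time.Sleep(10 * time.Second)
	}
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/tmp/guest.kubeconfig") // assumed path
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)
	// DaemonSet names follow the log above; namespaces are assumptions.
	waitForDaemonSet(client, "openshift-ovn-kubernetes", "ovnkube-node", 2)
	waitForDaemonSet(client, "kube-system", "global-pull-secret-syncer", 2)
	waitForDaemonSet(client, "kube-system", "konnectivity-agent", 2)
}
```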
util.go:3748: Creating a pod which uses the restricted image
util.go:3773: Attempt 1/3: Creating pod
util.go:3778: Successfully created pod global-pull-secret-fail-pod in namespace kube-system on attempt 1
util.go:3799: Created pod global-pull-secret-fail-pod in namespace kube-system
util.go:3825: Pod is in the desired state, deleting it now
util.go:3828: Deleted the pod
util.go:2097: Waiting for ovnkube-node DaemonSet to be ready with 2 nodes
util.go:2139: DaemonSet ovnkube-node ready: 2/2 pods
util.go:2147: ✓ ovnkube-node DaemonSet is ready
util.go:2097: Waiting for global-pull-secret-syncer DaemonSet to be ready with 2 nodes
util.go:2139: DaemonSet global-pull-secret-syncer ready: 2/2 pods
util.go:2147: ✓ global-pull-secret-syncer DaemonSet is ready
util.go:2097: Waiting for konnectivity-agent DaemonSet to be ready with 2 nodes
util.go:2139: DaemonSet konnectivity-agent ready: 2/2 pods
util.go:2147: ✓ konnectivity-agent DaemonSet is ready
util.go:3748: Creating a pod which uses the restricted image
util.go:3773: Attempt 1/3: Creating pod
util.go:3778: Successfully created pod global-pull-secret-success-pod in namespace kube-system on attempt 1
util.go:3799: Created pod global-pull-secret-success-pod in namespace kube-system
util.go:3816: Pod phase: Pending, shouldFail: false
util.go:3816: Pod phase: Pending, shouldFail: false
util.go:3816: Pod phase: Running, shouldFail: false
util.go:3820: Pod is running! Continuing...
util.go:3825: Pod is in the desired state, deleting it now
util.go:3828: Deleted the pod
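The pod checks above create a pod that pulls a restricted image, poll its phase, and then delete it; with the global pull secret synced the pod reaches Running, and without it the pull is expected to fail. A minimal sketch under assumptions: the image reference is a hypothetical placeholder (the real restricted image is not in the log), and the namespace, poll interval, and attempt count are illustrative.

```go
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/tmp/guest.kubeconfig") // assumed path
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "global-pull-secret-success-pod", Namespace: "kube-system"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "probe",
				Image:   "registry.example.com/private/image:latest", // hypothetical restricted image
				Command: []string{"sleep", "3600"},
			}},
		},
	}
	if _, err := client.CoreV1().Pods(pod.Namespace).Create(context.Background(), pod, metav1.CreateOptions{}); err != nil {
		panic(err)
	}
	for i := 0; i < 30; i++ {
		p, err := client.CoreV1().Pods(pod.Namespace).Get(context.Background(), pod.Name, metav1.GetOptions{})
		if err == nil {
			fmt.Printf("Pod phase: %s\n", p.Status.Phase)
			if p.Status.Phase == corev1.PodRunning {
				break
			}
		}
		time.Sleep(10 * time.Second)
	}
	// Clean up regardless of the final phase, mirroring the "deleting it now" step above.
	_ = client.CoreV1().Pods(pod.Namespace).Delete(context.Background(), pod.Name, metav1.DeleteOptions{})
}
```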
util.go:2041: Waiting for GlobalPullSecretDaemonSet to process the deletion and stabilize all nodes
util.go:2095: Waiting for global-pull-secret-syncer DaemonSet to be ready (using DesiredNumberScheduled)
util.go:2124: DaemonSet global-pull-secret-syncer update in flight: 0/2 pods updated
util.go:2130: DaemonSet global-pull-secret-syncer ready: 1/2 pods ready, rollout complete
util.go:2147: ✓ global-pull-secret-syncer DaemonSet is ready
globalps.go:209: Creating kubelet config verifier DaemonSet
globalps.go:214: Waiting for OVN, GlobalPullSecret, Konnectivity and kubelet config verifier DaemonSets to be ready
util.go:2097: Waiting for ovnkube-node DaemonSet to be ready with 2 nodes
util.go:2135: DaemonSet ovnkube-node not ready: 1/2 pods ready
util.go:2139: DaemonSet ovnkube-node ready: 2/2 pods
util.go:2147: ✓ ovnkube-node DaemonSet is ready
util.go:2097: Waiting for global-pull-secret-syncer DaemonSet to be ready with 2 nodes
util.go:2139: DaemonSet global-pull-secret-syncer ready: 2/2 pods
util.go:2147: ✓ global-pull-secret-syncer DaemonSet is ready
util.go:2097: Waiting for konnectivity-agent DaemonSet to be ready with 2 nodes
util.go:2135: DaemonSet konnectivity-agent not ready: 0/2 pods ready
util.go:2139: DaemonSet konnectivity-agent ready: 2/2 pods
util.go:2147: ✓ konnectivity-agent DaemonSet is ready
util.go:2095: Waiting for kubelet-config-verifier DaemonSet to be ready (using DesiredNumberScheduled)
util.go:2130: DaemonSet kubelet-config-verifier ready: 0/2 pods ready, rollout complete
util.go:2147: ✓ kubelet-config-verifier DaemonSet is ready
globalps.go:229: Cleaning up kubelet config verifier DaemonSet
util_ingress_operator_configuration.go:28: Verifying HostedCluster e2e-clusters-9hj9b/create-cluster-rc9fk has custom Ingress Operator endpointPublishingStrategy
util_ingress_operator_configuration.go:37: Validating IngressController in guest cluster reflects the custom endpointPublishingStrategy
util_ingress_operator_configuration.go:38: Successfully waited for IngressController default in guest cluster to reflect the custom endpointPublishingStrategy in 75ms
util.go:284: Successfully waited for kubeconfig to be published for HostedCluster e2e-clusters-9hj9b/create-cluster-rc9fk in 50ms
util.go:301: Successfully waited for kubeconfig secret to have data in 25ms
util.go:363: Successfully waited for a successful connection to the guest API server in 25ms
util.go:3224: Successfully waited for HostedCluster e2e-clusters-9hj9b/create-cluster-rc9fk to have valid Status.Payload in 100ms
util.go:1156: test only supported on AWS platform, saw Azure
util.go:2527: Checking that all ValidatingAdmissionPolicies are present
util.go:2553: Checking Denied KAS Requests for ValidatingAdmissionPolicies
util.go:2569: Checking ClusterOperator status modifications are allowed
util.go:3917: All 45 pods in namespace e2e-clusters-9hj9b-create-cluster-rc9fk have the expected RunAsUser UID 1004
pki.go:76: loading certificate/key pair from disk for signer customer-break-glass, use $REGENERATE_PKI to generate new ones
control_plane_pki_operator.go:204: creating CSR "2tfpmdk40r8dp2qan8do9aqkxz71crerqqjmkmqu0rai" for signer "customer-break-glass", requesting client auth usages
control_plane_pki_operator.go:214: creating CSRA e2e-clusters-9hj9b-create-cluster-rc9fk/2tfpmdk40r8dp2qan8do9aqkxz71crerqqjmkmqu0rai to trigger automatic approval of the CSR
control_plane_pki_operator.go:221: Successfully waited for CSR "2tfpmdk40r8dp2qan8do9aqkxz71crerqqjmkmqu0rai" to be approved and signed in 25ms
control_plane_pki_operator.go:133: validating that the client certificate provides the appropriate access
control_plane_pki_operator.go:119: amending the existing kubeconfig to use break-glass client certificate credentials
control_plane_pki_operator.go:136: issuing SSR to identify the subject we are given using the client certificate
control_plane_pki_operator.go:156: ensuring that the SSR identifies the client certificate as having system:masters power and correct username
pki.go:76: loading certificate/key pair from disk for signer customer-break-glass, use $REGENERATE_PKI to generate new ones
control_plane_pki_operator.go:168: creating invalid CSR "2gxe1ctdvxj4nxuvyp2rh7ud69tjyd44e34xekqymwgs" for signer "hypershift.openshift.io/e2e-clusters-9hj9b-create-cluster-rc9fk.customer-break-glass", requesting client auth usages
control_plane_pki_operator.go:178: creating CSRA e2e-clusters-9hj9b-create-cluster-rc9fk/2gxe1ctdvxj4nxuvyp2rh7ud69tjyd44e34xekqymwgs to trigger automatic approval of the CSR
control_plane_pki_operator.go:184: Successfully waited for waiting for CSR "2gxe1ctdvxj4nxuvyp2rh7ud69tjyd44e34xekqymwgs" to have invalid CN exposed in status in 25ms
pki.go:76: loading certificate/key pair from disk for signer customer-break-glass, use $REGENERATE_PKI to generate new ones
control_plane_pki_operator.go:256: creating CRR e2e-clusters-9hj9b-create-cluster-rc9fk/1re7hc207plyye2brvt3rad5cwlqrpkg8wk0bhhlyprf to trigger signer certificate revocation
control_plane_pki_operator.go:263: Successfully waited for CRR e2e-clusters-9hj9b-create-cluster-rc9fk/1re7hc207plyye2brvt3rad5cwlqrpkg8wk0bhhlyprf to complete in 2m45.025s
control_plane_pki_operator.go:276: creating a client using the a certificate from the revoked signer
control_plane_pki_operator.go:119: amending the existing kubeconfig to use break-glass client certificate credentials
control_plane_pki_operator.go:279: issuing SSR to confirm that we're not authorized to contact the server
control_plane_pki_operator.go:66: Grabbing customer break-glass credentials from client certificate secret e2e-clusters-9hj9b-create-cluster-rc9fk/customer-system-admin-client-cert-key
control_plane_pki_operator.go:133: validating that the client certificate provides the appropriate access
control_plane_pki_operator.go:119: amending the existing kubeconfig to use break-glass client certificate credentials
control_plane_pki_operator.go:136: issuing SSR to identify the subject we are given using the client certificate
control_plane_pki_operator.go:156: ensuring that the SSR identifies the client certificate as having system:masters power and correct username
control_plane_pki_operator.go:95: generating new break-glass credentials for more than one signer
pki.go:76: loading certificate/key pair from disk for signer customer-break-glass, use $REGENERATE_PKI to generate new ones
control_plane_pki_operator.go:204: creating CSR "100iq3e5i5nxpt25dgemjumeqsghyaxrro4rmjnef8t8" for signer "customer-break-glass", requesting client auth usages
control_plane_pki_operator.go:214: creating CSRA e2e-clusters-9hj9b-create-cluster-rc9fk/100iq3e5i5nxpt25dgemjumeqsghyaxrro4rmjnef8t8 to trigger automatic approval of the CSR
control_plane_pki_operator.go:221: Successfully waited for CSR "100iq3e5i5nxpt25dgemjumeqsghyaxrro4rmjnef8t8" to be approved and signed in 25ms
control_plane_pki_operator.go:133: validating that the client certificate provides the appropriate access
control_plane_pki_operator.go:119: amending the existing kubeconfig to use break-glass client certificate credentials
control_plane_pki_operator.go:136: issuing SSR to identify the subject we are given using the client certificate
control_plane_pki_operator.go:156: ensuring that the SSR identifies the client certificate as having system:masters power and correct username
pki.go:76: loading certificate/key pair from disk for signer sre-break-glass, use $REGENERATE_PKI to generate new ones
control_plane_pki_operator.go:204: creating CSR "2wzwcmt69niwu2ug1i5hj3c2ohemy6bde8jpwai7uo54" for signer "sre-break-glass", requesting client auth usages
control_plane_pki_operator.go:214: creating CSRA e2e-clusters-9hj9b-create-cluster-rc9fk/2wzwcmt69niwu2ug1i5hj3c2ohemy6bde8jpwai7uo54 to trigger automatic approval of the CSR
control_plane_pki_operator.go:221: Successfully waited for CSR "2wzwcmt69niwu2ug1i5hj3c2ohemy6bde8jpwai7uo54" to be approved and signed in 25ms
control_plane_pki_operator.go:133: validating that the client certificate provides the appropriate access
control_plane_pki_operator.go:119: amending the existing kubeconfig to use break-glass client certificate credentials
control_plane_pki_operator.go:136: issuing SSR to identify the subject we are given using the client certificate
control_plane_pki_operator.go:156: ensuring that the SSR identifies the client certificate as having system:masters power and correct username
control_plane_pki_operator.go:99: revoking the "customer-break-glass" signer
pki.go:76: loading certificate/key pair from disk for signer customer-break-glass, use $REGENERATE_PKI to generate new ones
control_plane_pki_operator.go:256: creating CRR e2e-clusters-9hj9b-create-cluster-rc9fk/100iq3e5i5nxpt25dgemjumeqsghyaxrro4rmjnef8t8 to trigger signer certificate revocation
control_plane_pki_operator.go:263: Successfully waited for CRR e2e-clusters-9hj9b-create-cluster-rc9fk/100iq3e5i5nxpt25dgemjumeqsghyaxrro4rmjnef8t8 to complete in 3m45.025s
control_plane_pki_operator.go:276: creating a client using the a certificate from the revoked signer
control_plane_pki_operator.go:119: amending the existing kubeconfig to use break-glass client certificate credentials
control_plane_pki_operator.go:279: issuing SSR to confirm that we're not authorized to contact the server
control_plane_pki_operator.go:102: ensuring the break-glass credentials from "sre-break-glass" signer still work
pki.go:76: loading certificate/key pair from disk for signer sre-break-glass, use $REGENERATE_PKI to generate new ones
control_plane_pki_operator.go:133: validating that the client certificate provides the appropriate access
control_plane_pki_operator.go:119: amending the existing kubeconfig to use break-glass client certificate credentials
control_plane_pki_operator.go:136: issuing SSR to identify the subject we are given using the client certificate
control_plane_pki_operator.go:156: ensuring that the SSR identifies the client certificate as having system:masters power and correct username
pki.go:76: loading certificate/key pair from disk for signer sre-break-glass, use $REGENERATE_PKI to generate new ones
control_plane_pki_operator.go:204: creating CSR "d0h2xzadltqfkrhnd1g3krgb21f8l0ranklykbjbyq9" for signer "sre-break-glass", requesting client auth usages
control_plane_pki_operator.go:214: creating CSRA e2e-clusters-9hj9b-create-cluster-rc9fk/d0h2xzadltqfkrhnd1g3krgb21f8l0ranklykbjbyq9 to trigger automatic approval of the CSR
control_plane_pki_operator.go:221: Successfully waited for CSR "d0h2xzadltqfkrhnd1g3krgb21f8l0ranklykbjbyq9" to be approved and signed in 25ms
control_plane_pki_operator.go:133: validating that the client certificate provides the appropriate access
control_plane_pki_operator.go:119: amending the existing kubeconfig to use break-glass client certificate credentials
control_plane_pki_operator.go:136: issuing SSR to identify the subject we are given using the client certificate
control_plane_pki_operator.go:156: ensuring that the SSR identifies the client certificate as having system:masters power and correct username
pki.go:76: loading certificate/key pair from disk for signer sre-break-glass, use $REGENERATE_PKI to generate new ones
control_plane_pki_operator.go:168: creating invalid CSR "3xrjejwj5ypcticcrt1wm15dmpy0pakzfxghhtnpkdm" for signer "hypershift.openshift.io/e2e-clusters-9hj9b-create-cluster-rc9fk.sre-break-glass", requesting client auth usages
control_plane_pki_operator.go:178: creating CSRA e2e-clusters-9hj9b-create-cluster-rc9fk/3xrjejwj5ypcticcrt1wm15dmpy0pakzfxghhtnpkdm to trigger automatic approval of the CSR
control_plane_pki_operator.go:184: Successfully waited for CSR "3xrjejwj5ypcticcrt1wm15dmpy0pakzfxghhtnpkdm" to have invalid CN exposed in status in 25ms
pki.go:76: loading certificate/key pair from disk for signer sre-break-glass, use $REGENERATE_PKI to generate new ones
control_plane_pki_operator.go:256: creating CRR e2e-clusters-9hj9b-create-cluster-rc9fk/5j2z3zo0x7wl8gzi4bobijpkgv7c1h1odz7ycm93zqz to trigger signer certificate revocation
control_plane_pki_operator.go:263: Successfully waited for CRR e2e-clusters-9hj9b-create-cluster-rc9fk/5j2z3zo0x7wl8gzi4bobijpkgv7c1h1odz7ycm93zqz to complete in 2m36.025s
control_plane_pki_operator.go:276: creating a client using a certificate from the revoked signer
control_plane_pki_operator.go:119: amending the existing kubeconfig to use break-glass client certificate credentials
control_plane_pki_operator.go:279: issuing SSR to confirm that we're not authorized to contact the server
control_plane_pki_operator.go:66: Grabbing SRE break-glass credentials from client certificate secret e2e-clusters-9hj9b-create-cluster-rc9fk/sre-system-admin-client-cert-key
control_plane_pki_operator.go:133: validating that the client certificate provides the appropriate access
control_plane_pki_operator.go:119: amending the existing kubeconfig to use break-glass client certificate credentials
control_plane_pki_operator.go:136: issuing SSR to identify the subject we are given using the client certificate
control_plane_pki_operator.go:156: ensuring that the SSR identifies the client certificate as having system:masters power and correct username
util.go:284: Successfully waited for kubeconfig to be published for HostedCluster e2e-clusters-9hj9b/create-cluster-rc9fk in 2m57.05s
util.go:301: Successfully waited for kubeconfig secret to have data in 25ms
eventually.go:104: Failed to get *v1.SelfSubjectReview: Post "https://api-create-cluster-rc9fk.aks-e2e.hypershift.azure.devcluster.openshift.com:443/apis/authentication.k8s.io/v1/selfsubjectreviews": net/http: TLS handshake timeout
util.go:363: Successfully waited for a successful connection to the guest API server in 11.15s
util.go:565: Successfully waited for 2 nodes to become ready in 9m54.075s
util.go:598: Successfully waited for HostedCluster e2e-clusters-9hj9b/create-cluster-rc9fk to rollout in 3m42.05s
util.go:2949: Successfully waited for HostedCluster e2e-clusters-9hj9b/create-cluster-rc9fk to have valid conditions in 50ms
util.go:284: Successfully waited for kubeconfig to be published for HostedCluster e2e-clusters-9hj9b/create-cluster-rc9fk in 50ms
util.go:301: Successfully waited for kubeconfig secret to have data in 25ms
util.go:284: Successfully waited for kubeconfig to be published for HostedCluster e2e-clusters-9hj9b/create-cluster-rc9fk in 50ms
util.go:301: Successfully waited for kubeconfig secret to have data in 50ms
util.go:565: Successfully waited for 2 nodes to become ready for NodePool e2e-clusters-9hj9b/create-cluster-rc9fk in 150ms
util.go:4095: Successfully validated configuration authentication status consistency across HCP, HC, and guest cluster
hypershift_framework.go:430: Successfully created hostedcluster e2e-clusters-s44hd/custom-config-sx79r in 2m24s
hypershift_framework.go:491: Destroyed cluster. Namespace: e2e-clusters-s44hd, name: custom-config-sx79r
hypershift_framework.go:446: archiving /logs/artifacts/TestCreateClusterCustomConfig/hostedcluster-custom-config-sx79r to /logs/artifacts/TestCreateClusterCustomConfig/hostedcluster.tar.gz
util.go:284: Successfully waited for kubeconfig to be published for HostedCluster e2e-clusters-s44hd/custom-config-sx79r in 3m0.05s
util.go:301: Successfully waited for kubeconfig secret to have data in 25ms
eventually.go:104: Failed to get *v1.SelfSubjectReview: Post "https://api-custom-config-sx79r.aks-e2e.hypershift.azure.devcluster.openshift.com:443/apis/authentication.k8s.io/v1/selfsubjectreviews": net/http: TLS handshake timeout
eventually.go:104: Failed to get *v1.SelfSubjectReview: Post "https://api-custom-config-sx79r.aks-e2e.hypershift.azure.devcluster.openshift.com:443/apis/authentication.k8s.io/v1/selfsubjectreviews": EOF
util.go:363: Successfully waited for a successful connection to the guest API server in 52.65s
util.go:565: Successfully waited for 2 nodes to become ready in 11m18.075s
util.go:598: Successfully waited for HostedCluster e2e-clusters-s44hd/custom-config-sx79r to rollout in 7m24.075s
util.go:2949: Successfully waited for HostedCluster e2e-clusters-s44hd/custom-config-sx79r to have valid conditions in 50ms
util.go:284: Successfully waited for kubeconfig to be published for HostedCluster e2e-clusters-s44hd/custom-config-sx79r in 50ms
util.go:301: Successfully waited for kubeconfig secret to have data in 25ms
util.go:363: Successfully waited for a successful connection to the guest API server in 25ms
hypershift_framework.go:430: Successfully created hostedcluster e2e-clusters-gnmjr/node-pool-cqkc7 in 2m5s
nodepool_test.go:150: tests only supported on platform KubeVirt
hypershift_framework.go:430: Successfully created hostedcluster e2e-clusters-8tpnd/node-pool-gqt6z in 2m18s
util.go:284: Successfully waited for kubeconfig to be published for HostedCluster e2e-clusters-gnmjr/node-pool-cqkc7 in 2m48.075s
util.go:301: Successfully waited for kubeconfig secret to have data in 25ms
eventually.go:104: Failed to get *v1.SelfSubjectReview: Post "https://api-node-pool-cqkc7.aks-e2e.hypershift.azure.devcluster.openshift.com:443/apis/authentication.k8s.io/v1/selfsubjectreviews": net/http: TLS handshake timeout
util.go:363: Successfully waited for a successful connection to the guest API server in 32.5s
util.go:565: Successfully waited for 0 nodes to become ready in 75ms
util.go:2949: Successfully waited for HostedCluster e2e-clusters-gnmjr/node-pool-cqkc7 to have valid conditions in 9m21.075s
util.go:284: Successfully waited for kubeconfig to be published for HostedCluster e2e-clusters-8tpnd/node-pool-gqt6z in 2m42.075s
util.go:301: Successfully waited for kubeconfig secret to have data in 25ms
eventually.go:104: Failed to get *v1.SelfSubjectReview: Post "https://api-node-pool-gqt6z.aks-e2e.hypershift.azure.devcluster.openshift.com:443/apis/authentication.k8s.io/v1/selfsubjectreviews": net/http: TLS handshake timeout
util.go:363: Successfully waited for a successful connection to the guest API server in 18.3s
util.go:565: Successfully waited for 0 nodes to become ready in 75ms
util.go:2949: Successfully waited for HostedCluster e2e-clusters-8tpnd/node-pool-gqt6z to have valid conditions in 9m30.05s
util.go:565: Successfully waited for 0 nodes to become ready for NodePool e2e-clusters-gnmjr/node-pool-cqkc7 in 100ms
util.go:284: Successfully waited for kubeconfig to be published for HostedCluster e2e-clusters-gnmjr/node-pool-cqkc7 in 50ms
util.go:301: Successfully waited for kubeconfig secret to have data in 25ms
util.go:565: Successfully waited for 0 nodes to become ready for NodePool e2e-clusters-8tpnd/node-pool-gqt6z in 125ms
util.go:284: Successfully waited for kubeconfig to be published for HostedCluster e2e-clusters-8tpnd/node-pool-gqt6z in 50ms
util.go:301: Successfully waited for kubeconfig secret to have data in 25ms
util.go:4095: Successfully validated configuration authentication status consistency across HCP, HC, and guest cluster
util.go:284: Successfully waited for kubeconfig to be published for HostedCluster e2e-clusters-gnmjr/node-pool-cqkc7 in 50ms
util.go:301: Successfully waited for kubeconfig secret to have data in 25ms
util.go:363: Successfully waited for a successful connection to the guest API server in 25ms
nodepool_kms_root_volume_test.go:39: test only supported on platform AWS
nodepool_autorepair_test.go:42: test only supported on platform AWS
nodepool_machineconfig_test.go:54: Starting test NodePoolMachineconfigRolloutTest
util.go:565: Successfully waited for 1 nodes to become ready for NodePool e2e-clusters-gnmjr/node-pool-cqkc7-test-machineconfig in 20m36.075s
nodepool_test.go:395: Successfully waited for NodePool e2e-clusters-gnmjr/node-pool-cqkc7-test-machineconfig to have correct status in 25ms
util.go:474: Successfully waited for NodePool e2e-clusters-gnmjr/node-pool-cqkc7-test-machineconfig to start config update in 15.025s
nodepool_nto_machineconfig_test.go:67: Starting test NTOMachineConfigRolloutTest
nodepool_nto_machineconfig_test.go:67: Starting test NTOMachineConfigRolloutTest
util.go:565: Successfully waited for 2 nodes to become ready for NodePool e2e-clusters-gnmjr/node-pool-cqkc7-test-ntomachineconfig-inplace in 16m3.1s
nodepool_test.go:395: Successfully waited for NodePool e2e-clusters-gnmjr/node-pool-cqkc7-test-ntomachineconfig-inplace to have correct status in 25ms
util.go:474: Successfully waited for NodePool e2e-clusters-gnmjr/node-pool-cqkc7-test-ntomachineconfig-inplace to start config update in 15.025s
nodepool_upgrade_test.go:99: starting test NodePoolUpgradeTest
util.go:565: Successfully waited for 1 nodes to become ready for NodePool e2e-clusters-gnmjr/node-pool-cqkc7-test-replaceupgrade in 10m33.05s
nodepool_test.go:395: Successfully waited for NodePool e2e-clusters-gnmjr/node-pool-cqkc7-test-replaceupgrade to have correct status in 50ms
nodepool_upgrade_test.go:177: Validating all Nodes have the synced labels and taints
nodepool_upgrade_test.go:180: Successfully waited for NodePool e2e-clusters-gnmjr/node-pool-cqkc7-test-replaceupgrade to have version 4.22.0-0.ci-2026-01-08-125312 in 25ms
nodepool_upgrade_test.go:197: Updating NodePool image. Image: registry.build01.ci.openshift.org/ci-op-gll2w6iq/release@sha256:45b9a6649d7f4418c1b97767dc4cd2853b7d412de2db90a974eb319999aa510e
nodepool_upgrade_test.go:204: Successfully waited for NodePool e2e-clusters-gnmjr/node-pool-cqkc7-test-replaceupgrade to start the upgrade in 3.05s
nodepool_upgrade_test.go:99: starting test NodePoolUpgradeTest
util.go:565: Successfully waited for 1 nodes to become ready for NodePool e2e-clusters-gnmjr/node-pool-cqkc7-test-inplaceupgrade in 8m9.05s
nodepool_test.go:395: Successfully waited for NodePool e2e-clusters-gnmjr/node-pool-cqkc7-test-inplaceupgrade to have correct status in 50ms
nodepool_upgrade_test.go:177: Validating all Nodes have the synced labels and taints
nodepool_upgrade_test.go:180: Successfully waited for NodePool e2e-clusters-gnmjr/node-pool-cqkc7-test-inplaceupgrade to have version 4.22.0-0.ci-2026-01-08-125312 in 25ms
nodepool_upgrade_test.go:197: Updating NodePool image. Image: registry.build01.ci.openshift.org/ci-op-gll2w6iq/release@sha256:45b9a6649d7f4418c1b97767dc4cd2853b7d412de2db90a974eb319999aa510e
nodepool_upgrade_test.go:204: Successfully waited for NodePool e2e-clusters-gnmjr/node-pool-cqkc7-test-inplaceupgrade to start the upgrade in 3.025s
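
The upgrade subtests above start a rollout by bumping the NodePool's release image. A sketch of that update with a dynamic client and a JSON merge patch; the hypershift.openshift.io/v1beta1 GVR and the spec.release.image path are assumptions about the HyperShift NodePool API and may differ by version:

package e2esketch

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/runtime/schema"
	"k8s.io/apimachinery/pkg/types"
	"k8s.io/client-go/dynamic"
)

// nodePoolGVR is an assumed GroupVersionResource for HyperShift NodePools.
var nodePoolGVR = schema.GroupVersionResource{
	Group:    "hypershift.openshift.io",
	Version:  "v1beta1",
	Resource: "nodepools",
}

// setNodePoolReleaseImage bumps a NodePool to a new release image, the step
// logged as "Updating NodePool image" above.
func setNodePoolReleaseImage(ctx context.Context, client dynamic.Interface, namespace, name, image string) error {
	patch := []byte(fmt.Sprintf(`{"spec":{"release":{"image":%q}}}`, image))
	_, err := client.Resource(nodePoolGVR).Namespace(namespace).
		Patch(ctx, name, types.MergePatchType, patch, metav1.PatchOptions{})
	return err
}
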
nodepool_kv_cache_image_test.go:42: test only supported on platform KubeVirt
nodepool_day2_tags_test.go:43: test only supported on platform AWS
nodepool_kv_qos_guaranteed_test.go:43: test only supported on platform KubeVirt
nodepool_kv_jsonpatch_test.go:42: test only supported on platform KubeVirt
nodepool_kv_nodeselector_test.go:48: test only supported on platform KubeVirt
nodepool_kv_multinet_test.go:36: test only supported on platform KubeVirt
nodepool_osp_advanced_test.go:53: Starting test OpenStackAdvancedTest
nodepool_osp_advanced_test.go:56: test only supported on platform OpenStack
nodepool_nto_performanceprofile_test.go:59: Starting test NTOPerformanceProfileTest
nodepool_prev_release_test.go:33: Starting NodePoolPrevReleaseCreateTest.
nodepool_prev_release_test.go:33: Starting NodePoolPrevReleaseCreateTest.
util.go:565: Successfully waited for 1 nodes to become ready for NodePool e2e-clusters-gnmjr/node-pool-cqkc7-ck6kt in 20m45.075s
nodepool_test.go:395: Successfully waited for NodePool e2e-clusters-gnmjr/node-pool-cqkc7-ck6kt to have correct status in 25ms
nodepool_prev_release_test.go:57: NodePoolPrevReleaseCreateTest tests the creation of a NodePool with a previous OCP release.
nodepool_prev_release_test.go:59: Validating all Nodes have the synced labels and taints
nodepool_test.go:395: Successfully waited for NodePool e2e-clusters-gnmjr/node-pool-cqkc7-ck6kt to have correct status in 25ms
nodepool_prev_release_test.go:33: Starting NodePoolPrevReleaseCreateTest.
util.go:565: Successfully waited for 1 nodes to become ready for NodePool e2e-clusters-gnmjr/node-pool-cqkc7-p6gfm in 14m48.025s
nodepool_test.go:395: Successfully waited for NodePool e2e-clusters-gnmjr/node-pool-cqkc7-p6gfm to have correct status in 25ms
nodepool_prev_release_test.go:57: NodePoolPrevReleaseCreateTest tests the creation of a NodePool with a previous OCP release.
nodepool_prev_release_test.go:59: Validating all Nodes have the synced labels and taints
nodepool_test.go:395: Successfully waited for NodePool e2e-clusters-gnmjr/node-pool-cqkc7-p6gfm to have correct status in 25ms
nodepool_prev_release_test.go:33: Starting NodePoolPrevReleaseCreateTest.
nodepool_test.go:348: NodePool version is outside supported skew, validating condition only (skipping node readiness check)
nodepool_test.go:395: Successfully waited for NodePool e2e-clusters-gnmjr/node-pool-cqkc7-mtcwh to have correct status in 6.1s
nodepool_mirrorconfigs_test.go:60: Starting test MirrorConfigsTest
util.go:565: Successfully waited for 1 nodes to become ready for NodePool e2e-clusters-gnmjr/node-pool-cqkc7-test-mirrorconfigs in 10m36.125s
nodepool_test.go:395: Successfully waited for NodePool e2e-clusters-gnmjr/node-pool-cqkc7-test-mirrorconfigs to have correct status in 25ms
nodepool_mirrorconfigs_test.go:81: Entering MirrorConfigs test
nodepool_mirrorconfigs_test.go:111: Hosted control plane namespace is e2e-clusters-gnmjr-node-pool-cqkc7
nodepool_mirrorconfigs_test.go:113: Successfully waited for KubeletConfig to be mirrored and present in the hosted cluster in 3.025s
nodepool_mirrorconfigs_test.go:157: Deleting KubeletConfig configmap reference from nodepool ...
nodepool_mirrorconfigs_test.go:163: Successfully waited for KubeletConfig configmap to be deleted in 3.025s
nodepool_mirrorconfigs_test.go:101: Exiting MirrorConfigs test: OK
nodepool_test.go:395: Successfully waited for NodePool e2e-clusters-gnmjr/node-pool-cqkc7-test-mirrorconfigs to have correct status in 25ms
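
For context on the MirrorConfigs flow above: the test attaches a ConfigMap carrying a KubeletConfig manifest to the NodePool and expects the operator to mirror it into the hosted control plane namespace. A sketch of such a ConfigMap; the "config" data key and the spec.config reference are assumptions about how HyperShift consumes per-NodePool configuration, and all names are illustrative:

package e2esketch

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// kubeletConfigManifest is an example KubeletConfig embedded in the ConfigMap.
const kubeletConfigManifest = `apiVersion: machineconfiguration.openshift.io/v1
kind: KubeletConfig
metadata:
  name: set-max-pods
spec:
  kubeletConfig:
    maxPods: 150
`

// kubeletConfigConfigMap builds the ConfigMap that the NodePool would reference
// by name in spec.config; the operator then mirrors the rendered config into
// the hosted control plane namespace, which is what the wait above verifies.
func kubeletConfigConfigMap(namespace string) *corev1.ConfigMap {
	return &corev1.ConfigMap{
		ObjectMeta: metav1.ObjectMeta{Name: "kubeletconfig-maxpods", Namespace: namespace},
		Data:       map[string]string{"config": kubeletConfigManifest},
	}
}
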
nodepool_imagetype_test.go:45: test is only supported for AWS platform
util.go:4095: Successfully validated configuration authentication status consistency across HCP, HC, and guest cluster
util.go:284: Successfully waited for kubeconfig to be published for HostedCluster e2e-clusters-8tpnd/node-pool-gqt6z in 50ms
util.go:301: Successfully waited for kubeconfig secret to have data in 25ms
util.go:363: Successfully waited for a successful connection to the guest API server in 75ms
nodepool_additionalTrustBundlePropagation_test.go:38: Starting AdditionalTrustBundlePropagationTest.
util.go:565: Successfully waited for 1 nodes to become ready for NodePool e2e-clusters-8tpnd/node-pool-gqt6z-test-additional-trust-bundle-propagation in 6m0.05s
nodepool_test.go:395: Successfully waited for NodePool e2e-clusters-8tpnd/node-pool-gqt6z-test-additional-trust-bundle-propagation to have correct status in 25ms
nodepool_additionalTrustBundlePropagation_test.go:72: Updating hosted cluster with additional trust bundle. Bundle: additional-trust-bundle
nodepool_additionalTrustBundlePropagation_test.go:80: Successfully waited for NodePool e2e-clusters-8tpnd/node-pool-gqt6z-test-additional-trust-bundle-propagation to begin updating in 10.1s
nodepool_additionalTrustBundlePropagation_test.go:94: Successfully waited for NodePool e2e-clusters-8tpnd/node-pool-gqt6z-test-additional-trust-bundle-propagation to stop updating in 10m30.025s
nodepool_additionalTrustBundlePropagation_test.go:112: Updating hosted cluster by removing additional trust bundle.
nodepool_additionalTrustBundlePropagation_test.go:126: Successfully waited for control plane operator deployment to be updated in 75ms
nodepool_additionalTrustBundlePropagation_test.go:147: Successfully waited for NodePool e2e-clusters-8tpnd/node-pool-gqt6z-test-additional-trust-bundle-propagation to begin updating in 10.075s
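
The trust-bundle propagation steps hinge on pointing the HostedCluster at a ConfigMap holding the extra CA bundle. A hedged sketch of that update; the spec.additionalTrustBundle field name is an assumption from the HyperShift HostedCluster API:

package e2esketch

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/runtime/schema"
	"k8s.io/apimachinery/pkg/types"
	"k8s.io/client-go/dynamic"
)

// hostedClusterGVR is an assumed GroupVersionResource for HyperShift HostedClusters.
var hostedClusterGVR = schema.GroupVersionResource{
	Group:    "hypershift.openshift.io",
	Version:  "v1beta1",
	Resource: "hostedclusters",
}

// setAdditionalTrustBundle points spec.additionalTrustBundle at a ConfigMap,
// roughly the "Updating hosted cluster with additional trust bundle" step above.
// Removal (the later step in the log) would patch the field to null instead.
func setAdditionalTrustBundle(ctx context.Context, client dynamic.Interface, namespace, hcName, configMapName string) error {
	patch := []byte(`{"spec":{"additionalTrustBundle":{"name":"` + configMapName + `"}}}`)
	_, err := client.Resource(hostedClusterGVR).Namespace(namespace).
		Patch(ctx, hcName, types.MergePatchType, patch, metav1.PatchOptions{})
	return err
}
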
control_plane_upgrade_test.go:25: Starting control plane upgrade test. FromImage: registry.build01.ci.openshift.org/ci-op-gll2w6iq/release@sha256:1791cec1bd6882825904d2d2c135d668576192bfe610f267741116db9795d984, ToImage: registry.build01.ci.openshift.org/ci-op-gll2w6iq/release@sha256:45b9a6649d7f4418c1b97767dc4cd2853b7d412de2db90a974eb319999aa510e
hypershift_framework.go:430: Successfully created hostedcluster e2e-clusters-v9p7r/control-plane-upgrade-rk2kg in 2m25s
util.go:284: Successfully waited for kubeconfig to be published for HostedCluster e2e-clusters-v9p7r/control-plane-upgrade-rk2kg in 3m24.05s
util.go:301: Successfully waited for kubeconfig secret to have data in 25ms
eventually.go:104: Failed to get *v1.SelfSubjectReview: Post "https://api-control-plane-upgrade-rk2kg.aks-e2e.hypershift.azure.devcluster.openshift.com:443/apis/authentication.k8s.io/v1/selfsubjectreviews": net/http: TLS handshake timeout
eventually.go:104: Failed to get *v1.SelfSubjectReview: Post "https://api-control-plane-upgrade-rk2kg.aks-e2e.hypershift.azure.devcluster.openshift.com:443/apis/authentication.k8s.io/v1/selfsubjectreviews": EOF
eventually.go:104: Failed to get *v1.SelfSubjectReview: Post "https://api-control-plane-upgrade-rk2kg.aks-e2e.hypershift.azure.devcluster.openshift.com:443/apis/authentication.k8s.io/v1/selfsubjectreviews": net/http: TLS handshake timeout
eventually.go:104: Failed to get *v1.SelfSubjectReview: Post "https://api-control-plane-upgrade-rk2kg.aks-e2e.hypershift.azure.devcluster.openshift.com:443/apis/authentication.k8s.io/v1/selfsubjectreviews": EOF
util.go:363: Successfully waited for a successful connection to the guest API server in 41.8s
util.go:565: Successfully waited for 2 nodes to become ready in 9m0.075s
util.go:598: Successfully waited for HostedCluster e2e-clusters-v9p7r/control-plane-upgrade-rk2kg to rollout in 7m36.075s
util.go:2949: Successfully waited for HostedCluster e2e-clusters-v9p7r/control-plane-upgrade-rk2kg to have valid conditions in 50ms
util.go:565: Successfully waited for 2 nodes to become ready for NodePool e2e-clusters-v9p7r/control-plane-upgrade-rk2kg in 200ms
util.go:284: Successfully waited for kubeconfig to be published for HostedCluster e2e-clusters-v9p7r/control-plane-upgrade-rk2kg in 50ms
util.go:301: Successfully waited for kubeconfig secret to have data in 25ms
util.go:284: Successfully waited for kubeconfig to be published for HostedCluster e2e-clusters-v9p7r/control-plane-upgrade-rk2kg in 75ms
util.go:301: Successfully waited for kubeconfig secret to have data in 25ms
util.go:4095: Successfully validated configuration authentication status consistency across HCP, HC, and guest cluster
util.go:284: Successfully waited for kubeconfig to be published for HostedCluster e2e-clusters-v9p7r/control-plane-upgrade-rk2kg in 50ms
util.go:301: Successfully waited for kubeconfig secret to have data in 25ms
util.go:363: Successfully waited for a successful connection to the guest API server in 50ms
control_plane_upgrade_test.go:52: Updating cluster image. Image: registry.build01.ci.openshift.org/ci-op-gll2w6iq/release@sha256:45b9a6649d7f4418c1b97767dc4cd2853b7d412de2db90a974eb319999aa510e
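
The control plane upgrade itself is the same kind of edit, applied to the HostedCluster's release image rather than a NodePool's; again, spec.release.image is an assumption about the HyperShift API, and hostedClusterGVR is the hypothetical GVR from the previous sketch:

package e2esketch

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/types"
	"k8s.io/client-go/dynamic"
)

// setHostedClusterReleaseImage moves a HostedCluster to a new release image,
// the "Updating cluster image" step above.
func setHostedClusterReleaseImage(ctx context.Context, client dynamic.Interface, namespace, hcName, image string) error {
	patch := []byte(fmt.Sprintf(`{"spec":{"release":{"image":%q}}}`, image))
	_, err := client.Resource(hostedClusterGVR).Namespace(namespace).
		Patch(ctx, hcName, types.MergePatchType, patch, metav1.PatchOptions{})
	return err
}
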