Failed Tests
hypershift_framework.go:430: Successfully created hostedcluster e2e-clusters-27v8l/autoscaling-lkt58 in 2m31s
util.go:284: Successfully waited for kubeconfig to be published for HostedCluster e2e-clusters-27v8l/autoscaling-lkt58 in 4m39.05s
util.go:301: Successfully waited for kubeconfig secret to have data in 50ms
eventually.go:104: Failed to get *v1.SelfSubjectReview: Post "https://api-autoscaling-lkt58.aks-e2e.hypershift.azure.devcluster.openshift.com:443/apis/authentication.k8s.io/v1/selfsubjectreviews": net/http: TLS handshake timeout
eventually.go:104: Failed to get *v1.SelfSubjectReview: Post "https://api-autoscaling-lkt58.aks-e2e.hypershift.azure.devcluster.openshift.com:443/apis/authentication.k8s.io/v1/selfsubjectreviews": EOF
eventually.go:104: Failed to get *v1.SelfSubjectReview: Post "https://api-autoscaling-lkt58.aks-e2e.hypershift.azure.devcluster.openshift.com:443/apis/authentication.k8s.io/v1/selfsubjectreviews": net/http: TLS handshake timeout
eventually.go:104: Failed to get *v1.SelfSubjectReview: Post "https://api-autoscaling-lkt58.aks-e2e.hypershift.azure.devcluster.openshift.com:443/apis/authentication.k8s.io/v1/selfsubjectreviews": EOF
eventually.go:104: Failed to get *v1.SelfSubjectReview: Post "https://api-autoscaling-lkt58.aks-e2e.hypershift.azure.devcluster.openshift.com:443/apis/authentication.k8s.io/v1/selfsubjectreviews": net/http: TLS handshake timeout
eventually.go:104: Failed to get *v1.SelfSubjectReview: Post "https://api-autoscaling-lkt58.aks-e2e.hypershift.azure.devcluster.openshift.com:443/apis/authentication.k8s.io/v1/selfsubjectreviews": EOF
util.go:363: Successfully waited for a successful connection to the guest API server in 50.875s
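The eventually.go failures above are the harness probing the freshly published guest API endpoint until it answers; the TLS handshake timeouts and EOFs are expected while the KAS and its route converge. A minimal sketch of such a probe, assuming a client-go clientset built from the guest kubeconfig (not the actual eventually.go/util.go code):

```go
// Sketch of a guest API server reachability probe; assumes a client-go
// kubernetes.Interface for the guest cluster. Not the harness implementation.
package probe

import (
	"context"
	"fmt"
	"time"

	authenticationv1 "k8s.io/api/authentication/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

// waitForGuestAPIServer polls the guest API server by POSTing a SelfSubjectReview,
// tolerating transient TLS handshake timeouts and EOFs while the KAS comes up.
func waitForGuestAPIServer(ctx context.Context, client kubernetes.Interface) error {
	return wait.PollUntilContextTimeout(ctx, 5*time.Second, 15*time.Minute, true, func(ctx context.Context) (bool, error) {
		_, err := client.AuthenticationV1().SelfSubjectReviews().Create(ctx, &authenticationv1.SelfSubjectReview{}, metav1.CreateOptions{})
		if err != nil {
			// Transient failures are expected until the endpoint is healthy.
			fmt.Printf("Failed to get *v1.SelfSubjectReview: %v\n", err)
			return false, nil
		}
		return true, nil
	})
}
```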
util.go:565: Successfully waited for 1 nodes to become ready in 14m24.075s
util.go:598: Successfully waited for HostedCluster e2e-clusters-27v8l/autoscaling-lkt58 to rollout in 3m57.05s
util.go:2949: Successfully waited for HostedCluster e2e-clusters-27v8l/autoscaling-lkt58 to have valid conditions in 50ms
util.go:565: Successfully waited for 1 nodes to become ready for NodePool e2e-clusters-27v8l/autoscaling-lkt58 in 150ms
util.go:284: Successfully waited for kubeconfig to be published for HostedCluster e2e-clusters-27v8l/autoscaling-lkt58 in 50ms
util.go:301: Successfully waited for kubeconfig secret to have data in 50ms
util.go:284: Successfully waited for kubeconfig to be published for HostedCluster e2e-clusters-27v8l/autoscaling-lkt58 in 75ms
util.go:301: Successfully waited for kubeconfig secret to have data in 50ms
util.go:4095: Successfully validated configuration authentication status consistency across HCP, HC, and guest cluster
util.go:284: Successfully waited for kubeconfig to be published for HostedCluster e2e-clusters-27v8l/autoscaling-lkt58 in 50ms
util.go:301: Successfully waited for kubeconfig secret to have data in 50ms
util.go:363: Successfully waited for a successful connection to the guest API server in 25ms
util.go:565: Successfully waited for 1 nodes to become ready in 75ms
autoscaling_test.go:118: Enabled autoscaling. Namespace: e2e-clusters-27v8l, name: autoscaling-lkt58, min: 1, max: 3
autoscaling_test.go:137: Created workload. Node: autoscaling-lkt58-tfblr-glzjh, memcapacity: 15219268Ki
util.go:565: Successfully waited for 3 nodes to become ready in 5m42.1s
autoscaling_test.go:157: Deleted workload
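The scale-up above is driven by a workload sized against the node's reported memcapacity so that extra replicas stay Pending until the autoscaler adds nodes (up to the max of 3). A rough sketch of such a workload, with illustrative names, image, and sizes rather than the values autoscaling_test.go uses:

```go
// Minimal sketch of a memory-heavy workload that pushes the cluster autoscaler
// from 1 toward 3 nodes; all names and sizes here are illustrative assumptions.
package workload

import (
	"context"

	appsv1 "k8s.io/api/apps/v1"
	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/utils/ptr"
)

func createScaleUpWorkload(ctx context.Context, client kubernetes.Interface, namespace string) error {
	deployment := &appsv1.Deployment{
		ObjectMeta: metav1.ObjectMeta{Name: "autoscaling-workload", Namespace: namespace},
		Spec: appsv1.DeploymentSpec{
			// More replicas than one node's memory can hold, so pending pods
			// trigger scale-up until the NodePool max is reached.
			Replicas: ptr.To[int32](4),
			Selector: &metav1.LabelSelector{MatchLabels: map[string]string{"app": "autoscaling-workload"}},
			Template: corev1.PodTemplateSpec{
				ObjectMeta: metav1.ObjectMeta{Labels: map[string]string{"app": "autoscaling-workload"}},
				Spec: corev1.PodSpec{
					Containers: []corev1.Container{{
						Name:  "pause",
						Image: "registry.k8s.io/pause:3.9",
						Resources: corev1.ResourceRequirements{
							Requests: corev1.ResourceList{
								// Roughly half the ~15Gi node capacity seen in the log,
								// so only one replica fits per node.
								corev1.ResourceMemory: resource.MustParse("7Gi"),
							},
						},
					}},
				},
			},
		},
	}
	_, err := client.AppsV1().Deployments(namespace).Create(ctx, deployment, metav1.CreateOptions{})
	return err
}
```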
hypershift_framework.go:430: Successfully created hostedcluster e2e-clusters-r869g/azure-scheduler-vqj94 in 2m19s
util.go:284: Successfully waited for kubeconfig to be published for HostedCluster e2e-clusters-r869g/azure-scheduler-vqj94 in 4m0.05s
util.go:301: Successfully waited for kubeconfig secret to have data in 50ms
eventually.go:104: Failed to get *v1.SelfSubjectReview: Post "https://api-azure-scheduler-vqj94.aks-e2e.hypershift.azure.devcluster.openshift.com:443/apis/authentication.k8s.io/v1/selfsubjectreviews": net/http: TLS handshake timeout
util.go:363: Successfully waited for a successful connection to the guest API server in 13.275s
util.go:565: Successfully waited for 2 nodes to become ready in 14m12.1s
util.go:598: Successfully waited for HostedCluster e2e-clusters-r869g/azure-scheduler-vqj94 to rollout in 8m27.05s
util.go:2949: Successfully waited for HostedCluster e2e-clusters-r869g/azure-scheduler-vqj94 to have valid conditions in 50ms
util.go:565: Successfully waited for 2 nodes to become ready for NodePool e2e-clusters-r869g/azure-scheduler-vqj94 in 200ms
util.go:284: Successfully waited for kubeconfig to be published for HostedCluster e2e-clusters-r869g/azure-scheduler-vqj94 in 50ms
util.go:301: Successfully waited for kubeconfig secret to have data in 50ms
util.go:284: Successfully waited for kubeconfig to be published for HostedCluster e2e-clusters-r869g/azure-scheduler-vqj94 in 75ms
util.go:301: Successfully waited for kubeconfig secret to have data in 50ms
util.go:4095: Successfully validated configuration authentication status consistency across HCP, HC, and guest cluster
util.go:284: Successfully waited for kubeconfig to be published for HostedCluster e2e-clusters-r869g/azure-scheduler-vqj94 in 50ms
util.go:301: Successfully waited for kubeconfig secret to have data in 50ms
util.go:363: Successfully waited for a successful connection to the guest API server in 50ms
util.go:565: Successfully waited for 2 nodes to become ready in 75ms
azure_scheduler_test.go:110: Updated clusterSizingConfig.
azure_scheduler_test.go:157: Successfully waited for HostedCluster size label and annotations updated in 50ms
azure_scheduler_test.go:149: Scaled Nodepool. Namespace: e2e-clusters-r869g, name: azure-scheduler-vqj94, replicas: 0xc0036ac460
util.go:565: Successfully waited for 3 nodes to become ready in 5m6.1s
azure_scheduler_test.go:157: Successfully waited for HostedCluster size label and annotations updated in 75ms
azure_scheduler_test.go:181: Successfully waited for control-plane-operator pod is running with expected resource request in 50ms
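The "Scaled Nodepool" step bumps the NodePool replica count so the scheduler re-sizes the hosted control plane; note the log prints the *int32 pointer (0xc0036ac460) rather than its value. A sketch of such a scale operation using an unstructured merge patch; the group/version and spec.replicas path are assumptions drawn from the hypershift.openshift.io/v1beta1 API, not azure_scheduler_test.go:

```go
// Rough sketch of scaling a NodePool via a merge patch with controller-runtime's
// unstructured client; not the e2e test code.
package scale

import (
	"context"
	"fmt"

	"k8s.io/apimachinery/pkg/apis/meta/v1/unstructured"
	"k8s.io/apimachinery/pkg/runtime/schema"
	"k8s.io/apimachinery/pkg/types"
	crclient "sigs.k8s.io/controller-runtime/pkg/client"
)

func scaleNodePool(ctx context.Context, c crclient.Client, namespace, name string, replicas int64) error {
	nodePool := &unstructured.Unstructured{}
	nodePool.SetGroupVersionKind(schema.GroupVersionKind{
		Group: "hypershift.openshift.io", Version: "v1beta1", Kind: "NodePool",
	})
	nodePool.SetNamespace(namespace)
	nodePool.SetName(name)

	patch := []byte(fmt.Sprintf(`{"spec":{"replicas":%d}}`, replicas))
	// Aside on the log above: printing a *int32 directly is what yields
	// "replicas: 0xc0036ac460"; dereference it when logging the new value.
	return c.Patch(ctx, nodePool, crclient.RawPatch(types.MergePatchType, patch))
}
```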
util.go:284: Successfully waited for kubeconfig to be published for HostedCluster e2e-clusters-r869g/azure-scheduler-vqj94 in 50ms
util.go:301: Successfully waited for kubeconfig secret to have data in 50ms
util.go:363: Successfully waited for a successful connection to the guest API server in 25ms
util.go:3224: Successfully waited for HostedCluster e2e-clusters-r869g/azure-scheduler-vqj94 to have valid Status.Payload in 100ms
util.go:1156: test only supported on AWS platform, saw Azure
util.go:2527: Checking that all ValidatingAdmissionPolicies are present
util.go:2553: Checking Denied KAS Requests for ValidatingAdmissionPolicies
util.go:2569: Checking ClusterOperator status modifications are allowed
util.go:3917: All 46 pods in namespace e2e-clusters-r869g-azure-scheduler-vqj94 have the expected RunAsUser UID 1004
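A check like the RunAsUser assertion above can be expressed as a simple pod listing; the sketch below assumes the pod-level security context carries the UID and is not the util.go implementation:

```go
// Sketch of a RunAsUser audit over all control-plane pods in a namespace.
package security

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

func verifyRunAsUser(ctx context.Context, client kubernetes.Interface, namespace string, expectedUID int64) error {
	pods, err := client.CoreV1().Pods(namespace).List(ctx, metav1.ListOptions{})
	if err != nil {
		return err
	}
	for _, pod := range pods.Items {
		// The pod-level security context is the effective default for all containers.
		sc := pod.Spec.SecurityContext
		if sc == nil || sc.RunAsUser == nil || *sc.RunAsUser != expectedUID {
			return fmt.Errorf("pod %s does not run as UID %d", pod.Name, expectedUID)
		}
	}
	fmt.Printf("All %d pods in namespace %s have the expected RunAsUser UID %d\n", len(pods.Items), namespace, expectedUID)
	return nil
}
```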
hypershift_framework.go:430: Successfully created hostedcluster e2e-clusters-n4kkb/create-cluster-dmxkj in 2m27s
util.go:284: Successfully waited for kubeconfig to be published for HostedCluster e2e-clusters-n4kkb/create-cluster-dmxkj in 4m6.075s
util.go:301: Successfully waited for kubeconfig secret to have data in 50ms
eventually.go:104: Failed to get *v1.SelfSubjectReview: Post "https://api-create-cluster-dmxkj.aks-e2e.hypershift.azure.devcluster.openshift.com:443/apis/authentication.k8s.io/v1/selfsubjectreviews": net/http: TLS handshake timeout
util.go:363: Successfully waited for a successful connection to the guest API server in 10.2s
util.go:565: Successfully waited for 2 nodes to become ready in 14m6.075s
util.go:598: Successfully waited for HostedCluster e2e-clusters-n4kkb/create-cluster-dmxkj to rollout in 4m57.075s
util.go:2949: Successfully waited for HostedCluster e2e-clusters-n4kkb/create-cluster-dmxkj to have valid conditions in 50ms
util.go:565: Successfully waited for 2 nodes to become ready for NodePool e2e-clusters-n4kkb/create-cluster-dmxkj in 175ms
util.go:284: Successfully waited for kubeconfig to be published for HostedCluster e2e-clusters-n4kkb/create-cluster-dmxkj in 50ms
util.go:301: Successfully waited for kubeconfig secret to have data in 50ms
util.go:284: Successfully waited for kubeconfig to be published for HostedCluster e2e-clusters-n4kkb/create-cluster-dmxkj in 50ms
util.go:301: Successfully waited for kubeconfig secret to have data in 50ms
util.go:4095: Successfully validated configuration authentication status consistency across HCP, HC, and guest cluster
util.go:284: Successfully waited for kubeconfig to be published for HostedCluster e2e-clusters-n4kkb/create-cluster-dmxkj in 50ms
util.go:301: Successfully waited for kubeconfig secret to have data in 50ms
util.go:363: Successfully waited for a successful connection to the guest API server in 25ms
create_cluster_test.go:2532: fetching mgmt kubeconfig
util.go:284: Successfully waited for kubeconfig to be published for HostedCluster e2e-clusters-n4kkb/create-cluster-dmxkj in 50ms
util.go:301: Successfully waited for kubeconfig secret to have data in 50ms
util.go:284: Successfully waited for kubeconfig to be published for HostedCluster e2e-clusters-n4kkb/create-cluster-dmxkj in 100ms
util.go:301: Successfully waited for kubeconfig secret to have data in 50ms
util.go:363: Successfully waited for a successful connection to the guest API server in 225ms
util.go:1928: NodePool replicas: 2, Available nodes: 2
util.go:2021: Deleting the additional-pull-secret secret in the DataPlane
control_plane_pki_operator.go:95: generating new break-glass credentials for more than one signer
pki.go:76: loading certificate/key pair from disk for signer customer-break-glass, use $REGENERATE_PKI to generate new ones
control_plane_pki_operator.go:204: creating CSR "21498v98vr7fbla055bt5qlkxq9wskl67rve5t815ugo" for signer "customer-break-glass", requesting client auth usages
control_plane_pki_operator.go:214: creating CSRA e2e-clusters-n4kkb-create-cluster-dmxkj/21498v98vr7fbla055bt5qlkxq9wskl67rve5t815ugo to trigger automatic approval of the CSR
control_plane_pki_operator.go:221: Successfully waited for CSR "21498v98vr7fbla055bt5qlkxq9wskl67rve5t815ugo" to be approved and signed in 50ms
control_plane_pki_operator.go:133: validating that the client certificate provides the appropriate access
control_plane_pki_operator.go:119: amending the existing kubeconfig to use break-glass client certificate credentials
control_plane_pki_operator.go:136: issuing SSR to identify the subject we are given using the client certificate
control_plane_pki_operator.go:156: ensuring that the SSR identifies the client certificate as having system:masters power and correct username
pki.go:76: loading certificate/key pair from disk for signer sre-break-glass, use $REGENERATE_PKI to generate new ones
control_plane_pki_operator.go:204: creating CSR "2c4yeaawts6gzerwmr2rqg9fvh0vcj7vtzrt3esmwp08" for signer "sre-break-glass", requesting client auth usages
control_plane_pki_operator.go:214: creating CSRA e2e-clusters-n4kkb-create-cluster-dmxkj/2c4yeaawts6gzerwmr2rqg9fvh0vcj7vtzrt3esmwp08 to trigger automatic approval of the CSR
control_plane_pki_operator.go:221: Successfully waited for CSR "2c4yeaawts6gzerwmr2rqg9fvh0vcj7vtzrt3esmwp08" to be approved and signed in 50ms
control_plane_pki_operator.go:133: validating that the client certificate provides the appropriate access
control_plane_pki_operator.go:119: amending the existing kubeconfig to use break-glass client certificate credentials
control_plane_pki_operator.go:136: issuing SSR to identify the subject we are given using the client certificate
control_plane_pki_operator.go:156: ensuring that the SSR identifies the client certificate as having system:masters power and correct username
control_plane_pki_operator.go:99: revoking the "customer-break-glass" signer
pki.go:76: loading certificate/key pair from disk for signer customer-break-glass, use $REGENERATE_PKI to generate new ones
control_plane_pki_operator.go:256: creating CRR e2e-clusters-n4kkb-create-cluster-dmxkj/21498v98vr7fbla055bt5qlkxq9wskl67rve5t815ugo to trigger signer certificate revocation
control_plane_pki_operator.go:263: Successfully waited for CRR e2e-clusters-n4kkb-create-cluster-dmxkj/21498v98vr7fbla055bt5qlkxq9wskl67rve5t815ugo to complete in 3m48.05s
control_plane_pki_operator.go:276: creating a client using a certificate from the revoked signer
control_plane_pki_operator.go:119: amending the existing kubeconfig to use break-glass client certificate credentials
control_plane_pki_operator.go:279: issuing SSR to confirm that we're not authorized to contact the server
control_plane_pki_operator.go:102: ensuring the break-glass credentials from "sre-break-glass" signer still work
pki.go:76: loading certificate/key pair from disk for signer sre-break-glass, use $REGENERATE_PKI to generate new ones
control_plane_pki_operator.go:133: validating that the client certificate provides the appropriate access
control_plane_pki_operator.go:119: amending the existing kubeconfig to use break-glass client certificate credentials
control_plane_pki_operator.go:136: issuing SSR to identify the subject we are given using the client certificate
control_plane_pki_operator.go:156: ensuring that the SSR identifies the client certificate as having system:masters power and correct username
control_plane_pki_operator.go:66: Grabbing customer break-glass credentials from client certificate secret e2e-clusters-n4kkb-create-cluster-dmxkj/customer-system-admin-client-cert-key
control_plane_pki_operator.go:133: validating that the client certificate provides the appropriate access
control_plane_pki_operator.go:119: amending the existing kubeconfig to use break-glass client certificate credentials
control_plane_pki_operator.go:136: issuing SSR to identify the subject we are given using the client certificate
control_plane_pki_operator.go:156: ensuring that the SSR identifies the client certificate as having system:masters power and correct username
control_plane_pki_operator.go:66: Grabbing customer break-glass credentials from client certificate secret e2e-clusters-n4kkb-create-cluster-dmxkj/sre-system-admin-client-cert-key
control_plane_pki_operator.go:133: validating that the client certificate provides the appropriate access
control_plane_pki_operator.go:119: amending the existing kubeconfig to use break-glass client certificate credentials
control_plane_pki_operator.go:136: issuing SSR to identify the subject we are given using the client certificate
control_plane_pki_operator.go:156: ensuring that the SSR identifies the client certificate as having system:masters power and correct username
pki.go:76: loading certificate/key pair from disk for signer sre-break-glass, use $REGENERATE_PKI to generate new ones
control_plane_pki_operator.go:204: creating CSR "1jvnhuo33w86s5vsqjsa8hlnbzim7o2fuhzi4mxaluqw" for signer "sre-break-glass", requesting client auth usages
control_plane_pki_operator.go:214: creating CSRA e2e-clusters-n4kkb-create-cluster-dmxkj/1jvnhuo33w86s5vsqjsa8hlnbzim7o2fuhzi4mxaluqw to trigger automatic approval of the CSR
control_plane_pki_operator.go:221: Successfully waited for CSR "1jvnhuo33w86s5vsqjsa8hlnbzim7o2fuhzi4mxaluqw" to be approved and signed in 50ms
control_plane_pki_operator.go:133: validating that the client certificate provides the appropriate access
control_plane_pki_operator.go:119: amending the existing kubeconfig to use break-glass client certificate credentials
control_plane_pki_operator.go:136: issuing SSR to identify the subject we are given using the client certificate
control_plane_pki_operator.go:156: ensuring that the SSR identifies the client certificate as having system:masters power and correct username
pki.go:76: loading certificate/key pair from disk for signer sre-break-glass, use $REGENERATE_PKI to generate new ones
control_plane_pki_operator.go:168: creating invalid CSR "27sggz7jajkqzsu15znu49jtdtg3m86nwkyxv3f7nrsv" for signer "hypershift.openshift.io/e2e-clusters-n4kkb-create-cluster-dmxkj.sre-break-glass", requesting client auth usages
control_plane_pki_operator.go:178: creating CSRA e2e-clusters-n4kkb-create-cluster-dmxkj/27sggz7jajkqzsu15znu49jtdtg3m86nwkyxv3f7nrsv to trigger automatic approval of the CSR
control_plane_pki_operator.go:184: Successfully waited for CSR "27sggz7jajkqzsu15znu49jtdtg3m86nwkyxv3f7nrsv" to have invalid CN exposed in status in 50ms
pki.go:76: loading certificate/key pair from disk for signer customer-break-glass, use $REGENERATE_PKI to generate new ones
control_plane_pki_operator.go:204: creating CSR "2st2d85ln26tlgu6w24qodyq0dhovmbpetc062ci8jk1" for signer "customer-break-glass", requesting client auth usages
control_plane_pki_operator.go:214: creating CSRA e2e-clusters-n4kkb-create-cluster-dmxkj/2st2d85ln26tlgu6w24qodyq0dhovmbpetc062ci8jk1 to trigger automatic approval of the CSR
control_plane_pki_operator.go:221: Successfully waited for CSR "2st2d85ln26tlgu6w24qodyq0dhovmbpetc062ci8jk1" to be approved and signed in 50ms
control_plane_pki_operator.go:133: validating that the client certificate provides the appropriate access
control_plane_pki_operator.go:119: amending the existing kubeconfig to use break-glass client certificate credentials
control_plane_pki_operator.go:136: issuing SSR to identify the subject we are given using the client certificate
control_plane_pki_operator.go:156: ensuring that the SSR identifies the client certificate as having system:masters power and correct username
pki.go:76: loading certificate/key pair from disk for signer customer-break-glass, use $REGENERATE_PKI to generate new ones
control_plane_pki_operator.go:168: creating invalid CSR "1qj2p0n64l4sbtjht8cc2wthgrzwmgnjgx8yx977k9hx" for signer "hypershift.openshift.io/e2e-clusters-n4kkb-create-cluster-dmxkj.customer-break-glass", requesting client auth usages
control_plane_pki_operator.go:178: creating CSRA e2e-clusters-n4kkb-create-cluster-dmxkj/1qj2p0n64l4sbtjht8cc2wthgrzwmgnjgx8yx977k9hx to trigger automatic approval of the CSR
control_plane_pki_operator.go:184: Successfully waited for CSR "1qj2p0n64l4sbtjht8cc2wthgrzwmgnjgx8yx977k9hx" to have invalid CN exposed in status in 50ms
pki.go:76: loading certificate/key pair from disk for signer sre-break-glass, use $REGENERATE_PKI to generate new ones
control_plane_pki_operator.go:256: creating CRR e2e-clusters-n4kkb-create-cluster-dmxkj/pips56tfatmvh1tuz1hny7rvlchiv594bcoihogrvsh to trigger signer certificate revocation
control_plane_pki_operator.go:263: Successfully waited for CRR e2e-clusters-n4kkb-create-cluster-dmxkj/pips56tfatmvh1tuz1hny7rvlchiv594bcoihogrvsh to complete in 2m9.05s
control_plane_pki_operator.go:276: creating a client using a certificate from the revoked signer
control_plane_pki_operator.go:119: amending the existing kubeconfig to use break-glass client certificate credentials
control_plane_pki_operator.go:279: issuing SSR to confirm that we're not authorized to contact the server
pki.go:76: loading certificate/key pair from disk for signer customer-break-glass, use $REGENERATE_PKI to generate new ones
control_plane_pki_operator.go:256: creating CRR e2e-clusters-n4kkb-create-cluster-dmxkj/1whl85x67dob5qigyi2qesngc1ulnlb89n0xidbqtxoq to trigger signer certificate revocation
control_plane_pki_operator.go:263: Successfully waited for CRR e2e-clusters-n4kkb-create-cluster-dmxkj/1whl85x67dob5qigyi2qesngc1ulnlb89n0xidbqtxoq to complete in 2m12.05s
control_plane_pki_operator.go:276: creating a client using a certificate from the revoked signer
control_plane_pki_operator.go:119: amending the existing kubeconfig to use break-glass client certificate credentials
control_plane_pki_operator.go:279: issuing SSR to confirm that we're not authorized to contact the server
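The break-glass flow above follows a fixed pattern: build a CSR for the per-cluster signer with client-auth usage, let a CSRA trigger approval, then later revoke the whole signer with a CRR. The sketch below covers only the standard certificates.k8s.io/v1 part; the signer-name format and the CSRA/CRR custom resources are HyperShift-specific and appear only in comments:

```go
// Sketch of the "creating CSR ... requesting client auth usages" step;
// not the control_plane_pki_operator.go code.
package breakglass

import (
	"context"
	"crypto/ecdsa"
	"crypto/elliptic"
	"crypto/rand"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"

	certificatesv1 "k8s.io/api/certificates/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

func createBreakGlassCSR(ctx context.Context, client kubernetes.Interface, name, signerName, commonName string) error {
	key, err := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
	if err != nil {
		return err
	}
	der, err := x509.CreateCertificateRequest(rand.Reader, &x509.CertificateRequest{
		Subject: pkix.Name{CommonName: commonName},
	}, key)
	if err != nil {
		return err
	}
	csr := &certificatesv1.CertificateSigningRequest{
		ObjectMeta: metav1.ObjectMeta{Name: name},
		Spec: certificatesv1.CertificateSigningRequestSpec{
			// e.g. "hypershift.openshift.io/<hcp-namespace>.customer-break-glass" per the log.
			SignerName: signerName,
			Request:    pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE REQUEST", Bytes: der}),
			Usages:     []certificatesv1.KeyUsage{certificatesv1.UsageClientAuth},
		},
	}
	// A CertificateSigningRequestApproval (CSRA) in the HCP namespace then triggers
	// automatic approval; a CertificateRevocationRequest (CRR) later revokes the signer.
	_, err = client.CertificatesV1().CertificateSigningRequests().Create(ctx, csr, metav1.CreateOptions{})
	return err
}
```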
util.go:169: failed to patch object create-cluster-dmxkj, will retry: HostedCluster.hypershift.openshift.io "create-cluster-dmxkj" is invalid: [spec: Invalid value: "object": Services is immutable. Changes might result in unpredictable and disruptive behavior., spec.services[0].servicePublishingStrategy: Invalid value: "object": nodePort is required when type is NodePort, and forbidden otherwise, spec.services[0].servicePublishingStrategy: Invalid value: "object": only route is allowed when type is Route, and forbidden otherwise]
util.go:169: failed to patch object create-cluster-dmxkj, will retry: HostedCluster.hypershift.openshift.io "create-cluster-dmxkj" is invalid: spec.controllerAvailabilityPolicy: Invalid value: "string": ControllerAvailabilityPolicy is immutable
util.go:169: failed to patch object create-cluster-dmxkj, will retry: HostedCluster.hypershift.openshift.io "create-cluster-dmxkj" is invalid: spec.capabilities: Invalid value: "object": Capabilities is immutable. Changes might result in unpredictable and disruptive behavior.
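The three rejections above come from CRD validation rules that mark Services, ControllerAvailabilityPolicy, and Capabilities immutable, so the harness logs the reason and retries until it gives up. A generic retry wrapper in that spirit (an assumption, not util.go:169) might look like:

```go
// Sketch of a patch-with-retry helper; the name, backoff, and retry predicate
// are assumptions rather than the util.go implementation. Patches to immutable
// fields will keep failing with Invalid until the retries are exhausted.
package patching

import (
	"context"
	"fmt"

	apierrors "k8s.io/apimachinery/pkg/api/errors"
	"k8s.io/client-go/util/retry"
	crclient "sigs.k8s.io/controller-runtime/pkg/client"
)

func patchWithRetry(ctx context.Context, c crclient.Client, obj crclient.Object, patch crclient.Patch) error {
	return retry.OnError(retry.DefaultBackoff, func(err error) bool {
		// Retry on conflicts and validation rejections; log the reason each time.
		if apierrors.IsConflict(err) || apierrors.IsInvalid(err) {
			fmt.Printf("failed to patch object %s, will retry: %v\n", obj.GetName(), err)
			return true
		}
		return false
	}, func() error {
		return c.Patch(ctx, obj, patch)
	})
}
```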
util.go:2184: Using Azure-specific retry strategy for DNS propagation race condition
util.go:2193: Generating custom certificate with DNS name api-custom-cert-create-cluster-dmxkj.aks-e2e.hypershift.azure.devcluster.openshift.com
util.go:2198: Creating custom certificate secret
util.go:2214: Updating hosted cluster with KubeAPIDNSName and KAS custom serving cert
util.go:2250: Getting custom kubeconfig client
util.go:247: Successfully waited for KAS custom kubeconfig to be published for HostedCluster e2e-clusters-n4kkb/create-cluster-dmxkj in 3.05s
util.go:264: Successfully waited for KAS custom kubeconfig secret to have data in 50ms
util.go:2255: waiting for the KubeAPIDNSName to be reconciled
util.go:247: Successfully waited for KAS custom kubeconfig to be published for HostedCluster e2e-clusters-n4kkb/create-cluster-dmxkj in 50ms
util.go:264: Successfully waited for KAS custom kubeconfig secret to have data in 50ms
util.go:2267: Finding the external name destination for the KAS Service
util.go:2292: service custom DNS name not found, using the control plane endpoint
util.go:2292: service custom DNS name not found, using the control plane endpoint
util.go:2292: service custom DNS name not found, using the control plane endpoint
util.go:2304: Creating a new KAS Service to be used by the external-dns deployment in CI with the custom DNS name api-custom-cert-create-cluster-dmxkj.aks-e2e.hypershift.azure.devcluster.openshift.com
util.go:2326: [2026-01-12T14:41:09Z] Waiting until the URL is resolvable: api-custom-cert-create-cluster-dmxkj.aks-e2e.hypershift.azure.devcluster.openshift.com
util.go:2326: [2026-01-12T14:41:19Z] Waiting until the URL is resolvable: api-custom-cert-create-cluster-dmxkj.aks-e2e.hypershift.azure.devcluster.openshift.com
util.go:2331: resolved the custom DNS name after 10.029719608s
util.go:2336: Waiting until the KAS Deployment is ready
util.go:2431: Successfully waited for the KAS custom kubeconfig secret to be deleted from HC Namespace in 5.075s
util.go:2447: Successfully waited for the KAS custom kubeconfig secret to be deleted from HCP Namespace in 50ms
util.go:2477: Successfully waited for the KAS custom kubeconfig status to be removed in 50ms
util.go:2510: Deleting custom certificate secret
util.go:2351: Checking CustomAdminKubeconfigStatus are present
util.go:2359: Checking CustomAdminKubeconfigs are present
util.go:2372: Checking CustomAdminKubeconfig reaches the KAS
util.go:2374: Using extended retry timeout for Azure DNS propagation
util.go:2390: Successfully verified custom kubeconfig can reach KAS
util.go:2396: Checking CustomAdminKubeconfig Infrastructure status is updated
util.go:2397: Successfully waited for a successful connection to the custom DNS guest API server in 50ms
util.go:2465: Checking CustomAdminKubeconfig are removed
util.go:2494: Checking CustomAdminKubeconfigStatus are removed
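The custom-DNS portion hinges on the "Waiting until the URL is resolvable" loop, which simply polls DNS until the external-dns record lands. A small sketch, assuming the standard library resolver and illustrative timings:

```go
// Sketch of the DNS-propagation wait for the custom KAS hostname; not util.go:2326.
package dnswait

import (
	"context"
	"fmt"
	"net"
	"time"

	"k8s.io/apimachinery/pkg/util/wait"
)

func waitUntilResolvable(ctx context.Context, hostname string) error {
	start := time.Now()
	err := wait.PollUntilContextTimeout(ctx, 10*time.Second, 10*time.Minute, true, func(ctx context.Context) (bool, error) {
		fmt.Printf("[%s] Waiting until the URL is resolvable: %s\n", time.Now().UTC().Format(time.RFC3339), hostname)
		if _, err := net.DefaultResolver.LookupHost(ctx, hostname); err != nil {
			return false, nil // keep polling through NXDOMAIN while the record propagates
		}
		return true, nil
	})
	if err == nil {
		fmt.Printf("resolved the custom DNS name after %s\n", time.Since(start))
	}
	return err
}
```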
util.go:3352: This test is only applicable for AWS platform
util.go:2097: Waiting for ovnkube-node DaemonSet to be ready with 2 nodes
util.go:2139: DaemonSet ovnkube-node ready: 2/2 pods
util.go:2147: ✓ ovnkube-node DaemonSet is ready
util.go:2097: Waiting for global-pull-secret-syncer DaemonSet to be ready with 2 nodes
util.go:2109: DaemonSet global-pull-secret-syncer status has not observed generation 2 yet (current 1)
util.go:2135: DaemonSet global-pull-secret-syncer not ready: 0/2 pods ready
util.go:2135: DaemonSet global-pull-secret-syncer not ready: 0/2 pods ready
util.go:2135: DaemonSet global-pull-secret-syncer not ready: 0/2 pods ready
util.go:2135: DaemonSet global-pull-secret-syncer not ready: 0/2 pods ready
util.go:2135: DaemonSet global-pull-secret-syncer not ready: 1/2 pods ready
util.go:2135: DaemonSet global-pull-secret-syncer not ready: 1/2 pods ready
util.go:2135: DaemonSet global-pull-secret-syncer not ready: 1/2 pods ready
util.go:2135: DaemonSet global-pull-secret-syncer not ready: 1/2 pods ready
util.go:2135: DaemonSet global-pull-secret-syncer not ready: 1/2 pods ready
util.go:2135: DaemonSet global-pull-secret-syncer not ready: 1/2 pods ready
util.go:2135: DaemonSet global-pull-secret-syncer not ready: 1/2 pods ready
util.go:2135: DaemonSet global-pull-secret-syncer not ready: 1/2 pods ready
util.go:2135: DaemonSet global-pull-secret-syncer not ready: 1/2 pods ready
util.go:2135: DaemonSet global-pull-secret-syncer not ready: 1/2 pods ready
util.go:2135: DaemonSet global-pull-secret-syncer not ready: 1/2 pods ready
util.go:2139: DaemonSet global-pull-secret-syncer ready: 2/2 pods
util.go:2147: ✓ global-pull-secret-syncer DaemonSet is ready
util.go:2097: Waiting for konnectivity-agent DaemonSet to be ready with 2 nodes
util.go:2135: DaemonSet konnectivity-agent not ready: 1/2 pods ready
util.go:2135: DaemonSet konnectivity-agent not ready: 1/2 pods ready
util.go:2135: DaemonSet konnectivity-agent not ready: 1/2 pods ready
util.go:2135: DaemonSet konnectivity-agent not ready: 1/2 pods ready
util.go:2135: DaemonSet konnectivity-agent not ready: 1/2 pods ready
util.go:2135: DaemonSet konnectivity-agent not ready: 1/2 pods ready
util.go:2135: DaemonSet konnectivity-agent not ready: 1/2 pods ready
util.go:2135: DaemonSet konnectivity-agent not ready: 1/2 pods ready
util.go:2135: DaemonSet konnectivity-agent not ready: 1/2 pods ready
util.go:2135: DaemonSet konnectivity-agent not ready: 1/2 pods ready
util.go:2139: DaemonSet konnectivity-agent ready: 2/2 pods
util.go:2147: ✓ konnectivity-agent DaemonSet is ready
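The DaemonSet waits above treat a DaemonSet as ready only once its status has caught up with the current generation and every desired pod is ready, which is why the "has not observed generation 2 yet" and "x/2 pods ready" lines appear first. A sketch of such a readiness poll (function name and cadence assumed, not util.go):

```go
// Sketch of a DaemonSet readiness wait keyed on observed generation and
// DesiredNumberScheduled, mirroring the log messages above.
package dswait

import (
	"context"
	"fmt"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

func waitForDaemonSetReady(ctx context.Context, client kubernetes.Interface, namespace, name string) error {
	return wait.PollUntilContextTimeout(ctx, 10*time.Second, 15*time.Minute, true, func(ctx context.Context) (bool, error) {
		ds, err := client.AppsV1().DaemonSets(namespace).Get(ctx, name, metav1.GetOptions{})
		if err != nil {
			return false, nil
		}
		if ds.Status.ObservedGeneration < ds.Generation {
			fmt.Printf("DaemonSet %s status has not observed generation %d yet (current %d)\n", name, ds.Generation, ds.Status.ObservedGeneration)
			return false, nil
		}
		if ds.Status.NumberReady < ds.Status.DesiredNumberScheduled {
			fmt.Printf("DaemonSet %s not ready: %d/%d pods ready\n", name, ds.Status.NumberReady, ds.Status.DesiredNumberScheduled)
			return false, nil
		}
		fmt.Printf("DaemonSet %s ready: %d/%d pods\n", name, ds.Status.NumberReady, ds.Status.DesiredNumberScheduled)
		return true, nil
	})
}
```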
util.go:3748: Creating a pod which uses the restricted image
util.go:3773: Attempt 1/3: Creating pod
util.go:3778: Successfully created pod global-pull-secret-fail-pod in namespace kube-system on attempt 1
util.go:3799: Created pod global-pull-secret-fail-pod in namespace kube-system
util.go:3825: Pod is in the desired state, deleting it now
util.go:3828: Deleted the pod
util.go:2097: Waiting for ovnkube-node DaemonSet to be ready with 2 nodes
util.go:2139: DaemonSet ovnkube-node ready: 2/2 pods
util.go:2147: ✓ ovnkube-node DaemonSet is ready
util.go:2097: Waiting for global-pull-secret-syncer DaemonSet to be ready with 2 nodes
util.go:2139: DaemonSet global-pull-secret-syncer ready: 2/2 pods
util.go:2147: ✓ global-pull-secret-syncer DaemonSet is ready
util.go:2097: Waiting for konnectivity-agent DaemonSet to be ready with 2 nodes
util.go:2139: DaemonSet konnectivity-agent ready: 2/2 pods
util.go:2147: ✓ konnectivity-agent DaemonSet is ready
util.go:3748: Creating a pod which uses the restricted image
util.go:3773: Attempt 1/3: Creating pod
util.go:3778: Successfully created pod global-pull-secret-success-pod in namespace kube-system on attempt 1
util.go:3799: Created pod global-pull-secret-success-pod in namespace kube-system
util.go:3816: Pod phase: Pending, shouldFail: false
util.go:3816: Pod phase: Pending, shouldFail: false
util.go:3816: Pod phase: Running, shouldFail: false
util.go:3820: Pod is running! Continuing...
util.go:3825: Pod is in the desired state, deleting it now
util.go:3828: Deleted the pod
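The restricted-image probe creates a pod in kube-system that can only pull its image through the merged global pull secret, then watches whether it reaches Running or sticks in an image pull error. A sketch with an illustrative helper (not util.go:3748):

```go
// Sketch of the restricted-image pod probe; pod name, image, and helper are
// illustrative assumptions.
package pullsecret

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

func runRestrictedImagePod(ctx context.Context, client kubernetes.Interface, name, image string, shouldFail bool) error {
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: name, Namespace: "kube-system"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers:    []corev1.Container{{Name: "probe", Image: image}},
		},
	}
	if _, err := client.CoreV1().Pods("kube-system").Create(ctx, pod, metav1.CreateOptions{}); err != nil {
		return err
	}
	defer client.CoreV1().Pods("kube-system").Delete(ctx, name, metav1.DeleteOptions{})

	return wait.PollUntilContextTimeout(ctx, 5*time.Second, 5*time.Minute, true, func(ctx context.Context) (bool, error) {
		got, err := client.CoreV1().Pods("kube-system").Get(ctx, name, metav1.GetOptions{})
		if err != nil {
			return false, nil
		}
		fmt.Printf("Pod phase: %s, shouldFail: %t\n", got.Status.Phase, shouldFail)
		if shouldFail {
			// Expect the pod to stay Pending with an image pull failure.
			for _, cs := range got.Status.ContainerStatuses {
				if w := cs.State.Waiting; w != nil && (w.Reason == "ErrImagePull" || w.Reason == "ImagePullBackOff") {
					return true, nil
				}
			}
			return false, nil
		}
		return got.Status.Phase == corev1.PodRunning, nil
	})
}
```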
util.go:2041: Waiting for GlobalPullSecretDaemonSet to process the deletion and stabilize all nodes
util.go:2095: Waiting for global-pull-secret-syncer DaemonSet to be ready (using DesiredNumberScheduled)
util.go:2124: DaemonSet global-pull-secret-syncer update in flight: 0/2 pods updated
util.go:2130: DaemonSet global-pull-secret-syncer ready: 2/2 pods ready, rollout complete
util.go:2147: ✓ global-pull-secret-syncer DaemonSet is ready
globalps.go:209: Creating kubelet config verifier DaemonSet
globalps.go:214: Waiting for OVN, GlobalPullSecret, Konnectivity and kubelet config verifier DaemonSets to be ready
util.go:2097: Waiting for ovnkube-node DaemonSet to be ready with 2 nodes
util.go:2135: DaemonSet ovnkube-node not ready: 0/2 pods ready
util.go:2139: DaemonSet ovnkube-node ready: 2/2 pods
util.go:2147: ✓ ovnkube-node DaemonSet is ready
util.go:2097: Waiting for global-pull-secret-syncer DaemonSet to be ready with 2 nodes
util.go:2139: DaemonSet global-pull-secret-syncer ready: 2/2 pods
util.go:2147: ✓ global-pull-secret-syncer DaemonSet is ready
util.go:2097: Waiting for konnectivity-agent DaemonSet to be ready with 2 nodes
util.go:2139: DaemonSet konnectivity-agent ready: 2/2 pods
util.go:2147: ✓ konnectivity-agent DaemonSet is ready
util.go:2095: Waiting for kubelet-config-verifier DaemonSet to be ready (using DesiredNumberScheduled)
util.go:2130: DaemonSet kubelet-config-verifier ready: 0/2 pods ready, rollout complete
util.go:2147: ✓ kubelet-config-verifier DaemonSet is ready
globalps.go:229: Cleaning up kubelet config verifier DaemonSet
util_ingress_operator_configuration.go:28: Verifying HostedCluster e2e-clusters-n4kkb/create-cluster-dmxkj has custom Ingress Operator endpointPublishingStrategy
util_ingress_operator_configuration.go:37: Validating IngressController in guest cluster reflects the custom endpointPublishingStrategy
util_ingress_operator_configuration.go:38: Successfully waited for IngressController default in guest cluster to reflect the custom endpointPublishingStrategy in 100ms
util.go:284: Successfully waited for kubeconfig to be published for HostedCluster e2e-clusters-n4kkb/create-cluster-dmxkj in 50ms
util.go:301: Successfully waited for kubeconfig secret to have data in 50ms
util.go:363: Successfully waited for a successful connection to the guest API server in 50ms
util.go:3224: Successfully waited for HostedCluster e2e-clusters-n4kkb/create-cluster-dmxkj to have valid Status.Payload in 125ms
util.go:1156: test only supported on AWS platform, saw Azure
util.go:2527: Checking that all ValidatingAdmissionPolicies are present
util.go:2553: Checking Denied KAS Requests for ValidatingAdmissionPolicies
util.go:2569: Checking ClusterOperator status modifications are allowed
util.go:3917: All 45 pods in namespace e2e-clusters-n4kkb-create-cluster-dmxkj have the expected RunAsUser UID 1006
util.go:284: Successfully waited for kubeconfig to be published for HostedCluster e2e-clusters-n4kkb/create-cluster-dmxkj in 50ms
util.go:301: Successfully waited for kubeconfig secret to have data in 50ms
util.go:363: Successfully waited for a successful connection to the guest API server in 50ms
util.go:1156: test only supported on AWS platform, saw Azure
util.go:3224: Successfully waited for HostedCluster e2e-clusters-n4kkb/create-cluster-dmxkj to have valid Status.Payload in 125ms
util.go:3917: All 45 pods in namespace e2e-clusters-n4kkb-create-cluster-dmxkj have the expected RunAsUser UID 1006
util.go:2553: Checking Denied KAS Requests for ValidatingAdmissionPolicies
util.go:2569: Checking ClusterOperator status modifications are allowed
util.go:2527: Checking that all ValidatingAdmissionPolicies are present
util.go:284: Successfully waited for kubeconfig to be published for HostedCluster e2e-clusters-n4kkb/create-cluster-dmxkj in 50ms
util.go:301: Successfully waited for kubeconfig secret to have data in 50ms
util.go:363: Successfully waited for a successful connection to the guest API server in 25ms
create_cluster_test.go:2532: fetching mgmt kubeconfig
util.go:284: Successfully waited for kubeconfig to be published for HostedCluster e2e-clusters-n4kkb/create-cluster-dmxkj in 50ms
util.go:301: Successfully waited for kubeconfig secret to have data in 50ms
util.go:284: Successfully waited for kubeconfig to be published for HostedCluster e2e-clusters-n4kkb/create-cluster-dmxkj in 100ms
util.go:301: Successfully waited for kubeconfig secret to have data in 50ms
util.go:363: Successfully waited for a successful connection to the guest API server in 225ms
util.go:1928: NodePool replicas: 2, Available nodes: 2
util.go:2021: Deleting the additional-pull-secret secret in the DataPlane
globalps.go:209: Creating kubelet config verifier DaemonSet
globalps.go:214: Waiting for OVN, GlobalPullSecret, Konnectivity and kubelet config verifier DaemonSets to be ready
util.go:2097: Waiting for ovnkube-node DaemonSet to be ready with 2 nodes
util.go:2135: DaemonSet ovnkube-node not ready: 0/2 pods ready
util.go:2139: DaemonSet ovnkube-node ready: 2/2 pods
util.go:2147: ✓ ovnkube-node DaemonSet is ready
util.go:2097: Waiting for global-pull-secret-syncer DaemonSet to be ready with 2 nodes
util.go:2139: DaemonSet global-pull-secret-syncer ready: 2/2 pods
util.go:2147: ✓ global-pull-secret-syncer DaemonSet is ready
util.go:2097: Waiting for konnectivity-agent DaemonSet to be ready with 2 nodes
util.go:2139: DaemonSet konnectivity-agent ready: 2/2 pods
util.go:2147: ✓ konnectivity-agent DaemonSet is ready
util.go:2095: Waiting for kubelet-config-verifier DaemonSet to be ready (using DesiredNumberScheduled)
util.go:2130: DaemonSet kubelet-config-verifier ready: 0/2 pods ready, rollout complete
util.go:2147: ✓ kubelet-config-verifier DaemonSet is ready
globalps.go:229: Cleaning up kubelet config verifier DaemonSet
util.go:3748: Creating a pod which uses the restricted image
util.go:3773: Attempt 1/3: Creating pod
util.go:3778: Successfully created pod global-pull-secret-fail-pod in namespace kube-system on attempt 1
util.go:3799: Created pod global-pull-secret-fail-pod in namespace kube-system
util.go:3825: Pod is in the desired state, deleting it now
util.go:3828: Deleted the pod
util.go:3748: Creating a pod which uses the restricted image
util.go:3773: Attempt 1/3: Creating pod
util.go:3778: Successfully created pod global-pull-secret-success-pod in namespace kube-system on attempt 1
util.go:3799: Created pod global-pull-secret-success-pod in namespace kube-system
util.go:3816: Pod phase: Pending, shouldFail: false
util.go:3816: Pod phase: Pending, shouldFail: false
util.go:3816: Pod phase: Running, shouldFail: false
util.go:3820: Pod is running! Continuing...
util.go:3825: Pod is in the desired state, deleting it now
util.go:3828: Deleted the pod
util.go:3352: This test is only applicable for AWS platform
util.go:169: failed to patch object create-cluster-dmxkj, will retry: HostedCluster.hypershift.openshift.io "create-cluster-dmxkj" is invalid: spec.capabilities: Invalid value: "object": Capabilities is immutable. Changes might result in unpredictable and disruptive behavior.
util.go:169: failed to patch object create-cluster-dmxkj, will retry: HostedCluster.hypershift.openshift.io "create-cluster-dmxkj" is invalid: [spec: Invalid value: "object": Services is immutable. Changes might result in unpredictable and disruptive behavior., spec.services[0].servicePublishingStrategy: Invalid value: "object": nodePort is required when type is NodePort, and forbidden otherwise, spec.services[0].servicePublishingStrategy: Invalid value: "object": only route is allowed when type is Route, and forbidden otherwise]
util.go:169: failed to patch object create-cluster-dmxkj, will retry: HostedCluster.hypershift.openshift.io "create-cluster-dmxkj" is invalid: spec.controllerAvailabilityPolicy: Invalid value: "string": ControllerAvailabilityPolicy is immutable
util_ingress_operator_configuration.go:28: Verifying HostedCluster e2e-clusters-n4kkb/create-cluster-dmxkj has custom Ingress Operator endpointPublishingStrategy
util_ingress_operator_configuration.go:37: Validating IngressController in guest cluster reflects the custom endpointPublishingStrategy
util_ingress_operator_configuration.go:38: Successfully waited for IngressController default in guest cluster to reflect the custom endpointPublishingStrategy in 100ms
util.go:2184: Using Azure-specific retry strategy for DNS propagation race condition
util.go:2193: Generating custom certificate with DNS name api-custom-cert-create-cluster-dmxkj.aks-e2e.hypershift.azure.devcluster.openshift.com
util.go:2198: Creating custom certificate secret
util.go:2214: Updating hosted cluster with KubeAPIDNSName and KAS custom serving cert
util.go:2250: Getting custom kubeconfig client
util.go:247: Successfully waited for KAS custom kubeconfig to be published for HostedCluster e2e-clusters-n4kkb/create-cluster-dmxkj in 3.05s
util.go:264: Successfully waited for KAS custom kubeconfig secret to have data in 50ms
util.go:2255: waiting for the KubeAPIDNSName to be reconciled
util.go:247: Successfully waited for KAS custom kubeconfig to be published for HostedCluster e2e-clusters-n4kkb/create-cluster-dmxkj in 50ms
util.go:264: Successfully waited for KAS custom kubeconfig secret to have data in 50ms
util.go:2267: Finding the external name destination for the KAS Service
util.go:2292: service custom DNS name not found, using the control plane endpoint
util.go:2292: service custom DNS name not found, using the control plane endpoint
util.go:2292: service custom DNS name not found, using the control plane endpoint
util.go:2304: Creating a new KAS Service to be used by the external-dns deployment in CI with the custom DNS name api-custom-cert-create-cluster-dmxkj.aks-e2e.hypershift.azure.devcluster.openshift.com
util.go:2326: [2026-01-12T14:41:09Z] Waiting until the URL is resolvable: api-custom-cert-create-cluster-dmxkj.aks-e2e.hypershift.azure.devcluster.openshift.com
util.go:2326: [2026-01-12T14:41:19Z] Waiting until the URL is resolvable: api-custom-cert-create-cluster-dmxkj.aks-e2e.hypershift.azure.devcluster.openshift.com
util.go:2331: resolved the custom DNS name after 10.029719608s
util.go:2336: Waiting until the KAS Deployment is ready
util.go:2431: Successfully waited for the KAS custom kubeconfig secret to be deleted from HC Namespace in 5.075s
util.go:2447: Successfully waited for the KAS custom kubeconfig secret to be deleted from HCP Namespace in 50ms
util.go:2477: Successfully waited for the KAS custom kubeconfig status to be removed in 50ms
util.go:2510: Deleting custom certificate secret
util.go:2359: Checking CustomAdminKubeconfigs are present
util.go:2396: Checking CustomAdminKubeconfig Infrastructure status is updated
util.go:2397: Successfully waited for a successful connection to the custom DNS guest API server in 50ms
util.go:2465: Checking CustomAdminKubeconfig are removed
util.go:2372: Checking CustomAdminKubeconfig reaches the KAS
util.go:2374: Using extended retry timeout for Azure DNS propagation
util.go:2390: Successfully verified custom kubeconfig can reach KAS
util.go:2351: Checking CustomAdminKubeconfigStatus are present
util.go:2494: Checking CustomAdminKubeconfigStatus are removed
util.go:2097: Waiting for ovnkube-node DaemonSet to be ready with 2 nodes
util.go:2139: DaemonSet ovnkube-node ready: 2/2 pods
util.go:2147: ✓ ovnkube-node DaemonSet is ready
util.go:2097: Waiting for global-pull-secret-syncer DaemonSet to be ready with 2 nodes
util.go:2109: DaemonSet global-pull-secret-syncer status has not observed generation 2 yet (current 1)
util.go:2135: DaemonSet global-pull-secret-syncer not ready: 0/2 pods ready
util.go:2135: DaemonSet global-pull-secret-syncer not ready: 0/2 pods ready
util.go:2135: DaemonSet global-pull-secret-syncer not ready: 0/2 pods ready
util.go:2135: DaemonSet global-pull-secret-syncer not ready: 0/2 pods ready
util.go:2135: DaemonSet global-pull-secret-syncer not ready: 1/2 pods ready
util.go:2135: DaemonSet global-pull-secret-syncer not ready: 1/2 pods ready
util.go:2135: DaemonSet global-pull-secret-syncer not ready: 1/2 pods ready
util.go:2135: DaemonSet global-pull-secret-syncer not ready: 1/2 pods ready
util.go:2135: DaemonSet global-pull-secret-syncer not ready: 1/2 pods ready
util.go:2135: DaemonSet global-pull-secret-syncer not ready: 1/2 pods ready
util.go:2135: DaemonSet global-pull-secret-syncer not ready: 1/2 pods ready
util.go:2135: DaemonSet global-pull-secret-syncer not ready: 1/2 pods ready
util.go:2135: DaemonSet global-pull-secret-syncer not ready: 1/2 pods ready
util.go:2135: DaemonSet global-pull-secret-syncer not ready: 1/2 pods ready
util.go:2135: DaemonSet global-pull-secret-syncer not ready: 1/2 pods ready
util.go:2139: DaemonSet global-pull-secret-syncer ready: 2/2 pods
util.go:2147: ✓ global-pull-secret-syncer DaemonSet is ready
util.go:2097: Waiting for konnectivity-agent DaemonSet to be ready with 2 nodes
util.go:2135: DaemonSet konnectivity-agent not ready: 1/2 pods ready
util.go:2135: DaemonSet konnectivity-agent not ready: 1/2 pods ready
util.go:2135: DaemonSet konnectivity-agent not ready: 1/2 pods ready
util.go:2135: DaemonSet konnectivity-agent not ready: 1/2 pods ready
util.go:2135: DaemonSet konnectivity-agent not ready: 1/2 pods ready
util.go:2135: DaemonSet konnectivity-agent not ready: 1/2 pods ready
util.go:2135: DaemonSet konnectivity-agent not ready: 1/2 pods ready
util.go:2135: DaemonSet konnectivity-agent not ready: 1/2 pods ready
util.go:2135: DaemonSet konnectivity-agent not ready: 1/2 pods ready
util.go:2135: DaemonSet konnectivity-agent not ready: 1/2 pods ready
util.go:2139: DaemonSet konnectivity-agent ready: 2/2 pods
util.go:2147: ✓ konnectivity-agent DaemonSet is ready
util.go:2097: Waiting for ovnkube-node DaemonSet to be ready with 2 nodes
util.go:2139: DaemonSet ovnkube-node ready: 2/2 pods
util.go:2147: ✓ ovnkube-node DaemonSet is ready
util.go:2097: Waiting for global-pull-secret-syncer DaemonSet to be ready with 2 nodes
util.go:2139: DaemonSet global-pull-secret-syncer ready: 2/2 pods
util.go:2147: ✓ global-pull-secret-syncer DaemonSet is ready
util.go:2097: Waiting for konnectivity-agent DaemonSet to be ready with 2 nodes
util.go:2139: DaemonSet konnectivity-agent ready: 2/2 pods
util.go:2147: ✓ konnectivity-agent DaemonSet is ready
util.go:2041: Waiting for GlobalPullSecretDaemonSet to process the deletion and stabilize all nodes
util.go:2095: Waiting for global-pull-secret-syncer DaemonSet to be ready (using DesiredNumberScheduled)
util.go:2124: DaemonSet global-pull-secret-syncer update in flight: 0/2 pods updated
util.go:2130: DaemonSet global-pull-secret-syncer ready: 2/2 pods ready, rollout complete
util.go:2147: ✓ global-pull-secret-syncer DaemonSet is ready
pki.go:76: loading certificate/key pair from disk for signer customer-break-glass, use $REGENERATE_PKI to generate new ones
control_plane_pki_operator.go:204: creating CSR "2st2d85ln26tlgu6w24qodyq0dhovmbpetc062ci8jk1" for signer "customer-break-glass", requesting client auth usages
control_plane_pki_operator.go:214: creating CSRA e2e-clusters-n4kkb-create-cluster-dmxkj/2st2d85ln26tlgu6w24qodyq0dhovmbpetc062ci8jk1 to trigger automatic approval of the CSR
control_plane_pki_operator.go:221: Successfully waited for CSR "2st2d85ln26tlgu6w24qodyq0dhovmbpetc062ci8jk1" to be approved and signed in 50ms
control_plane_pki_operator.go:133: validating that the client certificate provides the appropriate access
control_plane_pki_operator.go:119: amending the existing kubeconfig to use break-glass client certificate credentials
control_plane_pki_operator.go:136: issuing SSR to identify the subject we are given using the client certificate
control_plane_pki_operator.go:156: ensuring that the SSR identifies the client certificate as having system:masters power and correct username
pki.go:76: loading certificate/key pair from disk for signer customer-break-glass, use $REGENERATE_PKI to generate new ones
control_plane_pki_operator.go:168: creating invalid CSR "1qj2p0n64l4sbtjht8cc2wthgrzwmgnjgx8yx977k9hx" for signer "hypershift.openshift.io/e2e-clusters-n4kkb-create-cluster-dmxkj.customer-break-glass", requesting client auth usages
control_plane_pki_operator.go:178: creating CSRA e2e-clusters-n4kkb-create-cluster-dmxkj/1qj2p0n64l4sbtjht8cc2wthgrzwmgnjgx8yx977k9hx to trigger automatic approval of the CSR
control_plane_pki_operator.go:184: Successfully waited for waiting for CSR "1qj2p0n64l4sbtjht8cc2wthgrzwmgnjgx8yx977k9hx" to have invalid CN exposed in status in 50ms
pki.go:76: loading certificate/key pair from disk for signer customer-break-glass, use $REGENERATE_PKI to generate new ones
control_plane_pki_operator.go:256: creating CRR e2e-clusters-n4kkb-create-cluster-dmxkj/1whl85x67dob5qigyi2qesngc1ulnlb89n0xidbqtxoq to trigger signer certificate revocation
control_plane_pki_operator.go:263: Successfully waited for CRR e2e-clusters-n4kkb-create-cluster-dmxkj/1whl85x67dob5qigyi2qesngc1ulnlb89n0xidbqtxoq to complete in 2m12.05s
control_plane_pki_operator.go:276: creating a client using the a certificate from the revoked signer
control_plane_pki_operator.go:119: amending the existing kubeconfig to use break-glass client certificate credentials
control_plane_pki_operator.go:279: issuing SSR to confirm that we're not authorized to contact the server
control_plane_pki_operator.go:66: Grabbing customer break-glass credentials from client certificate secret e2e-clusters-n4kkb-create-cluster-dmxkj/customer-system-admin-client-cert-key
control_plane_pki_operator.go:133: validating that the client certificate provides the appropriate access
control_plane_pki_operator.go:119: amending the existing kubeconfig to use break-glass client certificate credentials
control_plane_pki_operator.go:136: issuing SSR to identify the subject we are given using the client certificate
control_plane_pki_operator.go:156: ensuring that the SSR identifies the client certificate as having system:masters power and correct username
control_plane_pki_operator.go:95: generating new break-glass credentials for more than one signer
pki.go:76: loading certificate/key pair from disk for signer customer-break-glass, use $REGENERATE_PKI to generate new ones
control_plane_pki_operator.go:204: creating CSR "21498v98vr7fbla055bt5qlkxq9wskl67rve5t815ugo" for signer "customer-break-glass", requesting client auth usages
control_plane_pki_operator.go:214: creating CSRA e2e-clusters-n4kkb-create-cluster-dmxkj/21498v98vr7fbla055bt5qlkxq9wskl67rve5t815ugo to trigger automatic approval of the CSR
control_plane_pki_operator.go:221: Successfully waited for CSR "21498v98vr7fbla055bt5qlkxq9wskl67rve5t815ugo" to be approved and signed in 50ms
control_plane_pki_operator.go:133: validating that the client certificate provides the appropriate access
control_plane_pki_operator.go:119: amending the existing kubeconfig to use break-glass client certificate credentials
control_plane_pki_operator.go:136: issuing SSR to identify the subject we are given using the client certificate
control_plane_pki_operator.go:156: ensuring that the SSR identifies the client certificate as having system:masters power and correct username
pki.go:76: loading certificate/key pair from disk for signer sre-break-glass, use $REGENERATE_PKI to generate new ones
control_plane_pki_operator.go:204: creating CSR "2c4yeaawts6gzerwmr2rqg9fvh0vcj7vtzrt3esmwp08" for signer "sre-break-glass", requesting client auth usages
control_plane_pki_operator.go:214: creating CSRA e2e-clusters-n4kkb-create-cluster-dmxkj/2c4yeaawts6gzerwmr2rqg9fvh0vcj7vtzrt3esmwp08 to trigger automatic approval of the CSR
control_plane_pki_operator.go:221: Successfully waited for CSR "2c4yeaawts6gzerwmr2rqg9fvh0vcj7vtzrt3esmwp08" to be approved and signed in 50ms
control_plane_pki_operator.go:133: validating that the client certificate provides the appropriate access
control_plane_pki_operator.go:119: amending the existing kubeconfig to use break-glass client certificate credentials
control_plane_pki_operator.go:136: issuing SSR to identify the subject we are given using the client certificate
control_plane_pki_operator.go:156: ensuring that the SSR identifies the client certificate as having system:masters power and correct username
control_plane_pki_operator.go:99: revoking the "customer-break-glass" signer
pki.go:76: loading certificate/key pair from disk for signer customer-break-glass, use $REGENERATE_PKI to generate new ones
control_plane_pki_operator.go:256: creating CRR e2e-clusters-n4kkb-create-cluster-dmxkj/21498v98vr7fbla055bt5qlkxq9wskl67rve5t815ugo to trigger signer certificate revocation
control_plane_pki_operator.go:263: Successfully waited for CRR e2e-clusters-n4kkb-create-cluster-dmxkj/21498v98vr7fbla055bt5qlkxq9wskl67rve5t815ugo to complete in 3m48.05s
control_plane_pki_operator.go:276: creating a client using the a certificate from the revoked signer
control_plane_pki_operator.go:119: amending the existing kubeconfig to use break-glass client certificate credentials
control_plane_pki_operator.go:279: issuing SSR to confirm that we're not authorized to contact the server
control_plane_pki_operator.go:102: ensuring the break-glass credentials from "sre-break-glass" signer still work
pki.go:76: loading certificate/key pair from disk for signer sre-break-glass, use $REGENERATE_PKI to generate new ones
control_plane_pki_operator.go:133: validating that the client certificate provides the appropriate access
control_plane_pki_operator.go:119: amending the existing kubeconfig to use break-glass client certificate credentials
control_plane_pki_operator.go:136: issuing SSR to identify the subject we are given using the client certificate
control_plane_pki_operator.go:156: ensuring that the SSR identifies the client certificate as having system:masters power and correct username
pki.go:76: loading certificate/key pair from disk for signer sre-break-glass, use $REGENERATE_PKI to generate new ones
control_plane_pki_operator.go:204: creating CSR "1jvnhuo33w86s5vsqjsa8hlnbzim7o2fuhzi4mxaluqw" for signer "sre-break-glass", requesting client auth usages
control_plane_pki_operator.go:214: creating CSRA e2e-clusters-n4kkb-create-cluster-dmxkj/1jvnhuo33w86s5vsqjsa8hlnbzim7o2fuhzi4mxaluqw to trigger automatic approval of the CSR
control_plane_pki_operator.go:221: Successfully waited for CSR "1jvnhuo33w86s5vsqjsa8hlnbzim7o2fuhzi4mxaluqw" to be approved and signed in 50ms
control_plane_pki_operator.go:133: validating that the client certificate provides the appropriate access
control_plane_pki_operator.go:119: amending the existing kubeconfig to use break-glass client certificate credentials
control_plane_pki_operator.go:136: issuing SSR to identify the subject we are given using the client certificate
control_plane_pki_operator.go:156: ensuring that the SSR identifies the client certificate as having system:masters power and correct username
pki.go:76: loading certificate/key pair from disk for signer sre-break-glass, use $REGENERATE_PKI to generate new ones
control_plane_pki_operator.go:168: creating invalid CSR "27sggz7jajkqzsu15znu49jtdtg3m86nwkyxv3f7nrsv" for signer "hypershift.openshift.io/e2e-clusters-n4kkb-create-cluster-dmxkj.sre-break-glass", requesting client auth usages
control_plane_pki_operator.go:178: creating CSRA e2e-clusters-n4kkb-create-cluster-dmxkj/27sggz7jajkqzsu15znu49jtdtg3m86nwkyxv3f7nrsv to trigger automatic approval of the CSR
control_plane_pki_operator.go:184: Successfully waited for CSR "27sggz7jajkqzsu15znu49jtdtg3m86nwkyxv3f7nrsv" to have invalid CN exposed in status in 50ms
pki.go:76: loading certificate/key pair from disk for signer sre-break-glass, use $REGENERATE_PKI to generate new ones
control_plane_pki_operator.go:256: creating CRR e2e-clusters-n4kkb-create-cluster-dmxkj/pips56tfatmvh1tuz1hny7rvlchiv594bcoihogrvsh to trigger signer certificate revocation
control_plane_pki_operator.go:263: Successfully waited for CRR e2e-clusters-n4kkb-create-cluster-dmxkj/pips56tfatmvh1tuz1hny7rvlchiv594bcoihogrvsh to complete in 2m9.05s
control_plane_pki_operator.go:276: creating a client using a certificate from the revoked signer
control_plane_pki_operator.go:119: amending the existing kubeconfig to use break-glass client certificate credentials
control_plane_pki_operator.go:279: issuing SSR to confirm that we're not authorized to contact the server
control_plane_pki_operator.go:66: Grabbing customer break-glass credentials from client certificate secret e2e-clusters-n4kkb-create-cluster-dmxkj/sre-system-admin-client-cert-key
control_plane_pki_operator.go:133: validating that the client certificate provides the appropriate access
control_plane_pki_operator.go:119: amending the existing kubeconfig to use break-glass client certificate credentials
control_plane_pki_operator.go:136: issuing SSR to identify the subject we are given using the client certificate
control_plane_pki_operator.go:156: ensuring that the SSR identifies the client certificate as having system:masters power and correct username
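
The credentials grabbed above come from a client certificate secret in the hosted control plane namespace. A minimal sketch of reading them with client-go, assuming the usual tls.crt/tls.key layout; the key names are an assumption about the secret's contents.

// Hypothetical sketch: read break-glass client credentials from a client
// certificate secret. The "tls.crt"/"tls.key" keys are assumed; the real
// secret layout may differ.
package example

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

func breakGlassCreds(ctx context.Context, client kubernetes.Interface, namespace, name string) (cert, key []byte, err error) {
	secret, err := client.CoreV1().Secrets(namespace).Get(ctx, name, metav1.GetOptions{})
	if err != nil {
		return nil, nil, err
	}
	cert, key = secret.Data["tls.crt"], secret.Data["tls.key"]
	if len(cert) == 0 || len(key) == 0 {
		return nil, nil, fmt.Errorf("secret %s/%s is missing tls.crt or tls.key", namespace, name)
	}
	return cert, key, nil
}
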
util.go:284: Successfully waited for kubeconfig to be published for HostedCluster e2e-clusters-n4kkb/create-cluster-dmxkj in 4m6.075s
util.go:301: Successfully waited for kubeconfig secret to have data in 50ms
eventually.go:104: Failed to get *v1.SelfSubjectReview: Post "https://api-create-cluster-dmxkj.aks-e2e.hypershift.azure.devcluster.openshift.com:443/apis/authentication.k8s.io/v1/selfsubjectreviews": net/http: TLS handshake timeout
util.go:363: Successfully waited for a successful connection to the guest API server in 10.2s
util.go:565: Successfully waited for 2 nodes to become ready in 14m6.075s
util.go:598: Successfully waited for HostedCluster e2e-clusters-n4kkb/create-cluster-dmxkj to rollout in 4m57.075s
util.go:2949: Successfully waited for HostedCluster e2e-clusters-n4kkb/create-cluster-dmxkj to have valid conditions in 50ms
util.go:284: Successfully waited for kubeconfig to be published for HostedCluster e2e-clusters-n4kkb/create-cluster-dmxkj in 50ms
util.go:301: Successfully waited for kubeconfig secret to have data in 50ms
util.go:284: Successfully waited for kubeconfig to be published for HostedCluster e2e-clusters-n4kkb/create-cluster-dmxkj in 50ms
util.go:301: Successfully waited for kubeconfig secret to have data in 50ms
util.go:565: Successfully waited for 2 nodes to become ready for NodePool e2e-clusters-n4kkb/create-cluster-dmxkj in 175ms
util.go:4095: Successfully validated configuration authentication status consistency across HCP, HC, and guest cluster
hypershift_framework.go:430: Successfully created hostedcluster e2e-clusters-8hxr4/custom-config-lrh89 in 2m24s
hypershift_framework.go:491: Destroyed cluster. Namespace: e2e-clusters-8hxr4, name: custom-config-lrh89
hypershift_framework.go:446: archiving /logs/artifacts/TestCreateClusterCustomConfig/hostedcluster-custom-config-lrh89 to /logs/artifacts/TestCreateClusterCustomConfig/hostedcluster.tar.gz
util.go:284: Successfully waited for kubeconfig to be published for HostedCluster e2e-clusters-8hxr4/custom-config-lrh89 in 3m27.05s
util.go:301: Successfully waited for kubeconfig secret to have data in 50ms
eventually.go:104: Failed to get *v1.SelfSubjectReview: Post "https://api-custom-config-lrh89.aks-e2e.hypershift.azure.devcluster.openshift.com:443/apis/authentication.k8s.io/v1/selfsubjectreviews": net/http: TLS handshake timeout
eventually.go:104: Failed to get *v1.SelfSubjectReview: Post "https://api-custom-config-lrh89.aks-e2e.hypershift.azure.devcluster.openshift.com:443/apis/authentication.k8s.io/v1/selfsubjectreviews": EOF
eventually.go:104: Failed to get *v1.SelfSubjectReview: Post "https://api-custom-config-lrh89.aks-e2e.hypershift.azure.devcluster.openshift.com:443/apis/authentication.k8s.io/v1/selfsubjectreviews": net/http: TLS handshake timeout
util.go:363: Successfully waited for a successful connection to the guest API server in 54.875s
util.go:565: Successfully waited for 2 nodes to become ready in 14m15.15s
util.go:598: Successfully waited for HostedCluster e2e-clusters-8hxr4/custom-config-lrh89 to rollout in 7m57.05s
util.go:2949: Successfully waited for HostedCluster e2e-clusters-8hxr4/custom-config-lrh89 to have valid conditions in 50ms
util.go:284: Successfully waited for kubeconfig to be published for HostedCluster e2e-clusters-8hxr4/custom-config-lrh89 in 50ms
util.go:301: Successfully waited for kubeconfig secret to have data in 50ms
util.go:363: Successfully waited for a successful connection to the guest API server in 50ms
hypershift_framework.go:430: Successfully created hostedcluster e2e-clusters-klhjp/node-pool-xkr5j in 2m12s
nodepool_test.go:150: tests only supported on platform KubeVirt
hypershift_framework.go:430: Successfully created hostedcluster e2e-clusters-mmjll/node-pool-rrw7w in 2m15s
util.go:284: Successfully waited for kubeconfig to be published for HostedCluster e2e-clusters-klhjp/node-pool-xkr5j in 3m36.05s
util.go:301: Successfully waited for kubeconfig secret to have data in 50ms
eventually.go:104: Failed to get *v1.SelfSubjectReview: Post "https://api-node-pool-xkr5j.aks-e2e.hypershift.azure.devcluster.openshift.com:443/apis/authentication.k8s.io/v1/selfsubjectreviews": net/http: TLS handshake timeout
util.go:363: Successfully waited for a successful connection to the guest API server in 44.425s
util.go:565: Successfully waited for 0 nodes to become ready in 100ms
util.go:2949: Successfully waited for HostedCluster e2e-clusters-klhjp/node-pool-xkr5j to have valid conditions in 12m30.05s
util.go:284: Successfully waited for kubeconfig to be published for HostedCluster e2e-clusters-mmjll/node-pool-rrw7w in 4m27.125s
util.go:301: Successfully waited for kubeconfig secret to have data in 50ms
eventually.go:104: Failed to get *v1.SelfSubjectReview: Post "https://api-node-pool-rrw7w.aks-e2e.hypershift.azure.devcluster.openshift.com:443/apis/authentication.k8s.io/v1/selfsubjectreviews": net/http: TLS handshake timeout
eventually.go:104: Failed to get *v1.SelfSubjectReview: Post "https://api-node-pool-rrw7w.aks-e2e.hypershift.azure.devcluster.openshift.com:443/apis/authentication.k8s.io/v1/selfsubjectreviews": EOF
util.go:363: Successfully waited for a successful connection to the guest API server in 17.45s
util.go:565: Successfully waited for 0 nodes to become ready in 75ms
util.go:2949: Successfully waited for HostedCluster e2e-clusters-mmjll/node-pool-rrw7w to have valid conditions in 12m6.075s
util.go:565: Successfully waited for 0 nodes to become ready for NodePool e2e-clusters-klhjp/node-pool-xkr5j in 175ms
util.go:284: Successfully waited for kubeconfig to be published for HostedCluster e2e-clusters-klhjp/node-pool-xkr5j in 50ms
util.go:301: Successfully waited for kubeconfig secret to have data in 50ms
util.go:565: Successfully waited for 0 nodes to become ready for NodePool e2e-clusters-mmjll/node-pool-rrw7w in 425ms
util.go:284: Successfully waited for kubeconfig to be published for HostedCluster e2e-clusters-mmjll/node-pool-rrw7w in 75ms
util.go:301: Successfully waited for kubeconfig secret to have data in 75ms
util.go:4095: Successfully validated configuration authentication status consistency across HCP, HC, and guest cluster
util.go:284: Successfully waited for kubeconfig to be published for HostedCluster e2e-clusters-klhjp/node-pool-xkr5j in 50ms
util.go:301: Successfully waited for kubeconfig secret to have data in 50ms
util.go:363: Successfully waited for a successful connection to the guest API server in 25ms
nodepool_kms_root_volume_test.go:39: test only supported on platform AWS
nodepool_autorepair_test.go:42: test only supported on platform AWS
nodepool_machineconfig_test.go:54: Starting test NodePoolMachineconfigRolloutTest
nodepool_nto_machineconfig_test.go:67: Starting test NTOMachineConfigRolloutTest
nodepool_nto_machineconfig_test.go:67: Starting test NTOMachineConfigRolloutTest
nodepool_upgrade_test.go:99: starting test NodePoolUpgradeTest
util.go:565: Successfully waited for 1 nodes to become ready for NodePool e2e-clusters-klhjp/node-pool-xkr5j-test-replaceupgrade in 17m36.1s
nodepool_test.go:395: Successfully waited for NodePool e2e-clusters-klhjp/node-pool-xkr5j-test-replaceupgrade to have correct status in 50ms
nodepool_upgrade_test.go:177: Validating all Nodes have the synced labels and taints
nodepool_upgrade_test.go:180: Successfully waited for NodePool e2e-clusters-klhjp/node-pool-xkr5j-test-replaceupgrade to have version 4.22.0-0.ci-2026-01-12-010229 in 50ms
nodepool_upgrade_test.go:197: Updating NodePool image. Image: registry.build01.ci.openshift.org/ci-op-p0lv2gxr/release@sha256:03e81c806230ad0b09e484ef1bcbd0c6e9243a1b040e172d08bf314be0d79380
nodepool_upgrade_test.go:204: Successfully waited for NodePool e2e-clusters-klhjp/node-pool-xkr5j-test-replaceupgrade to start the upgrade in 3.05s
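
The "Updating NodePool image" step points the NodePool at the target release payload so the replace or in-place upgrade can begin. A minimal sketch using the dynamic client, assuming the HyperShift NodePool resource (hypershift.openshift.io/v1beta1, nodepools) exposes the release image at spec.release.image; the group/version and field path are assumptions, not taken from the test code.

// Hypothetical sketch: bump a NodePool's release image, roughly what the
// "Updating NodePool image" step does. The GVR and the spec.release.image
// field path are assumptions about the HyperShift NodePool API.
package example

import (
	"context"
	"encoding/json"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/runtime/schema"
	"k8s.io/apimachinery/pkg/types"
	"k8s.io/client-go/dynamic"
)

func updateNodePoolImage(ctx context.Context, dc dynamic.Interface, namespace, name, image string) error {
	gvr := schema.GroupVersionResource{Group: "hypershift.openshift.io", Version: "v1beta1", Resource: "nodepools"}
	patch, err := json.Marshal(map[string]any{
		"spec": map[string]any{
			"release": map[string]any{"image": image},
		},
	})
	if err != nil {
		return err
	}
	_, err = dc.Resource(gvr).Namespace(namespace).Patch(ctx, name, types.MergePatchType, patch, metav1.PatchOptions{})
	return err
}
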
nodepool_upgrade_test.go:99: starting test NodePoolUpgradeTest
util.go:565: Successfully waited for 1 nodes to become ready for NodePool e2e-clusters-klhjp/node-pool-xkr5j-test-inplaceupgrade in 10m42.125s
nodepool_test.go:395: Successfully waited for NodePool e2e-clusters-klhjp/node-pool-xkr5j-test-inplaceupgrade to have correct status in 50ms
nodepool_upgrade_test.go:177: Validating all Nodes have the synced labels and taints
nodepool_upgrade_test.go:180: Successfully waited for NodePool e2e-clusters-klhjp/node-pool-xkr5j-test-inplaceupgrade to have version 4.22.0-0.ci-2026-01-12-010229 in 50ms
nodepool_upgrade_test.go:197: Updating NodePool image. Image: registry.build01.ci.openshift.org/ci-op-p0lv2gxr/release@sha256:03e81c806230ad0b09e484ef1bcbd0c6e9243a1b040e172d08bf314be0d79380
nodepool_upgrade_test.go:204: Successfully waited for NodePool e2e-clusters-klhjp/node-pool-xkr5j-test-inplaceupgrade to start the upgrade in 3.05s
nodepool_kv_cache_image_test.go:42: test only supported on platform KubeVirt
nodepool_day2_tags_test.go:43: test only supported on platform AWS
nodepool_kv_qos_guaranteed_test.go:43: test only supported on platform KubeVirt
nodepool_kv_jsonpatch_test.go:42: test only supported on platform KubeVirt
nodepool_kv_nodeselector_test.go:48: test only supported on platform KubeVirt
nodepool_kv_multinet_test.go:36: test only supported on platform KubeVirt
nodepool_osp_advanced_test.go:53: Starting test OpenStackAdvancedTest
nodepool_osp_advanced_test.go:56: test only supported on platform OpenStack
nodepool_nto_performanceprofile_test.go:59: Starting test NTOPerformanceProfileTest
nodepool_prev_release_test.go:33: Starting NodePoolPrevReleaseCreateTest.
nodepool_prev_release_test.go:33: Starting NodePoolPrevReleaseCreateTest.
nodepool_prev_release_test.go:33: Starting NodePoolPrevReleaseCreateTest.
util.go:565: Successfully waited for 1 nodes to become ready for NodePool e2e-clusters-klhjp/node-pool-xkr5j-x4f7r in 13m18.075s
nodepool_test.go:395: Successfully waited for NodePool e2e-clusters-klhjp/node-pool-xkr5j-x4f7r to have correct status in 50ms
nodepool_prev_release_test.go:57: NodePoolPrevReleaseCreateTest tests the creation of a NodePool with a previous OCP release.
nodepool_prev_release_test.go:59: Validating all Nodes have the synced labels and taints
nodepool_test.go:395: Successfully waited for NodePool e2e-clusters-klhjp/node-pool-xkr5j-x4f7r to have correct status in 50ms
nodepool_prev_release_test.go:33: Starting NodePoolPrevReleaseCreateTest.
nodepool_test.go:348: NodePool version is outside supported skew, validating condition only (skipping node readiness check)
nodepool_test.go:395: Successfully waited for NodePool e2e-clusters-klhjp/node-pool-xkr5j-fnjmn to have correct status in 6.075s
nodepool_mirrorconfigs_test.go:60: Starting test MirrorConfigsTest
util.go:565: Successfully waited for 1 nodes to become ready for NodePool e2e-clusters-klhjp/node-pool-xkr5j-test-mirrorconfigs in 17m30.15s
nodepool_test.go:395: Successfully waited for NodePool e2e-clusters-klhjp/node-pool-xkr5j-test-mirrorconfigs to have correct status in 50ms
nodepool_mirrorconfigs_test.go:81: Entering MirrorConfigs test
nodepool_mirrorconfigs_test.go:111: Hosted control plane namespace is e2e-clusters-klhjp-node-pool-xkr5j
nodepool_mirrorconfigs_test.go:113: Successfully waited for kubeletConfig to be mirrored and present in the hosted cluster in 3.05s
nodepool_mirrorconfigs_test.go:157: Deleting KubeletConfig configmap reference from nodepool ...
nodepool_mirrorconfigs_test.go:163: Successfully waited for KubeletConfig configmap to be deleted in 3.175s
nodepool_mirrorconfigs_test.go:101: Exiting MirrorConfigs test: OK
nodepool_test.go:395: Successfully waited for NodePool e2e-clusters-klhjp/node-pool-xkr5j-test-mirrorconfigs to have correct status in 50ms
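
The MirrorConfigs test checks that a KubeletConfig supplied to the NodePool is mirrored into the hosted cluster. The usual shape is a ConfigMap embedding the KubeletConfig manifest, referenced from the NodePool's config list; a minimal sketch of such a ConfigMap follows, with the "config" data key, the manifest contents, and all names being illustrative assumptions rather than the test's fixtures.

// Hypothetical sketch: a ConfigMap embedding a KubeletConfig manifest of the kind
// the MirrorConfigs test expects to see mirrored into the hosted cluster. The
// "config" data key and the object names are assumptions.
package example

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

const kubeletConfigManifest = `apiVersion: machineconfiguration.openshift.io/v1
kind: KubeletConfig
metadata:
  name: example-kubelet-config
spec:
  kubeletConfig:
    maxPods: 250
`

func kubeletConfigConfigMap(namespace string) *corev1.ConfigMap {
	return &corev1.ConfigMap{
		ObjectMeta: metav1.ObjectMeta{Name: "example-kubelet-config", Namespace: namespace},
		Data:       map[string]string{"config": kubeletConfigManifest},
	}
}
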
nodepool_imagetype_test.go:45: test is only supported for AWS platform
util.go:4095: Successfully validated configuration authentication status consistency across HCP, HC, and guest cluster
util.go:284: Successfully waited for kubeconfig to be published for HostedCluster e2e-clusters-mmjll/node-pool-rrw7w in 50ms
util.go:301: Successfully waited for kubeconfig secret to have data in 50ms
util.go:363: Successfully waited for a successful connection to the guest API server in 25ms
nodepool_additionalTrustBundlePropagation_test.go:38: Starting AdditionalTrustBundlePropagationTest.
util.go:565: Successfully waited for 1 nodes to become ready for NodePool e2e-clusters-mmjll/node-pool-rrw7w-test-additional-trust-bundle-propagation in 9m0.05s
nodepool_test.go:395: Successfully waited for NodePool e2e-clusters-mmjll/node-pool-rrw7w-test-additional-trust-bundle-propagation to have correct status in 50ms
nodepool_additionalTrustBundlePropagation_test.go:72: Updating hosted cluster with additional trust bundle. Bundle: additional-trust-bundle
nodepool_additionalTrustBundlePropagation_test.go:80: Successfully waited for NodePool e2e-clusters-mmjll/node-pool-rrw7w-test-additional-trust-bundle-propagation to begin updating in 10.05s
hypershift_framework.go:430: Successfully created hostedcluster e2e-clusters-klhjp/node-pool-xkr5j in 2m12s
util.go:284: Successfully waited for kubeconfig to be published for HostedCluster e2e-clusters-klhjp/node-pool-xkr5j in 50ms
util.go:301: Successfully waited for kubeconfig secret to have data in 50ms
util.go:363: Successfully waited for a successful connection to the guest API server in 25ms
nodepool_kv_cache_image_test.go:42: test only supported on platform KubeVirt
nodepool_kv_jsonpatch_test.go:42: test only supported on platform KubeVirt
nodepool_kv_multinet_test.go:36: test only supported on platform KubeVirt
nodepool_kv_nodeselector_test.go:48: test only supported on platform KubeVirt
nodepool_kv_qos_guaranteed_test.go:43: test only supported on platform KubeVirt
nodepool_osp_advanced_test.go:53: Starting test OpenStackAdvancedTest
nodepool_osp_advanced_test.go:56: test only supported on platform OpenStack
nodepool_imagetype_test.go:45: test is only supported for AWS platform
nodepool_kms_root_volume_test.go:39: test only supported on platform AWS
nodepool_mirrorconfigs_test.go:60: Starting test MirrorConfigsTest
util.go:565: Successfully waited for 1 nodes to become ready for NodePool e2e-clusters-klhjp/node-pool-xkr5j-test-mirrorconfigs in 17m30.15s
nodepool_test.go:395: Successfully waited for NodePool e2e-clusters-klhjp/node-pool-xkr5j-test-mirrorconfigs to have correct status in 50ms
nodepool_mirrorconfigs_test.go:81: Entering MirrorConfigs test
nodepool_mirrorconfigs_test.go:111: Hosted control plane namespace is e2e-clusters-klhjp-node-pool-xkr5j
nodepool_mirrorconfigs_test.go:113: Successfully waited for kubeletConfig to be mirrored and present in the hosted cluster in 3.05s
nodepool_mirrorconfigs_test.go:157: Deleting KubeletConfig configmap reference from nodepool ...
nodepool_mirrorconfigs_test.go:163: Successfully waited for KubeletConfig configmap to be deleted in 3.175s
nodepool_mirrorconfigs_test.go:101: Exiting MirrorConfigs test: OK
nodepool_test.go:395: Successfully waited for NodePool e2e-clusters-klhjp/node-pool-xkr5j-test-mirrorconfigs to have correct status in 50ms
nodepool_nto_machineconfig_test.go:67: Starting test NTOMachineConfigRolloutTest
nodepool_nto_machineconfig_test.go:67: Starting test NTOMachineConfigRolloutTest
nodepool_nto_performanceprofile_test.go:59: Starting test NTOPerformanceProfileTest
nodepool_autorepair_test.go:42: test only supported on platform AWS
nodepool_day2_tags_test.go:43: test only supported on platform AWS
nodepool_upgrade_test.go:99: starting test NodePoolUpgradeTest
util.go:565: Successfully waited for 1 nodes to become ready for NodePool e2e-clusters-klhjp/node-pool-xkr5j-test-inplaceupgrade in 10m42.125s
nodepool_test.go:395: Successfully waited for NodePool e2e-clusters-klhjp/node-pool-xkr5j-test-inplaceupgrade to have correct status in 50ms
nodepool_upgrade_test.go:177: Validating all Nodes have the synced labels and taints
nodepool_upgrade_test.go:180: Successfully waited for NodePool e2e-clusters-klhjp/node-pool-xkr5j-test-inplaceupgrade to have version 4.22.0-0.ci-2026-01-12-010229 in 50ms
nodepool_upgrade_test.go:197: Updating NodePool image. Image: registry.build01.ci.openshift.org/ci-op-p0lv2gxr/release@sha256:03e81c806230ad0b09e484ef1bcbd0c6e9243a1b040e172d08bf314be0d79380
nodepool_upgrade_test.go:204: Successfully waited for NodePool e2e-clusters-klhjp/node-pool-xkr5j-test-inplaceupgrade to start the upgrade in 3.05s
nodepool_prev_release_test.go:33: Starting NodePoolPrevReleaseCreateTest.
nodepool_prev_release_test.go:33: Starting NodePoolPrevReleaseCreateTest.
nodepool_prev_release_test.go:33: Starting NodePoolPrevReleaseCreateTest.
util.go:565: Successfully waited for 1 nodes to become ready for NodePool e2e-clusters-klhjp/node-pool-xkr5j-x4f7r in 13m18.075s
nodepool_test.go:395: Successfully waited for NodePool e2e-clusters-klhjp/node-pool-xkr5j-x4f7r to have correct status in 50ms
nodepool_prev_release_test.go:57: NodePoolPrevReleaseCreateTest tests the creation of a NodePool with a previous OCP release.
nodepool_prev_release_test.go:59: Validating all Nodes have the synced labels and taints
nodepool_test.go:395: Successfully waited for NodePool e2e-clusters-klhjp/node-pool-xkr5j-x4f7r to have correct status in 50ms
nodepool_prev_release_test.go:33: Starting NodePoolPrevReleaseCreateTest.
nodepool_test.go:348: NodePool version is outside supported skew, validating condition only (skipping node readiness check)
nodepool_test.go:395: Successfully waited for NodePool e2e-clusters-klhjp/node-pool-xkr5j-fnjmn to have correct status in 6.075s
nodepool_upgrade_test.go:99: starting test NodePoolUpgradeTest
util.go:565: Successfully waited for 1 nodes to become ready for NodePool e2e-clusters-klhjp/node-pool-xkr5j-test-replaceupgrade in 17m36.1s
nodepool_test.go:395: Successfully waited for NodePool e2e-clusters-klhjp/node-pool-xkr5j-test-replaceupgrade to have correct status in 50ms
nodepool_upgrade_test.go:177: Validating all Nodes have the synced labels and taints
nodepool_upgrade_test.go:180: Successfully waited for NodePool e2e-clusters-klhjp/node-pool-xkr5j-test-replaceupgrade to have version 4.22.0-0.ci-2026-01-12-010229 in 50ms
nodepool_upgrade_test.go:197: Updating NodePool image. Image: registry.build01.ci.openshift.org/ci-op-p0lv2gxr/release@sha256:03e81c806230ad0b09e484ef1bcbd0c6e9243a1b040e172d08bf314be0d79380
nodepool_upgrade_test.go:204: Successfully waited for NodePool e2e-clusters-klhjp/node-pool-xkr5j-test-replaceupgrade to start the upgrade in 3.05s
nodepool_machineconfig_test.go:54: Starting test NodePoolMachineconfigRolloutTest
util.go:284: Successfully waited for kubeconfig to be published for HostedCluster e2e-clusters-klhjp/node-pool-xkr5j in 3m36.05s
util.go:301: Successfully waited for kubeconfig secret to have data in 50ms
eventually.go:104: Failed to get *v1.SelfSubjectReview: Post "https://api-node-pool-xkr5j.aks-e2e.hypershift.azure.devcluster.openshift.com:443/apis/authentication.k8s.io/v1/selfsubjectreviews": net/http: TLS handshake timeout
util.go:363: Successfully waited for a successful connection to the guest API server in 44.425s
util.go:565: Successfully waited for 0 nodes to become ready in 100ms
util.go:2949: Successfully waited for HostedCluster e2e-clusters-klhjp/node-pool-xkr5j to have valid conditions in 12m30.05s
util.go:284: Successfully waited for kubeconfig to be published for HostedCluster e2e-clusters-klhjp/node-pool-xkr5j in 50ms
util.go:301: Successfully waited for kubeconfig secret to have data in 50ms
util.go:565: Successfully waited for 0 nodes to become ready for NodePool e2e-clusters-klhjp/node-pool-xkr5j in 175ms
util.go:4095: Successfully validated configuration authentication status consistency across HCP, HC, and guest cluster
nodepool_test.go:150: tests only supported on platform KubeVirt
hypershift_framework.go:430: Successfully created hostedcluster e2e-clusters-mmjll/node-pool-rrw7w in 2m15s
util.go:284: Successfully waited for kubeconfig to be published for HostedCluster e2e-clusters-mmjll/node-pool-rrw7w in 50ms
util.go:301: Successfully waited for kubeconfig secret to have data in 50ms
util.go:363: Successfully waited for a successful connection to the guest API server in 25ms
nodepool_additionalTrustBundlePropagation_test.go:38: Starting AdditionalTrustBundlePropagationTest.
util.go:565: Successfully waited for 1 nodes to become ready for NodePool e2e-clusters-mmjll/node-pool-rrw7w-test-additional-trust-bundle-propagation in 9m0.05s
nodepool_test.go:395: Successfully waited for NodePool e2e-clusters-mmjll/node-pool-rrw7w-test-additional-trust-bundle-propagation to have correct status in 50ms
nodepool_additionalTrustBundlePropagation_test.go:72: Updating hosted cluster with additional trust bundle. Bundle: additional-trust-bundle
nodepool_additionalTrustBundlePropagation_test.go:80: Successfully waited for NodePool e2e-clusters-mmjll/node-pool-rrw7w-test-additional-trust-bundle-propagation to begin updating in 10.05s
util.go:284: Successfully waited for kubeconfig to be published for HostedCluster e2e-clusters-mmjll/node-pool-rrw7w in 4m27.125s
util.go:301: Successfully waited for kubeconfig secret to have data in 50ms
eventually.go:104: Failed to get *v1.SelfSubjectReview: Post "https://api-node-pool-rrw7w.aks-e2e.hypershift.azure.devcluster.openshift.com:443/apis/authentication.k8s.io/v1/selfsubjectreviews": net/http: TLS handshake timeout
eventually.go:104: Failed to get *v1.SelfSubjectReview: Post "https://api-node-pool-rrw7w.aks-e2e.hypershift.azure.devcluster.openshift.com:443/apis/authentication.k8s.io/v1/selfsubjectreviews": EOF
util.go:363: Successfully waited for a successful connection to the guest API server in 17.45s
util.go:565: Successfully waited for 0 nodes to become ready in 75ms
util.go:2949: Successfully waited for HostedCluster e2e-clusters-mmjll/node-pool-rrw7w to have valid conditions in 12m6.075s
util.go:284: Successfully waited for kubeconfig to be published for HostedCluster e2e-clusters-mmjll/node-pool-rrw7w in 75ms
util.go:301: Successfully waited for kubeconfig secret to have data in 75ms
util.go:565: Successfully waited for 0 nodes to become ready for NodePool e2e-clusters-mmjll/node-pool-rrw7w in 425ms
util.go:4095: Successfully validated configuration authentication status consistency across HCP, HC, and guest cluster
control_plane_upgrade_test.go:25: Starting control plane upgrade test. FromImage: registry.build01.ci.openshift.org/ci-op-p0lv2gxr/release@sha256:b61cb6cdac8ceb723c4c2c0974b20d114626e7ae4bd277be2bd5398b3a2886ec, toImage: registry.build01.ci.openshift.org/ci-op-p0lv2gxr/release@sha256:03e81c806230ad0b09e484ef1bcbd0c6e9243a1b040e172d08bf314be0d79380
hypershift_framework.go:430: Successfully created hostedcluster e2e-clusters-8447f/control-plane-upgrade-zftpd in 2m28s
util.go:284: Successfully waited for kubeconfig to be published for HostedCluster e2e-clusters-8447f/control-plane-upgrade-zftpd in 5m3.05s
util.go:301: Successfully waited for kubeconfig secret to have data in 50ms
eventually.go:104: Failed to get *v1.SelfSubjectReview: Post "https://api-control-plane-upgrade-zftpd.aks-e2e.hypershift.azure.devcluster.openshift.com:443/apis/authentication.k8s.io/v1/selfsubjectreviews": net/http: TLS handshake timeout
eventually.go:104: Failed to get *v1.SelfSubjectReview: Post "https://api-control-plane-upgrade-zftpd.aks-e2e.hypershift.azure.devcluster.openshift.com:443/apis/authentication.k8s.io/v1/selfsubjectreviews": EOF
eventually.go:104: Failed to get *v1.SelfSubjectReview: Post "https://api-control-plane-upgrade-zftpd.aks-e2e.hypershift.azure.devcluster.openshift.com:443/apis/authentication.k8s.io/v1/selfsubjectreviews": net/http: TLS handshake timeout
eventually.go:104: Failed to get *v1.SelfSubjectReview: Post "https://api-control-plane-upgrade-zftpd.aks-e2e.hypershift.azure.devcluster.openshift.com:443/apis/authentication.k8s.io/v1/selfsubjectreviews": EOF
util.go:363: Successfully waited for a successful connection to the guest API server in 40.95s
util.go:565: Successfully waited for 2 nodes to become ready in 11m48.1s
util.go:598: Successfully waited for HostedCluster e2e-clusters-8447f/control-plane-upgrade-zftpd to rollout in 8m42.075s
util.go:2949: Successfully waited for HostedCluster e2e-clusters-8447f/control-plane-upgrade-zftpd to have valid conditions in 50ms
util.go:565: Successfully waited for 2 nodes to become ready for NodePool e2e-clusters-8447f/control-plane-upgrade-zftpd in 150ms
util.go:284: Successfully waited for kubeconfig to be published for HostedCluster e2e-clusters-8447f/control-plane-upgrade-zftpd in 50ms
util.go:301: Successfully waited for kubeconfig secret to have data in 50ms
util.go:284: Successfully waited for kubeconfig to be published for HostedCluster e2e-clusters-8447f/control-plane-upgrade-zftpd in 75ms
util.go:301: Successfully waited for kubeconfig secret to have data in 50ms
util.go:4095: Successfully validated configuration authentication status consistency across HCP, HC, and guest cluster
util.go:284: Successfully waited for kubeconfig to be published for HostedCluster e2e-clusters-8447f/control-plane-upgrade-zftpd in 50ms
util.go:301: Successfully waited for kubeconfig secret to have data in 50ms
util.go:363: Successfully waited for a successful connection to the guest API server in 25ms
control_plane_upgrade_test.go:52: Updating cluster image. Image: registry.build01.ci.openshift.org/ci-op-p0lv2gxr/release@sha256:03e81c806230ad0b09e484ef1bcbd0c6e9243a1b040e172d08bf314be0d79380
util.go:284: Successfully waited for kubeconfig to be published for HostedCluster e2e-clusters-8447f/control-plane-upgrade-zftpd in 50ms
util.go:301: Successfully waited for kubeconfig secret to have data in 50ms
util.go:363: Successfully waited for a successful connection to the guest API server in 25ms
control_plane_upgrade_test.go:52: Updating cluster image. Image: registry.build01.ci.openshift.org/ci-op-p0lv2gxr/release@sha256:03e81c806230ad0b09e484ef1bcbd0c6e9243a1b040e172d08bf314be0d79380
util.go:284: Successfully waited for kubeconfig to be published for HostedCluster e2e-clusters-8447f/control-plane-upgrade-zftpd in 5m3.05s
util.go:301: Successfully waited for kubeconfig secret to have data in 50ms
eventually.go:104: Failed to get *v1.SelfSubjectReview: Post "https://api-control-plane-upgrade-zftpd.aks-e2e.hypershift.azure.devcluster.openshift.com:443/apis/authentication.k8s.io/v1/selfsubjectreviews": net/http: TLS handshake timeout
eventually.go:104: Failed to get *v1.SelfSubjectReview: Post "https://api-control-plane-upgrade-zftpd.aks-e2e.hypershift.azure.devcluster.openshift.com:443/apis/authentication.k8s.io/v1/selfsubjectreviews": EOF
eventually.go:104: Failed to get *v1.SelfSubjectReview: Post "https://api-control-plane-upgrade-zftpd.aks-e2e.hypershift.azure.devcluster.openshift.com:443/apis/authentication.k8s.io/v1/selfsubjectreviews": net/http: TLS handshake timeout
eventually.go:104: Failed to get *v1.SelfSubjectReview: Post "https://api-control-plane-upgrade-zftpd.aks-e2e.hypershift.azure.devcluster.openshift.com:443/apis/authentication.k8s.io/v1/selfsubjectreviews": EOF
util.go:363: Successfully waited for a successful connection to the guest API server in 40.95s
util.go:565: Successfully waited for 2 nodes to become ready in 11m48.1s
util.go:598: Successfully waited for HostedCluster e2e-clusters-8447f/control-plane-upgrade-zftpd to rollout in 8m42.075s
util.go:2949: Successfully waited for HostedCluster e2e-clusters-8447f/control-plane-upgrade-zftpd to have valid conditions in 50ms
util.go:284: Successfully waited for kubeconfig to be published for HostedCluster e2e-clusters-8447f/control-plane-upgrade-zftpd in 50ms
util.go:301: Successfully waited for kubeconfig secret to have data in 50ms
util.go:284: Successfully waited for kubeconfig to be published for HostedCluster e2e-clusters-8447f/control-plane-upgrade-zftpd in 75ms
util.go:301: Successfully waited for kubeconfig secret to have data in 50ms
util.go:565: Successfully waited for 2 nodes to become ready for NodePool e2e-clusters-8447f/control-plane-upgrade-zftpd in 150ms
util.go:4095: Successfully validated configuration authentication status consistency across HCP, HC, and guest cluster