PR #6377 - 09-15 22:17

Job: hypershift
FAILURE

Test Summary

Total Tests: 232
Passed: 55
Failed: 162
Skipped: 15

Failed Tests

TestAutoscaling
0s
hypershift_framework.go:396: Successfully created hostedcluster e2e-clusters-7tljq/autoscaling-f7hfv in 3m14s
util.go:261: Successfully waited for kubeconfig to be published for HostedCluster e2e-clusters-7tljq/autoscaling-f7hfv in 3m21.05s
util.go:278: Successfully waited for kubeconfig secret to have data in 25ms
eventually.go:104: Failed to get *v1.SelfSubjectReview: Post "https://api-autoscaling-f7hfv.aks-e2e.hypershift.azure.devcluster.openshift.com:443/apis/authentication.k8s.io/v1/selfsubjectreviews": net/http: TLS handshake timeout
util.go:332: Successfully waited for a successful connection to the guest API server in 40.325s
util.go:515: Successfully waited for 1 nodes to become ready in 11m36.025s
util.go:548: Successfully waited for HostedCluster e2e-clusters-7tljq/autoscaling-f7hfv to rollout in 8m24.075s
util.go:2724: Successfully waited for HostedCluster e2e-clusters-7tljq/autoscaling-f7hfv to have valid conditions in 75ms
util.go:515: Successfully waited for 1 nodes to become ready for NodePool e2e-clusters-7tljq/autoscaling-f7hfv in 175ms
util.go:261: Successfully waited for kubeconfig to be published for HostedCluster e2e-clusters-7tljq/autoscaling-f7hfv in 75ms
util.go:278: Successfully waited for kubeconfig secret to have data in 75ms
util.go:261: Successfully waited for kubeconfig to be published for HostedCluster e2e-clusters-7tljq/autoscaling-f7hfv in 50ms
util.go:278: Successfully waited for kubeconfig secret to have data in 25ms
util.go:261: Successfully waited for kubeconfig to be published for HostedCluster e2e-clusters-7tljq/autoscaling-f7hfv in 50ms
util.go:278: Successfully waited for kubeconfig secret to have data in 25ms
util.go:332: Successfully waited for a successful connection to the guest API server in 25ms
util.go:515: Successfully waited for 1 nodes to become ready in 75ms
autoscaling_test.go:102: Enabled autoscaling. Namespace: e2e-clusters-7tljq, name: autoscaling-f7hfv, min: 1, max: 3
autoscaling_test.go:121: Created workload. Node: autoscaling-f7hfv-95nhp-pvh2r, memcapacity: 15221992Ki
util.go:515: Successfully waited for 3 nodes to become ready in 9m39.1s
autoscaling_test.go:133: Deleted workload
TestAutoscaling/Main
0s
TestAutoscaling/Main/TestAutoscaling
0s
util.go:261: Successfully waited for kubeconfig to be published for HostedCluster e2e-clusters-7tljq/autoscaling-f7hfv in 50ms
util.go:278: Successfully waited for kubeconfig secret to have data in 25ms
util.go:332: Successfully waited for a successful connection to the guest API server in 25ms
util.go:515: Successfully waited for 1 nodes to become ready in 75ms
autoscaling_test.go:102: Enabled autoscaling. Namespace: e2e-clusters-7tljq, name: autoscaling-f7hfv, min: 1, max: 3
autoscaling_test.go:121: Created workload. Node: autoscaling-f7hfv-95nhp-pvh2r, memcapacity: 15221992Ki
util.go:515: Successfully waited for 3 nodes to become ready in 9m39.1s
autoscaling_test.go:133: Deleted workload
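For orientation, the steps logged above (enable autoscaling with min 1 / max 3, create a memory-heavy workload, wait for the node count to reach 3) amount to deploying pods whose requests cannot fit on the existing node. A minimal sketch of that kind of workload using client-go follows; the kubeconfig path, namespace, image, replica count, and memory request are illustrative assumptions, not the test's actual values.

```go
package main

import (
	"context"
	"fmt"

	appsv1 "k8s.io/api/apps/v1"
	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Build a client from the guest cluster kubeconfig (path is illustrative).
	cfg, err := clientcmd.BuildConfigFromFlags("", "/tmp/guest.kubeconfig")
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	replicas := int32(6)
	workload := &appsv1.Deployment{
		ObjectMeta: metav1.ObjectMeta{Name: "autoscale-workload", Namespace: "default"},
		Spec: appsv1.DeploymentSpec{
			Replicas: &replicas,
			Selector: &metav1.LabelSelector{MatchLabels: map[string]string{"app": "autoscale-workload"}},
			Template: corev1.PodTemplateSpec{
				ObjectMeta: metav1.ObjectMeta{Labels: map[string]string{"app": "autoscale-workload"}},
				Spec: corev1.PodSpec{
					Containers: []corev1.Container{{
						Name:  "sleep",
						Image: "registry.k8s.io/pause:3.9", // illustrative image
						Resources: corev1.ResourceRequirements{
							// Large memory request so all replicas cannot fit on one node,
							// which drives the cluster autoscaler to add nodes up to the max.
							Requests: corev1.ResourceList{corev1.ResourceMemory: resource.MustParse("8Gi")},
						},
					}},
				},
			},
		},
	}

	if _, err := client.AppsV1().Deployments("default").Create(context.TODO(), workload, metav1.CreateOptions{}); err != nil {
		panic(err)
	}
	fmt.Println("created workload; the autoscaler should now scale the NodePool toward its max")
}
```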
TestAutoscaling/ValidateHostedCluster
0s
util.go:261: Successfully waited for kubeconfig to be published for HostedCluster e2e-clusters-7tljq/autoscaling-f7hfv in 3m21.05s
util.go:278: Successfully waited for kubeconfig secret to have data in 25ms
eventually.go:104: Failed to get *v1.SelfSubjectReview: Post "https://api-autoscaling-f7hfv.aks-e2e.hypershift.azure.devcluster.openshift.com:443/apis/authentication.k8s.io/v1/selfsubjectreviews": net/http: TLS handshake timeout
util.go:332: Successfully waited for a successful connection to the guest API server in 40.325s
util.go:515: Successfully waited for 1 nodes to become ready in 11m36.025s
util.go:548: Successfully waited for HostedCluster e2e-clusters-7tljq/autoscaling-f7hfv to rollout in 8m24.075s
util.go:2724: Successfully waited for HostedCluster e2e-clusters-7tljq/autoscaling-f7hfv to have valid conditions in 75ms
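The eventually.go retries above probe the guest API server by posting a SelfSubjectReview until the request stops failing (here, with a TLS handshake timeout while the endpoint comes up). A minimal sketch of that kind of readiness probe with client-go, where the kubeconfig path, poll interval, and timeout are assumptions and not the test's actual code:

```go
package main

import (
	"context"
	"fmt"
	"time"

	authenticationv1 "k8s.io/api/authentication/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/tmp/guest.kubeconfig") // illustrative path
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	// Keep posting SelfSubjectReviews until one succeeds; early attempts may fail
	// with TLS handshake timeouts while the hosted API server is still coming up.
	err = wait.PollUntilContextTimeout(context.Background(), 10*time.Second, 10*time.Minute, true,
		func(ctx context.Context) (bool, error) {
			ssr, err := client.AuthenticationV1().SelfSubjectReviews().Create(ctx, &authenticationv1.SelfSubjectReview{}, metav1.CreateOptions{})
			if err != nil {
				fmt.Printf("guest API server not reachable yet: %v\n", err)
				return false, nil // swallow the error and retry
			}
			fmt.Printf("connected as %q\n", ssr.Status.UserInfo.Username)
			return true, nil
		})
	if err != nil {
		panic(err)
	}
}
```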
TestAutoscaling/ValidateHostedCluster/EnsureGuestWebhooksValidated
0s
TestAutoscaling/ValidateHostedCluster/EnsureNoCrashingPods
0s
util.go:261: Successfully waited for kubeconfig to be published for HostedCluster e2e-clusters-7tljq/autoscaling-f7hfv in 75ms util.go:278: Successfully waited for kubeconfig secret to have data in 75ms
TestAutoscaling/ValidateHostedCluster/EnsureNodeCommunication
0s
util.go:261: Successfully waited for kubeconfig to be published for HostedCluster e2e-clusters-7tljq/autoscaling-f7hfv in 50ms util.go:278: Successfully waited for kubeconfig secret to have data in 25ms
TestAutoscaling/ValidateHostedCluster/EnsureNodeCountMatchesNodePoolReplicas
0s
util.go:515: Successfully waited for 1 nodes to become ready for NodePool e2e-clusters-7tljq/autoscaling-f7hfv in 175ms
TestAutoscaling/ValidateHostedCluster/EnsureOAPIMountsTrustBundle
0s
TestAzureScheduler
0s
hypershift_framework.go:396: Successfully created hostedcluster e2e-clusters-c9dx9/azure-scheduler-kg2dg in 2m49s
util.go:261: Successfully waited for kubeconfig to be published for HostedCluster e2e-clusters-c9dx9/azure-scheduler-kg2dg in 3m15.075s
util.go:278: Successfully waited for kubeconfig secret to have data in 25ms
eventually.go:104: Failed to get *v1.SelfSubjectReview: Post "https://api-azure-scheduler-kg2dg.aks-e2e.hypershift.azure.devcluster.openshift.com:443/apis/authentication.k8s.io/v1/selfsubjectreviews": net/http: TLS handshake timeout
util.go:332: Successfully waited for a successful connection to the guest API server in 1m11.425s
util.go:515: Successfully waited for 2 nodes to become ready in 12m30.075s
util.go:548: Successfully waited for HostedCluster e2e-clusters-c9dx9/azure-scheduler-kg2dg to rollout in 3m36.05s
util.go:2724: Successfully waited for HostedCluster e2e-clusters-c9dx9/azure-scheduler-kg2dg to have valid conditions in 50ms
util.go:515: Successfully waited for 2 nodes to become ready for NodePool e2e-clusters-c9dx9/azure-scheduler-kg2dg in 150ms
util.go:261: Successfully waited for kubeconfig to be published for HostedCluster e2e-clusters-c9dx9/azure-scheduler-kg2dg in 50ms
util.go:278: Successfully waited for kubeconfig secret to have data in 50ms
util.go:261: Successfully waited for kubeconfig to be published for HostedCluster e2e-clusters-c9dx9/azure-scheduler-kg2dg in 50ms
util.go:278: Successfully waited for kubeconfig secret to have data in 25ms
util.go:261: Successfully waited for kubeconfig to be published for HostedCluster e2e-clusters-c9dx9/azure-scheduler-kg2dg in 50ms
util.go:278: Successfully waited for kubeconfig secret to have data in 25ms
util.go:332: Successfully waited for a successful connection to the guest API server in 25ms
util.go:515: Successfully waited for 2 nodes to become ready in 50ms
azure_scheduler_test.go:111: Updated clusterSizingConfig.
azure_scheduler_test.go:158: Successfully waited for HostedCluster size label and annotations updated in 50ms
azure_scheduler_test.go:150: Scaled Nodepool. Namespace: e2e-clusters-c9dx9, name: azure-scheduler-kg2dg, replicas: 0xc0028beb1c
util.go:515: Successfully waited for 3 nodes to become ready in 7m47.6s
azure_scheduler_test.go:158: Successfully waited for HostedCluster size label and annotations updated in 75ms
azure_scheduler_test.go:182: Successfully waited for control-plane-operator pod is running with expected resource request in 50ms
util.go:2999: Successfully waited for HostedCluster e2e-clusters-c9dx9/azure-scheduler-kg2dg to have valid Status.Payload in 100ms
util.go:1069: Connecting to kubernetes endpoint on: https://20.84.194.117:443
TestAzureScheduler/EnsureHostedCluster
0s
TestAzureScheduler/EnsureHostedCluster/EnsureAllContainersHavePullPolicyIfNotPresent
0s
TestAzureScheduler/EnsureHostedCluster/EnsureAllContainersHaveTerminationMessagePolicyFallbackToLogsOnError
0s
TestAzureScheduler/EnsureHostedCluster/EnsureAllRoutesUseHCPRouter
0s
TestAzureScheduler/EnsureHostedCluster/EnsureHCPContainersHaveResourceRequests
0s
TestAzureScheduler/EnsureHostedCluster/EnsureNetworkPolicies
0s
TestAzureScheduler/EnsureHostedCluster/EnsureNetworkPolicies/EnsureComponentsHaveNeedManagementKASAccessLabel
0s
TestAzureScheduler/EnsureHostedCluster/EnsureNetworkPolicies/EnsureLimitedEgressTrafficToManagementKAS
0s
util.go:1069: Connecting to kubernetes endpoint on: https://20.84.194.117:443
TestAzureScheduler/EnsureHostedCluster/EnsureNoPodsWithTooHighPriority
0s
TestAzureScheduler/EnsureHostedCluster/EnsureNoRapidDeploymentRollouts
0s
TestAzureScheduler/EnsureHostedCluster/EnsurePayloadArchSetCorrectly
0s
util.go:2999: Successfully waited for HostedCluster e2e-clusters-c9dx9/azure-scheduler-kg2dg to have valid Status.Payload in 100ms
TestAzureScheduler/EnsureHostedCluster/EnsurePodsWithEmptyDirPVsHaveSafeToEvictAnnotations
0s
TestAzureScheduler/EnsureHostedCluster/EnsureReadOnlyRootFilesystem
0s
TestAzureScheduler/EnsureHostedCluster/EnsureReadOnlyRootFilesystemTmpDirMount
0s
TestAzureScheduler/EnsureHostedCluster/NoticePreemptionOrFailedScheduling
0s
TestAzureScheduler/Main
0s
util.go:261: Successfully waited for kubeconfig to be published for HostedCluster e2e-clusters-c9dx9/azure-scheduler-kg2dg in 50ms
util.go:278: Successfully waited for kubeconfig secret to have data in 25ms
util.go:332: Successfully waited for a successful connection to the guest API server in 25ms
util.go:515: Successfully waited for 2 nodes to become ready in 50ms
azure_scheduler_test.go:111: Updated clusterSizingConfig.
azure_scheduler_test.go:158: Successfully waited for HostedCluster size label and annotations updated in 50ms
azure_scheduler_test.go:150: Scaled Nodepool. Namespace: e2e-clusters-c9dx9, name: azure-scheduler-kg2dg, replicas: 0xc0028beb1c
util.go:515: Successfully waited for 3 nodes to become ready in 7m47.6s
azure_scheduler_test.go:158: Successfully waited for HostedCluster size label and annotations updated in 75ms
azure_scheduler_test.go:182: Successfully waited for control-plane-operator pod is running with expected resource request in 50ms
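The "Scaled Nodepool" step above bumps the NodePool replica count and then waits for the extra nodes and the updated HostedCluster size label and annotations. A minimal sketch of how such a scale-up could be done with the dynamic client; the hypershift.openshift.io/v1beta1 GVR, the kubeconfig path, and the target replica count of 3 are assumptions taken from the log, not the test's actual code:

```go
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/apis/meta/v1/unstructured"
	"k8s.io/apimachinery/pkg/runtime/schema"
	"k8s.io/apimachinery/pkg/types"
	"k8s.io/client-go/dynamic"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/tmp/mgmt.kubeconfig") // illustrative path
	if err != nil {
		panic(err)
	}
	dyn := dynamic.NewForConfigOrDie(cfg)

	// Assumed GVR for HyperShift NodePools; adjust if the installed API version differs.
	nodePools := schema.GroupVersionResource{Group: "hypershift.openshift.io", Version: "v1beta1", Resource: "nodepools"}

	// Merge-patch spec.replicas; the test then waits for the new nodes to become ready.
	patch := []byte(`{"spec":{"replicas":3}}`)
	np, err := dyn.Resource(nodePools).Namespace("e2e-clusters-c9dx9").Patch(
		context.TODO(), "azure-scheduler-kg2dg", types.MergePatchType, patch, metav1.PatchOptions{})
	if err != nil {
		panic(err)
	}
	replicas, _, _ := unstructured.NestedInt64(np.Object, "spec", "replicas")
	fmt.Printf("NodePool %s scaled to %d replicas\n", np.GetName(), replicas)
}
```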
TestAzureScheduler/ValidateHostedCluster
0s
util.go:261: Successfully waited for kubeconfig to be published for HostedCluster e2e-clusters-c9dx9/azure-scheduler-kg2dg in 3m15.075s
util.go:278: Successfully waited for kubeconfig secret to have data in 25ms
eventually.go:104: Failed to get *v1.SelfSubjectReview: Post "https://api-azure-scheduler-kg2dg.aks-e2e.hypershift.azure.devcluster.openshift.com:443/apis/authentication.k8s.io/v1/selfsubjectreviews": net/http: TLS handshake timeout
util.go:332: Successfully waited for a successful connection to the guest API server in 1m11.425s
util.go:515: Successfully waited for 2 nodes to become ready in 12m30.075s
util.go:548: Successfully waited for HostedCluster e2e-clusters-c9dx9/azure-scheduler-kg2dg to rollout in 3m36.05s
util.go:2724: Successfully waited for HostedCluster e2e-clusters-c9dx9/azure-scheduler-kg2dg to have valid conditions in 50ms
TestAzureScheduler/ValidateHostedCluster/EnsureGuestWebhooksValidated
0s
TestAzureScheduler/ValidateHostedCluster/EnsureNoCrashingPods
0s
util.go:261: Successfully waited for kubeconfig to be published for HostedCluster e2e-clusters-c9dx9/azure-scheduler-kg2dg in 50ms util.go:278: Successfully waited for kubeconfig secret to have data in 50ms
TestAzureScheduler/ValidateHostedCluster/EnsureNodeCommunication
0s
util.go:261: Successfully waited for kubeconfig to be published for HostedCluster e2e-clusters-c9dx9/azure-scheduler-kg2dg in 50ms util.go:278: Successfully waited for kubeconfig secret to have data in 25ms
TestAzureScheduler/ValidateHostedCluster/EnsureNodeCountMatchesNodePoolReplicas
0s
util.go:515: Successfully waited for 2 nodes to become ready for NodePool e2e-clusters-c9dx9/azure-scheduler-kg2dg in 150ms
TestAzureScheduler/ValidateHostedCluster/EnsureOAPIMountsTrustBundle
0s
TestCreateCluster
0s
hypershift_framework.go:396: Successfully created hostedcluster e2e-clusters-659l7/create-cluster-xppvw in 3m8s
util.go:261: Successfully waited for kubeconfig to be published for HostedCluster e2e-clusters-659l7/create-cluster-xppvw in 3m9.05s
util.go:278: Successfully waited for kubeconfig secret to have data in 25ms
eventually.go:104: Failed to get *v1.SelfSubjectReview: Post "https://api-create-cluster-xppvw.aks-e2e.hypershift.azure.devcluster.openshift.com:443/apis/authentication.k8s.io/v1/selfsubjectreviews": net/http: TLS handshake timeout
util.go:332: Successfully waited for a successful connection to the guest API server in 57.625s
util.go:515: Successfully waited for 2 nodes to become ready in 11m57.05s
util.go:548: Successfully waited for HostedCluster e2e-clusters-659l7/create-cluster-xppvw to rollout in 5m27.075s
util.go:2724: Successfully waited for HostedCluster e2e-clusters-659l7/create-cluster-xppvw to have valid conditions in 50ms
util.go:515: Successfully waited for 2 nodes to become ready for NodePool e2e-clusters-659l7/create-cluster-xppvw in 150ms
util.go:261: Successfully waited for kubeconfig to be published for HostedCluster e2e-clusters-659l7/create-cluster-xppvw in 50ms
util.go:278: Successfully waited for kubeconfig secret to have data in 25ms
util.go:261: Successfully waited for kubeconfig to be published for HostedCluster e2e-clusters-659l7/create-cluster-xppvw in 50ms
util.go:278: Successfully waited for kubeconfig secret to have data in 25ms
util.go:261: Successfully waited for kubeconfig to be published for HostedCluster e2e-clusters-659l7/create-cluster-xppvw in 50ms
util.go:278: Successfully waited for kubeconfig secret to have data in 25ms
util.go:332: Successfully waited for a successful connection to the guest API server in 25ms
create_cluster_test.go:1850: fetching mgmt kubeconfig
util.go:261: Successfully waited for kubeconfig to be published for HostedCluster e2e-clusters-659l7/create-cluster-xppvw in 50ms
util.go:278: Successfully waited for kubeconfig secret to have data in 25ms
control_plane_pki_operator.go:92: generating new break-glass credentials for more than one signer
pki.go:76: loading certificate/key pair from disk for signer customer-break-glass, use $REGENERATE_PKI to generate new ones
control_plane_pki_operator.go:201: creating CSR "2csdcdvaapmpoc8p1ior8mf8vdh2zrc13q9rvsjvj4wx" for signer "customer-break-glass", requesting client auth usages
control_plane_pki_operator.go:211: creating CSRA e2e-clusters-659l7-create-cluster-xppvw/2csdcdvaapmpoc8p1ior8mf8vdh2zrc13q9rvsjvj4wx to trigger automatic approval of the CSR
control_plane_pki_operator.go:218: Successfully waited for CSR "2csdcdvaapmpoc8p1ior8mf8vdh2zrc13q9rvsjvj4wx" to be approved and signed in 25ms
control_plane_pki_operator.go:130: validating that the client certificate provides the appropriate access
control_plane_pki_operator.go:116: amending the existing kubeconfig to use break-glass client certificate credentials
control_plane_pki_operator.go:133: issuing SSR to identify the subject we are given using the client certificate
control_plane_pki_operator.go:153: ensuring that the SSR identifies the client certificate as having system:masters power and correct username
pki.go:76: loading certificate/key pair from disk for signer sre-break-glass, use $REGENERATE_PKI to generate new ones
control_plane_pki_operator.go:201: creating CSR "2nncfyrw70uz0grn3crlb93xmlqjis17tgmw7x9mg85y" for signer "sre-break-glass", requesting client auth usages
control_plane_pki_operator.go:211: creating CSRA e2e-clusters-659l7-create-cluster-xppvw/2nncfyrw70uz0grn3crlb93xmlqjis17tgmw7x9mg85y to trigger automatic approval of the CSR
control_plane_pki_operator.go:218: Successfully waited for CSR "2nncfyrw70uz0grn3crlb93xmlqjis17tgmw7x9mg85y" to be approved and signed in 50ms
control_plane_pki_operator.go:130: validating that the client certificate provides the appropriate access
control_plane_pki_operator.go:116: amending the existing kubeconfig to use break-glass client certificate credentials
control_plane_pki_operator.go:133: issuing SSR to identify the subject we are given using the client certificate
control_plane_pki_operator.go:153: ensuring that the SSR identifies the client certificate as having system:masters power and correct username
control_plane_pki_operator.go:96: revoking the "customer-break-glass" signer
pki.go:76: loading certificate/key pair from disk for signer customer-break-glass, use $REGENERATE_PKI to generate new ones
control_plane_pki_operator.go:253: creating CRR e2e-clusters-659l7-create-cluster-xppvw/2csdcdvaapmpoc8p1ior8mf8vdh2zrc13q9rvsjvj4wx to trigger signer certificate revocation
control_plane_pki_operator.go:260: Successfully waited for CRR e2e-clusters-659l7-create-cluster-xppvw/2csdcdvaapmpoc8p1ior8mf8vdh2zrc13q9rvsjvj4wx to complete in 2m12.05s
control_plane_pki_operator.go:273: creating a client using the a certificate from the revoked signer
control_plane_pki_operator.go:116: amending the existing kubeconfig to use break-glass client certificate credentials
control_plane_pki_operator.go:276: issuing SSR to confirm that we're not authorized to contact the server
control_plane_pki_operator.go:99: ensuring the break-glass credentials from "sre-break-glass" signer still work
pki.go:76: loading certificate/key pair from disk for signer sre-break-glass, use $REGENERATE_PKI to generate new ones
control_plane_pki_operator.go:130: validating that the client certificate provides the appropriate access
control_plane_pki_operator.go:116: amending the existing kubeconfig to use break-glass client certificate credentials
control_plane_pki_operator.go:133: issuing SSR to identify the subject we are given using the client certificate
control_plane_pki_operator.go:153: ensuring that the SSR identifies the client certificate as having system:masters power and correct username
control_plane_pki_operator.go:63: Grabbing customer break-glass credentials from client certificate secret e2e-clusters-659l7-create-cluster-xppvw/customer-system-admin-client-cert-key
control_plane_pki_operator.go:130: validating that the client certificate provides the appropriate access
control_plane_pki_operator.go:116: amending the existing kubeconfig to use break-glass client certificate credentials
control_plane_pki_operator.go:133: issuing SSR to identify the subject we are given using the client certificate
control_plane_pki_operator.go:153: ensuring that the SSR identifies the client certificate as having system:masters power and correct username
control_plane_pki_operator.go:63: Grabbing customer break-glass credentials from client certificate secret e2e-clusters-659l7-create-cluster-xppvw/sre-system-admin-client-cert-key
control_plane_pki_operator.go:130: validating that the client certificate provides the appropriate access
control_plane_pki_operator.go:116: amending the existing kubeconfig to use break-glass client certificate credentials
control_plane_pki_operator.go:133: issuing SSR to identify the subject we are given using the client certificate
control_plane_pki_operator.go:153: ensuring that the SSR identifies the client certificate as having system:masters power and correct username
pki.go:76: loading certificate/key pair from disk for signer sre-break-glass, use $REGENERATE_PKI to generate new ones
control_plane_pki_operator.go:201: creating CSR "1999kog3ywj7gmdfpu6leb0ivzihy4l164c84v93mr34" for signer "sre-break-glass", requesting client auth usages
control_plane_pki_operator.go:211: creating CSRA e2e-clusters-659l7-create-cluster-xppvw/1999kog3ywj7gmdfpu6leb0ivzihy4l164c84v93mr34 to trigger automatic approval of the CSR
control_plane_pki_operator.go:218: Successfully waited for CSR "1999kog3ywj7gmdfpu6leb0ivzihy4l164c84v93mr34" to be approved and signed in 25ms
control_plane_pki_operator.go:130: validating that the client certificate provides the appropriate access
control_plane_pki_operator.go:116: amending the existing kubeconfig to use break-glass client certificate credentials
control_plane_pki_operator.go:133: issuing SSR to identify the subject we are given using the client certificate
control_plane_pki_operator.go:153: ensuring that the SSR identifies the client certificate as having system:masters power and correct username
pki.go:76: loading certificate/key pair from disk for signer sre-break-glass, use $REGENERATE_PKI to generate new ones
control_plane_pki_operator.go:165: creating invalid CSR "35htfb515fmh77kx2eruzg2e1yv72vedbcdyaixnxxyf" for signer "hypershift.openshift.io/e2e-clusters-659l7-create-cluster-xppvw.sre-break-glass", requesting client auth usages
control_plane_pki_operator.go:175: creating CSRA e2e-clusters-659l7-create-cluster-xppvw/35htfb515fmh77kx2eruzg2e1yv72vedbcdyaixnxxyf to trigger automatic approval of the CSR
control_plane_pki_operator.go:181: Successfully waited for waiting for CSR "35htfb515fmh77kx2eruzg2e1yv72vedbcdyaixnxxyf" to have invalid CN exposed in status in 3.05s
pki.go:76: loading certificate/key pair from disk for signer customer-break-glass, use $REGENERATE_PKI to generate new ones
control_plane_pki_operator.go:201: creating CSR "2hgwu47i1493y3qmt62l0hdovvrsnmigsc3t727n2il3" for signer "customer-break-glass", requesting client auth usages
control_plane_pki_operator.go:211: creating CSRA e2e-clusters-659l7-create-cluster-xppvw/2hgwu47i1493y3qmt62l0hdovvrsnmigsc3t727n2il3 to trigger automatic approval of the CSR
control_plane_pki_operator.go:218: Successfully waited for CSR "2hgwu47i1493y3qmt62l0hdovvrsnmigsc3t727n2il3" to be approved and signed in 3.025s
control_plane_pki_operator.go:130: validating that the client certificate provides the appropriate access
control_plane_pki_operator.go:116: amending the existing kubeconfig to use break-glass client certificate credentials
control_plane_pki_operator.go:133: issuing SSR to identify the subject we are given using the client certificate
control_plane_pki_operator.go:153: ensuring that the SSR identifies the client certificate as having system:masters power and correct username
pki.go:76: loading certificate/key pair from disk for signer customer-break-glass, use $REGENERATE_PKI to generate new ones
control_plane_pki_operator.go:165: creating invalid CSR "1fbxfz0dfl34p0mats9o721dgox4299efpntood39s2" for signer "hypershift.openshift.io/e2e-clusters-659l7-create-cluster-xppvw.customer-break-glass", requesting client auth usages
control_plane_pki_operator.go:175: creating CSRA e2e-clusters-659l7-create-cluster-xppvw/1fbxfz0dfl34p0mats9o721dgox4299efpntood39s2 to trigger automatic approval of the CSR
control_plane_pki_operator.go:181: Successfully waited for waiting for CSR "1fbxfz0dfl34p0mats9o721dgox4299efpntood39s2" to have invalid CN exposed in status in 25ms
pki.go:76: loading certificate/key pair from disk for signer sre-break-glass, use $REGENERATE_PKI to generate new ones
control_plane_pki_operator.go:253: creating CRR e2e-clusters-659l7-create-cluster-xppvw/2hsl2gzii4u2jpn9u952hmk41xayor89wcxn8kkm3b0z to trigger signer certificate revocation
control_plane_pki_operator.go:260: Successfully waited for CRR e2e-clusters-659l7-create-cluster-xppvw/2hsl2gzii4u2jpn9u952hmk41xayor89wcxn8kkm3b0z to complete in 2m33.05s
control_plane_pki_operator.go:273: creating a client using the a certificate from the revoked signer
control_plane_pki_operator.go:116: amending the existing kubeconfig to use break-glass client certificate credentials
control_plane_pki_operator.go:276: issuing SSR to confirm that we're not authorized to contact the server
pki.go:76: loading certificate/key pair from disk for signer customer-break-glass, use $REGENERATE_PKI to generate new ones
control_plane_pki_operator.go:253: creating CRR e2e-clusters-659l7-create-cluster-xppvw/1uawupqjswkhgbo02vkm9vvwvbll4fmk7ke4qiopd3nm to trigger signer certificate revocation
control_plane_pki_operator.go:260: Successfully waited for CRR e2e-clusters-659l7-create-cluster-xppvw/1uawupqjswkhgbo02vkm9vvwvbll4fmk7ke4qiopd3nm to complete in 3m36.05s
control_plane_pki_operator.go:273: creating a client using the a certificate from the revoked signer
control_plane_pki_operator.go:116: amending the existing kubeconfig to use break-glass client certificate credentials
control_plane_pki_operator.go:276: issuing SSR to confirm that we're not authorized to contact the server
util.go:146: failed to patch object create-cluster-xppvw, will retry: HostedCluster.hypershift.openshift.io "create-cluster-xppvw" is invalid: [spec: Invalid value: "object": Services is immutable. Changes might result in unpredictable and disruptive behavior., spec: Invalid value: "object": Azure platform requires APIServer Route service with a hostname to be defined, spec.services[0].servicePublishingStrategy: Invalid value: "object": nodePort is required when type is NodePort, and forbidden otherwise, spec.services[0].servicePublishingStrategy: Invalid value: "object": only route is allowed when type is Route, and forbidden otherwise]
util.go:146: failed to patch object create-cluster-xppvw, will retry: HostedCluster.hypershift.openshift.io "create-cluster-xppvw" is invalid: spec.controllerAvailabilityPolicy: Invalid value: "string": ControllerAvailabilityPolicy is immutable
util.go:146: failed to patch object create-cluster-xppvw, will retry: HostedCluster.hypershift.openshift.io "create-cluster-xppvw" is invalid: spec.capabilities: Invalid value: "object": Capabilities is immutable. Changes might result in unpredictable and disruptive behavior.
util.go:1993: Using Azure-specific retry strategy for DNS propagation race condition
util.go:2002: Generating custom certificate with DNS name api-custom-cert-create-cluster-xppvw.aks-e2e.hypershift.azure.devcluster.openshift.com
util.go:2007: Creating custom certificate secret
util.go:2023: Updating hosted cluster with KubeAPIDNSName and KAS custom serving cert
util.go:2059: Getting custom kubeconfig client
util.go:224: Successfully waited for KAS custom kubeconfig to be published for HostedCluster e2e-clusters-659l7/create-cluster-xppvw in 3.05s
util.go:241: Successfully waited for KAS custom kubeconfig secret to have data in 25ms
util.go:2064: waiting for the KubeAPIDNSName to be reconciled
util.go:224: Successfully waited for KAS custom kubeconfig to be published for HostedCluster e2e-clusters-659l7/create-cluster-xppvw in 50ms
util.go:241: Successfully waited for KAS custom kubeconfig secret to have data in 75ms
util.go:2076: Finding the external name destination for the KAS Service
util.go:2101: service custom DNS name not found, using the control plane endpoint
util.go:2101: service custom DNS name not found, using the control plane endpoint
util.go:2101: service custom DNS name not found, using the control plane endpoint
util.go:2113: Creating a new KAS Service to be used by the external-dns deployment in CI with the custom DNS name api-custom-cert-create-cluster-xppvw.aks-e2e.hypershift.azure.devcluster.openshift.com
util.go:2135: [2025-09-15T23:32:31Z] Waiting until the URL is resolvable: api-custom-cert-create-cluster-xppvw.aks-e2e.hypershift.azure.devcluster.openshift.com
util.go:2135: [2025-09-15T23:32:41Z] Waiting until the URL is resolvable: api-custom-cert-create-cluster-xppvw.aks-e2e.hypershift.azure.devcluster.openshift.com
util.go:2140: resolved the custom DNS name after 10.02774428s
util.go:2145: Waiting until the KAS Deployment is ready
util.go:2240: Successfully waited for the KAS custom kubeconfig secret to be deleted from HC Namespace in 5.05s
util.go:2256: Successfully waited for the KAS custom kubeconfig secret to be deleted from HCP Namespace in 50ms
util.go:2286: Successfully waited for the KAS custom kubeconfig status to be removed in 50ms
util.go:2319: Deleting custom certificate secret
util.go:2160: Checking CustomAdminKubeconfigStatus are present
util.go:2168: Checking CustomAdminKubeconfigs are present
util.go:2181: Checking CustomAdminKubeconfig reaches the KAS
util.go:2183: Using extended retry timeout for Azure DNS propagation
util.go:2199: Successfully verified custom kubeconfig can reach KAS
util.go:2205: Checking CustomAdminKubeconfig Infrastructure status is updated
util.go:2206: Successfully waited for a successful connection to the custom DNS guest API server in 100ms
util.go:2274: Checking CustomAdminKubeconfig are removed
util.go:2303: Checking CustomAdminKubeconfigStatus are removed
util.go:3127: This test is only applicable for AWS platform
util.go:261: Successfully waited for kubeconfig to be published for HostedCluster e2e-clusters-659l7/create-cluster-xppvw in 75ms
util.go:278: Successfully waited for kubeconfig secret to have data in 25ms
util.go:332: Successfully waited for a successful connection to the guest API server in 25ms
util.go:1916: Deleting the additional-pull-secret secret in the DataPlane
util.go:3525: Creating a pod which uses the restricted image
util.go:3550: Attempt 1/3: Creating pod
util.go:3555: Successfully created pod global-pull-secret-fail-pod in namespace kube-system on attempt 1
util.go:3576: Created pod global-pull-secret-fail-pod in namespace kube-system
util.go:3596: Pod is in the desired state, deleting it now
util.go:3599: Deleted the pod
util.go:3525: Creating a pod which uses the restricted image
util.go:3550: Attempt 1/3: Creating pod
util.go:3555: Successfully created pod global-pull-secret-success-pod in namespace kube-system on attempt 1
util.go:3576: Created pod global-pull-secret-success-pod in namespace kube-system
util.go:3596: Pod is in the desired state, deleting it now
util.go:3599: Deleted the pod
globalps.go:225: Creating kubelet config verifier DaemonSet
globalps.go:230: Waiting for DaemonSet to be ready
globalps.go:235: Verifying all DaemonSet pods are running
globalps.go:236: Successfully waited for DaemonSet pods to be running in 5.15s
globalps.go:257: Cleaning up kubelet config verifier DaemonSet
util.go:2999: Successfully waited for HostedCluster e2e-clusters-659l7/create-cluster-xppvw to have valid Status.Payload in 75ms
util.go:1069: Connecting to kubernetes endpoint on: https://20.84.194.117:443
TestCreateCluster/EnsureHostedCluster
0s
TestCreateCluster/EnsureHostedCluster/EnsureAllContainersHavePullPolicyIfNotPresent
0s
TestCreateCluster/EnsureHostedCluster/EnsureAllContainersHaveTerminationMessagePolicyFallbackToLogsOnError
0s
TestCreateCluster/EnsureHostedCluster/EnsureAllRoutesUseHCPRouter
0s
TestCreateCluster/EnsureHostedCluster/EnsureHCPContainersHaveResourceRequests
0s
TestCreateCluster/EnsureHostedCluster/EnsureNetworkPolicies
0s
TestCreateCluster/EnsureHostedCluster/EnsureNetworkPolicies/EnsureComponentsHaveNeedManagementKASAccessLabel
0s
TestCreateCluster/EnsureHostedCluster/EnsureNetworkPolicies/EnsureLimitedEgressTrafficToManagementKAS
0s
util.go:1069: Connecting to kubernetes endpoint on: https://20.84.194.117:443
TestCreateCluster/EnsureHostedCluster/EnsureNoPodsWithTooHighPriority
0s
TestCreateCluster/EnsureHostedCluster/EnsureNoRapidDeploymentRollouts
0s
TestCreateCluster/EnsureHostedCluster/EnsurePayloadArchSetCorrectly
0s
util.go:2999: Successfully waited for HostedCluster e2e-clusters-659l7/create-cluster-xppvw to have valid Status.Payload in 75ms
TestCreateCluster/EnsureHostedCluster/EnsurePodsWithEmptyDirPVsHaveSafeToEvictAnnotations
0s
TestCreateCluster/EnsureHostedCluster/EnsureReadOnlyRootFilesystem
0s
TestCreateCluster/EnsureHostedCluster/EnsureReadOnlyRootFilesystemTmpDirMount
0s
TestCreateCluster/EnsureHostedCluster/NoticePreemptionOrFailedScheduling
0s
TestCreateCluster/Main
0s
util.go:261: Successfully waited for kubeconfig to be published for HostedCluster e2e-clusters-659l7/create-cluster-xppvw in 50ms
util.go:278: Successfully waited for kubeconfig secret to have data in 25ms
util.go:332: Successfully waited for a successful connection to the guest API server in 25ms
create_cluster_test.go:1850: fetching mgmt kubeconfig
util.go:261: Successfully waited for kubeconfig to be published for HostedCluster e2e-clusters-659l7/create-cluster-xppvw in 50ms
util.go:278: Successfully waited for kubeconfig secret to have data in 25ms
TestCreateCluster/Main/EnsureAppLabel
0s
TestCreateCluster/Main/EnsureCustomLabels
0s
TestCreateCluster/Main/EnsureCustomTolerations
0s
TestCreateCluster/Main/EnsureDefaultSecurityGroupTags
0s
util.go:3127: This test is only applicable for AWS platform
TestCreateCluster/Main/EnsureGlobalPullSecret
0s
util.go:261: Successfully waited for kubeconfig to be published for HostedCluster e2e-clusters-659l7/create-cluster-xppvw in 75ms
util.go:278: Successfully waited for kubeconfig secret to have data in 25ms
util.go:332: Successfully waited for a successful connection to the guest API server in 25ms
util.go:1916: Deleting the additional-pull-secret secret in the DataPlane
TestCreateCluster/Main/EnsureGlobalPullSecret/Check_if_GlobalPullSecret_secret_is_in_the_right_place_at_Dataplane
0s
TestCreateCluster/Main/EnsureGlobalPullSecret/Check_if_GlobalPullSecret_secret_is_updated_in_the_DataPlane
0s
TestCreateCluster/Main/EnsureGlobalPullSecret/Check_if_the_DaemonSet_is_present_in_the_DataPlane
0s
TestCreateCluster/Main/EnsureGlobalPullSecret/Check_if_the_GlobalPullSecret_secret_is_deleted_in_the_DataPlane
0s
TestCreateCluster/Main/EnsureGlobalPullSecret/Check_if_the_additional_RBAC_is_present_in_the_DataPlane
0s
TestCreateCluster/Main/EnsureGlobalPullSecret/Check_if_the_config.json_is_correct_in_all_of_the_nodes
0s
globalps.go:225: Creating kubelet config verifier DaemonSet
globalps.go:230: Waiting for DaemonSet to be ready
globalps.go:235: Verifying all DaemonSet pods are running
globalps.go:236: Successfully waited for DaemonSet pods to be running in 5.15s
globalps.go:257: Cleaning up kubelet config verifier DaemonSet
TestCreateCluster/Main/EnsureGlobalPullSecret/Check_if_we_can_pull_other_restricted_images,_should_succeed
0s
TestCreateCluster/Main/EnsureGlobalPullSecret/Check_if_we_can_pull_restricted_images,_should_fail
0s
TestCreateCluster/Main/EnsureGlobalPullSecret/Create_a_pod_which_uses_the_restricted_image,_should_fail
0s
util.go:3525: Creating a pod which uses the restricted image
util.go:3550: Attempt 1/3: Creating pod
util.go:3555: Successfully created pod global-pull-secret-fail-pod in namespace kube-system on attempt 1
util.go:3576: Created pod global-pull-secret-fail-pod in namespace kube-system
util.go:3596: Pod is in the desired state, deleting it now
util.go:3599: Deleted the pod
TestCreateCluster/Main/EnsureGlobalPullSecret/Create_a_pod_which_uses_the_restricted_image,_should_succeed
0s
util.go:3525: Creating a pod which uses the restricted image
util.go:3550: Attempt 1/3: Creating pod
util.go:3555: Successfully created pod global-pull-secret-success-pod in namespace kube-system on attempt 1
util.go:3576: Created pod global-pull-secret-success-pod in namespace kube-system
util.go:3596: Pod is in the desired state, deleting it now
util.go:3599: Deleted the pod
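The two pull-secret subtests above follow the same pattern: create a pod referencing a registry-restricted image, wait until the pod reaches the expected state (Running when the merged global pull secret grants access, stuck pulling when it does not), then clean up. A minimal sketch of the success-path check with client-go; the kubeconfig path, image reference, and timeouts are illustrative assumptions:

```go
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/tmp/guest.kubeconfig") // illustrative path
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "global-pull-secret-check-pod", Namespace: "kube-system"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:  "restricted",
				Image: "quay.io/example/restricted-image:latest", // placeholder for a registry that needs the extra pull secret
			}},
		},
	}
	if _, err := client.CoreV1().Pods(pod.Namespace).Create(context.TODO(), pod, metav1.CreateOptions{}); err != nil {
		panic(err)
	}

	// With the merged global pull secret in place the image pull succeeds and the pod runs;
	// without it the pod never leaves the image-pull error states.
	err = wait.PollUntilContextTimeout(context.Background(), 5*time.Second, 5*time.Minute, true,
		func(ctx context.Context) (bool, error) {
			got, err := client.CoreV1().Pods(pod.Namespace).Get(ctx, pod.Name, metav1.GetOptions{})
			if err != nil {
				return false, nil
			}
			return got.Status.Phase == corev1.PodRunning, nil
		})
	if err != nil {
		panic(err)
	}
	fmt.Println("pod reached the desired state, cleaning up")
	_ = client.CoreV1().Pods(pod.Namespace).Delete(context.TODO(), pod.Name, metav1.DeleteOptions{})
}
```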
TestCreateCluster/Main/EnsureGlobalPullSecret/Modify_the_additional-pull-secret_secret_in_the_DataPlane_by_adding_the_valid_pull_secret
0s
TestCreateCluster/Main/EnsureHostedClusterCapabilitiesImmutability
0s
util.go:146: failed to patch object create-cluster-xppvw, will retry: HostedCluster.hypershift.openshift.io "create-cluster-xppvw" is invalid: spec.capabilities: Invalid value: "object": Capabilities is immutable. Changes might result in unpredictable and disruptive behavior.
TestCreateCluster/Main/EnsureHostedClusterImmutability
0s
util.go:146: failed to patch object create-cluster-xppvw, will retry: HostedCluster.hypershift.openshift.io "create-cluster-xppvw" is invalid: [spec: Invalid value: "object": Services is immutable. Changes might result in unpredictable and disruptive behavior., spec: Invalid value: "object": Azure platform requires APIServer Route service with a hostname to be defined, spec.services[0].servicePublishingStrategy: Invalid value: "object": nodePort is required when type is NodePort, and forbidden otherwise, spec.services[0].servicePublishingStrategy: Invalid value: "object": only route is allowed when type is Route, and forbidden otherwise]
util.go:146: failed to patch object create-cluster-xppvw, will retry: HostedCluster.hypershift.openshift.io "create-cluster-xppvw" is invalid: spec.controllerAvailabilityPolicy: Invalid value: "string": ControllerAvailabilityPolicy is immutable
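The rejections above come from server-side validation rules on the HostedCluster CRD, which is exactly what this subtest expects. As an illustration only (not the actual HyperShift API source), immutability of this kind is typically declared with a kubebuilder CEL marker of the following shape, which the API server then enforces on update:

```go
// Package example is illustrative: it shows the kind of kubebuilder CEL marker
// that produces "Invalid value: ... is immutable" admission errors like the
// ones logged above. Field name and message are borrowed from the log.
package example

// ExampleSpec demonstrates an immutability rule expressed as a CEL validation.
type ExampleSpec struct {
	// +kubebuilder:validation:XValidation:rule="self == oldSelf",message="ControllerAvailabilityPolicy is immutable"
	// +optional
	ControllerAvailabilityPolicy string `json:"controllerAvailabilityPolicy,omitempty"`
}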
TestCreateCluster/Main/EnsureKubeAPIDNSNameCustomCert
0s
util.go:1993: Using Azure-specific retry strategy for DNS propagation race condition
util.go:2002: Generating custom certificate with DNS name api-custom-cert-create-cluster-xppvw.aks-e2e.hypershift.azure.devcluster.openshift.com
util.go:2007: Creating custom certificate secret
util.go:2023: Updating hosted cluster with KubeAPIDNSName and KAS custom serving cert
util.go:2059: Getting custom kubeconfig client
util.go:224: Successfully waited for KAS custom kubeconfig to be published for HostedCluster e2e-clusters-659l7/create-cluster-xppvw in 3.05s
util.go:241: Successfully waited for KAS custom kubeconfig secret to have data in 25ms
util.go:2064: waiting for the KubeAPIDNSName to be reconciled
util.go:224: Successfully waited for KAS custom kubeconfig to be published for HostedCluster e2e-clusters-659l7/create-cluster-xppvw in 50ms
util.go:241: Successfully waited for KAS custom kubeconfig secret to have data in 75ms
util.go:2076: Finding the external name destination for the KAS Service
util.go:2101: service custom DNS name not found, using the control plane endpoint
util.go:2101: service custom DNS name not found, using the control plane endpoint
util.go:2101: service custom DNS name not found, using the control plane endpoint
util.go:2113: Creating a new KAS Service to be used by the external-dns deployment in CI with the custom DNS name api-custom-cert-create-cluster-xppvw.aks-e2e.hypershift.azure.devcluster.openshift.com
util.go:2135: [2025-09-15T23:32:31Z] Waiting until the URL is resolvable: api-custom-cert-create-cluster-xppvw.aks-e2e.hypershift.azure.devcluster.openshift.com
util.go:2135: [2025-09-15T23:32:41Z] Waiting until the URL is resolvable: api-custom-cert-create-cluster-xppvw.aks-e2e.hypershift.azure.devcluster.openshift.com
util.go:2140: resolved the custom DNS name after 10.02774428s
util.go:2145: Waiting until the KAS Deployment is ready
util.go:2240: Successfully waited for the KAS custom kubeconfig secret to be deleted from HC Namespace in 5.05s
util.go:2256: Successfully waited for the KAS custom kubeconfig secret to be deleted from HCP Namespace in 50ms
util.go:2286: Successfully waited for the KAS custom kubeconfig status to be removed in 50ms
util.go:2319: Deleting custom certificate secret
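The "Generating custom certificate ... / Creating custom certificate secret" steps above boil down to producing a serving certificate whose SAN is the custom KAS DNS name and storing it as a TLS secret for the HostedCluster to reference. A minimal sketch of that preparation under assumed inputs (the DNS name, secret name, namespace, and kubeconfig path are illustrative, and the real test presumably uses its own PKI helpers):

```go
package main

import (
	"context"
	"crypto/ecdsa"
	"crypto/elliptic"
	"crypto/rand"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	dnsName := "api-custom-cert-example.aks-e2e.hypershift.azure.devcluster.openshift.com" // illustrative

	// Self-signed serving certificate whose SAN is the custom KAS DNS name.
	key, err := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
	if err != nil {
		panic(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{CommonName: dnsName},
		DNSNames:     []string{dnsName},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	keyDER, err := x509.MarshalECPrivateKey(key)
	if err != nil {
		panic(err)
	}
	certPEM := pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: der})
	keyPEM := pem.EncodeToMemory(&pem.Block{Type: "EC PRIVATE KEY", Bytes: keyDER})

	// Store the pair as a TLS secret so the HostedCluster's KAS serving-cert
	// configuration can reference it.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/tmp/mgmt.kubeconfig") // illustrative path
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)
	secret := &corev1.Secret{
		ObjectMeta: metav1.ObjectMeta{Name: "custom-kas-serving-cert", Namespace: "e2e-clusters-example"},
		Type:       corev1.SecretTypeTLS,
		Data: map[string][]byte{
			corev1.TLSCertKey:       certPEM,
			corev1.TLSPrivateKeyKey: keyPEM,
		},
	}
	if _, err := client.CoreV1().Secrets(secret.Namespace).Create(context.TODO(), secret, metav1.CreateOptions{}); err != nil {
		panic(err)
	}
}
```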
TestCreateCluster/Main/EnsureKubeAPIDNSNameCustomCert/EnsureCustomAdminKubeconfigExists
0s
util.go:2168: Checking CustomAdminKubeconfigs are present
TestCreateCluster/Main/EnsureKubeAPIDNSNameCustomCert/EnsureCustomAdminKubeconfigInfraStatusIsUpdated
0s
util.go:2205: Checking CustomAdminKubeconfig Infrastructure status is updated util.go:2206: Successfully waited for a successful connection to the custom DNS guest API server in 100ms
TestCreateCluster/Main/EnsureKubeAPIDNSNameCustomCert/EnsureCustomAdminKubeconfigIsRemoved
0s
util.go:2274: Checking CustomAdminKubeconfig are removed
TestCreateCluster/Main/EnsureKubeAPIDNSNameCustomCert/EnsureCustomAdminKubeconfigReachesTheKAS
0s
util.go:2181: Checking CustomAdminKubeconfig reaches the KAS util.go:2183: Using extended retry timeout for Azure DNS propagation util.go:2199: Successfully verified custom kubeconfig can reach KAS
TestCreateCluster/Main/EnsureKubeAPIDNSNameCustomCert/EnsureCustomAdminKubeconfigStatusExists
0s
util.go:2160: Checking CustomAdminKubeconfigStatus are present
TestCreateCluster/Main/EnsureKubeAPIDNSNameCustomCert/EnsureCustomAdminKubeconfigStatusIsRemoved
0s
util.go:2303: Checking CustomAdminKubeconfigStatus are removed
TestCreateCluster/Main/EnsureKubeAPIServerAllowedCIDRs
0s
TestCreateCluster/Main/break-glass-credentials
0s
TestCreateCluster/Main/break-glass-credentials/customer-break-glass
0s
TestCreateCluster/Main/break-glass-credentials/customer-break-glass/CSR_flow
0s
pki.go:76: loading certificate/key pair from disk for signer customer-break-glass, use $REGENERATE_PKI to generate new ones
control_plane_pki_operator.go:201: creating CSR "2hgwu47i1493y3qmt62l0hdovvrsnmigsc3t727n2il3" for signer "customer-break-glass", requesting client auth usages
control_plane_pki_operator.go:211: creating CSRA e2e-clusters-659l7-create-cluster-xppvw/2hgwu47i1493y3qmt62l0hdovvrsnmigsc3t727n2il3 to trigger automatic approval of the CSR
control_plane_pki_operator.go:218: Successfully waited for CSR "2hgwu47i1493y3qmt62l0hdovvrsnmigsc3t727n2il3" to be approved and signed in 3.025s
control_plane_pki_operator.go:130: validating that the client certificate provides the appropriate access
control_plane_pki_operator.go:116: amending the existing kubeconfig to use break-glass client certificate credentials
control_plane_pki_operator.go:133: issuing SSR to identify the subject we are given using the client certificate
control_plane_pki_operator.go:153: ensuring that the SSR identifies the client certificate as having system:masters power and correct username
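The CSR flow above submits a client-auth certificate request against the break-glass signer and relies on a CSRA in the control plane namespace for automatic approval. A minimal sketch of how such a CSR could be submitted with client-go; the kubeconfig path, CSR name, subject, and target cluster are assumptions, the signer name only follows the pattern visible in the log, and the HyperShift-specific CSRA/approval step is omitted:

```go
package main

import (
	"context"
	"crypto/ecdsa"
	"crypto/elliptic"
	"crypto/rand"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"fmt"

	certificatesv1 "k8s.io/api/certificates/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Build a client from a kubeconfig (path and target API server are illustrative).
	cfg, err := clientcmd.BuildConfigFromFlags("", "/tmp/guest.kubeconfig")
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	// Key pair and PEM-encoded certificate request for the break-glass user.
	key, err := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
	if err != nil {
		panic(err)
	}
	csrDER, err := x509.CreateCertificateRequest(rand.Reader, &x509.CertificateRequest{
		Subject: pkix.Name{
			CommonName:   "customer-break-glass-example-user",
			Organization: []string{"system:masters"},
		},
	}, key)
	if err != nil {
		panic(err)
	}
	csrPEM := pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE REQUEST", Bytes: csrDER})

	// Signer name follows the pattern seen in the log:
	// hypershift.openshift.io/<hosted-control-plane-namespace>.customer-break-glass
	csr := &certificatesv1.CertificateSigningRequest{
		ObjectMeta: metav1.ObjectMeta{Name: "customer-break-glass-example"},
		Spec: certificatesv1.CertificateSigningRequestSpec{
			Request:    csrPEM,
			SignerName: "hypershift.openshift.io/e2e-clusters-659l7-create-cluster-xppvw.customer-break-glass",
			Usages:     []certificatesv1.KeyUsage{certificatesv1.UsageClientAuth},
		},
	}
	created, err := client.CertificatesV1().CertificateSigningRequests().Create(context.TODO(), csr, metav1.CreateOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Printf("created CSR %s; a CSRA in the control plane namespace triggers automatic approval\n", created.Name)
}
```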
TestCreateCluster/Main/break-glass-credentials/customer-break-glass/CSR_flow/invalid_CN_flagged_in_status
0s
pki.go:76: loading certificate/key pair from disk for signer customer-break-glass, use $REGENERATE_PKI to generate new ones
control_plane_pki_operator.go:165: creating invalid CSR "1fbxfz0dfl34p0mats9o721dgox4299efpntood39s2" for signer "hypershift.openshift.io/e2e-clusters-659l7-create-cluster-xppvw.customer-break-glass", requesting client auth usages
control_plane_pki_operator.go:175: creating CSRA e2e-clusters-659l7-create-cluster-xppvw/1fbxfz0dfl34p0mats9o721dgox4299efpntood39s2 to trigger automatic approval of the CSR
control_plane_pki_operator.go:181: Successfully waited for waiting for CSR "1fbxfz0dfl34p0mats9o721dgox4299efpntood39s2" to have invalid CN exposed in status in 25ms
TestCreateCluster/Main/break-glass-credentials/customer-break-glass/CSR_flow/revocation
0s
pki.go:76: loading certificate/key pair from disk for signer customer-break-glass, use $REGENERATE_PKI to generate new ones
control_plane_pki_operator.go:253: creating CRR e2e-clusters-659l7-create-cluster-xppvw/1uawupqjswkhgbo02vkm9vvwvbll4fmk7ke4qiopd3nm to trigger signer certificate revocation
control_plane_pki_operator.go:260: Successfully waited for CRR e2e-clusters-659l7-create-cluster-xppvw/1uawupqjswkhgbo02vkm9vvwvbll4fmk7ke4qiopd3nm to complete in 3m36.05s
control_plane_pki_operator.go:273: creating a client using the a certificate from the revoked signer
control_plane_pki_operator.go:116: amending the existing kubeconfig to use break-glass client certificate credentials
control_plane_pki_operator.go:276: issuing SSR to confirm that we're not authorized to contact the server
TestCreateCluster/Main/break-glass-credentials/customer-break-glass/direct_fetch
0s
control_plane_pki_operator.go:63: Grabbing customer break-glass credentials from client certificate secret e2e-clusters-659l7-create-cluster-xppvw/customer-system-admin-client-cert-key
control_plane_pki_operator.go:130: validating that the client certificate provides the appropriate access
control_plane_pki_operator.go:116: amending the existing kubeconfig to use break-glass client certificate credentials
control_plane_pki_operator.go:133: issuing SSR to identify the subject we are given using the client certificate
control_plane_pki_operator.go:153: ensuring that the SSR identifies the client certificate as having system:masters power and correct username
TestCreateCluster/Main/break-glass-credentials/independent_signers
0s
control_plane_pki_operator.go:92: generating new break-glass credentials for more than one signer
pki.go:76: loading certificate/key pair from disk for signer customer-break-glass, use $REGENERATE_PKI to generate new ones
control_plane_pki_operator.go:201: creating CSR "2csdcdvaapmpoc8p1ior8mf8vdh2zrc13q9rvsjvj4wx" for signer "customer-break-glass", requesting client auth usages
control_plane_pki_operator.go:211: creating CSRA e2e-clusters-659l7-create-cluster-xppvw/2csdcdvaapmpoc8p1ior8mf8vdh2zrc13q9rvsjvj4wx to trigger automatic approval of the CSR
control_plane_pki_operator.go:218: Successfully waited for CSR "2csdcdvaapmpoc8p1ior8mf8vdh2zrc13q9rvsjvj4wx" to be approved and signed in 25ms
control_plane_pki_operator.go:130: validating that the client certificate provides the appropriate access
control_plane_pki_operator.go:116: amending the existing kubeconfig to use break-glass client certificate credentials
control_plane_pki_operator.go:133: issuing SSR to identify the subject we are given using the client certificate
control_plane_pki_operator.go:153: ensuring that the SSR identifies the client certificate as having system:masters power and correct username
pki.go:76: loading certificate/key pair from disk for signer sre-break-glass, use $REGENERATE_PKI to generate new ones
control_plane_pki_operator.go:201: creating CSR "2nncfyrw70uz0grn3crlb93xmlqjis17tgmw7x9mg85y" for signer "sre-break-glass", requesting client auth usages
control_plane_pki_operator.go:211: creating CSRA e2e-clusters-659l7-create-cluster-xppvw/2nncfyrw70uz0grn3crlb93xmlqjis17tgmw7x9mg85y to trigger automatic approval of the CSR
control_plane_pki_operator.go:218: Successfully waited for CSR "2nncfyrw70uz0grn3crlb93xmlqjis17tgmw7x9mg85y" to be approved and signed in 50ms
control_plane_pki_operator.go:130: validating that the client certificate provides the appropriate access
control_plane_pki_operator.go:116: amending the existing kubeconfig to use break-glass client certificate credentials
control_plane_pki_operator.go:133: issuing SSR to identify the subject we are given using the client certificate
control_plane_pki_operator.go:153: ensuring that the SSR identifies the client certificate as having system:masters power and correct username
control_plane_pki_operator.go:96: revoking the "customer-break-glass" signer
pki.go:76: loading certificate/key pair from disk for signer customer-break-glass, use $REGENERATE_PKI to generate new ones
control_plane_pki_operator.go:253: creating CRR e2e-clusters-659l7-create-cluster-xppvw/2csdcdvaapmpoc8p1ior8mf8vdh2zrc13q9rvsjvj4wx to trigger signer certificate revocation
control_plane_pki_operator.go:260: Successfully waited for CRR e2e-clusters-659l7-create-cluster-xppvw/2csdcdvaapmpoc8p1ior8mf8vdh2zrc13q9rvsjvj4wx to complete in 2m12.05s
control_plane_pki_operator.go:273: creating a client using the a certificate from the revoked signer
control_plane_pki_operator.go:116: amending the existing kubeconfig to use break-glass client certificate credentials
control_plane_pki_operator.go:276: issuing SSR to confirm that we're not authorized to contact the server
control_plane_pki_operator.go:99: ensuring the break-glass credentials from "sre-break-glass" signer still work
pki.go:76: loading certificate/key pair from disk for signer sre-break-glass, use $REGENERATE_PKI to generate new ones
control_plane_pki_operator.go:130: validating that the client certificate provides the appropriate access
control_plane_pki_operator.go:116: amending the existing kubeconfig to use break-glass client certificate credentials
control_plane_pki_operator.go:133: issuing SSR to identify the subject we are given using the client certificate
control_plane_pki_operator.go:153: ensuring that the SSR identifies the client certificate as having system:masters power and correct username
TestCreateCluster/Main/break-glass-credentials/sre-break-glass
0s
TestCreateCluster/Main/break-glass-credentials/sre-break-glass/CSR_flow
0s
pki.go:76: loading certificate/key pair from disk for signer sre-break-glass, use $REGENERATE_PKI to generate new ones
control_plane_pki_operator.go:201: creating CSR "1999kog3ywj7gmdfpu6leb0ivzihy4l164c84v93mr34" for signer "sre-break-glass", requesting client auth usages
control_plane_pki_operator.go:211: creating CSRA e2e-clusters-659l7-create-cluster-xppvw/1999kog3ywj7gmdfpu6leb0ivzihy4l164c84v93mr34 to trigger automatic approval of the CSR
control_plane_pki_operator.go:218: Successfully waited for CSR "1999kog3ywj7gmdfpu6leb0ivzihy4l164c84v93mr34" to be approved and signed in 25ms
control_plane_pki_operator.go:130: validating that the client certificate provides the appropriate access
control_plane_pki_operator.go:116: amending the existing kubeconfig to use break-glass client certificate credentials
control_plane_pki_operator.go:133: issuing SSR to identify the subject we are given using the client certificate
control_plane_pki_operator.go:153: ensuring that the SSR identifies the client certificate as having system:masters power and correct username
TestCreateCluster/Main/break-glass-credentials/sre-break-glass/CSR_flow/invalid_CN_flagged_in_status
0s
pki.go:76: loading certificate/key pair from disk for signer sre-break-glass, use $REGENERATE_PKI to generate new ones
control_plane_pki_operator.go:165: creating invalid CSR "35htfb515fmh77kx2eruzg2e1yv72vedbcdyaixnxxyf" for signer "hypershift.openshift.io/e2e-clusters-659l7-create-cluster-xppvw.sre-break-glass", requesting client auth usages
control_plane_pki_operator.go:175: creating CSRA e2e-clusters-659l7-create-cluster-xppvw/35htfb515fmh77kx2eruzg2e1yv72vedbcdyaixnxxyf to trigger automatic approval of the CSR
control_plane_pki_operator.go:181: Successfully waited for waiting for CSR "35htfb515fmh77kx2eruzg2e1yv72vedbcdyaixnxxyf" to have invalid CN exposed in status in 3.05s
TestCreateCluster/Main/break-glass-credentials/sre-break-glass/CSR_flow/revocation
0s
pki.go:76: loading certificate/key pair from disk for signer sre-break-glass, use $REGENERATE_PKI to generate new ones
control_plane_pki_operator.go:253: creating CRR e2e-clusters-659l7-create-cluster-xppvw/2hsl2gzii4u2jpn9u952hmk41xayor89wcxn8kkm3b0z to trigger signer certificate revocation
control_plane_pki_operator.go:260: Successfully waited for CRR e2e-clusters-659l7-create-cluster-xppvw/2hsl2gzii4u2jpn9u952hmk41xayor89wcxn8kkm3b0z to complete in 2m33.05s
control_plane_pki_operator.go:273: creating a client using the a certificate from the revoked signer
control_plane_pki_operator.go:116: amending the existing kubeconfig to use break-glass client certificate credentials
control_plane_pki_operator.go:276: issuing SSR to confirm that we're not authorized to contact the server
TestCreateCluster/Main/break-glass-credentials/sre-break-glass/direct_fetch
0s
control_plane_pki_operator.go:63: Grabbing customer break-glass credentials from client certificate secret e2e-clusters-659l7-create-cluster-xppvw/sre-system-admin-client-cert-key
control_plane_pki_operator.go:130: validating that the client certificate provides the appropriate access
control_plane_pki_operator.go:116: amending the existing kubeconfig to use break-glass client certificate credentials
control_plane_pki_operator.go:133: issuing SSR to identify the subject we are given using the client certificate
control_plane_pki_operator.go:153: ensuring that the SSR identifies the client certificate as having system:masters power and correct username
TestCreateCluster/ValidateHostedCluster
0s
util.go:261: Successfully waited for kubeconfig to be published for HostedCluster e2e-clusters-659l7/create-cluster-xppvw in 3m9.05s
util.go:278: Successfully waited for kubeconfig secret to have data in 25ms
eventually.go:104: Failed to get *v1.SelfSubjectReview: Post "https://api-create-cluster-xppvw.aks-e2e.hypershift.azure.devcluster.openshift.com:443/apis/authentication.k8s.io/v1/selfsubjectreviews": net/http: TLS handshake timeout
util.go:332: Successfully waited for a successful connection to the guest API server in 57.625s
util.go:515: Successfully waited for 2 nodes to become ready in 11m57.05s
util.go:548: Successfully waited for HostedCluster e2e-clusters-659l7/create-cluster-xppvw to rollout in 5m27.075s
util.go:2724: Successfully waited for HostedCluster e2e-clusters-659l7/create-cluster-xppvw to have valid conditions in 50ms
TestCreateCluster/ValidateHostedCluster/EnsureGuestWebhooksValidated
0s
TestCreateCluster/ValidateHostedCluster/EnsureNoCrashingPods
0s
util.go:261: Successfully waited for kubeconfig to be published for HostedCluster e2e-clusters-659l7/create-cluster-xppvw in 50ms util.go:278: Successfully waited for kubeconfig secret to have data in 25ms
TestCreateCluster/ValidateHostedCluster/EnsureNodeCommunication
0s
util.go:261: Successfully waited for kubeconfig to be published for HostedCluster e2e-clusters-659l7/create-cluster-xppvw in 50ms util.go:278: Successfully waited for kubeconfig secret to have data in 25ms
TestCreateCluster/ValidateHostedCluster/EnsureNodeCountMatchesNodePoolReplicas
0s
util.go:515: Successfully waited for 2 nodes to become ready for NodePool e2e-clusters-659l7/create-cluster-xppvw in 150ms
TestCreateCluster/ValidateHostedCluster/EnsureOAPIMountsTrustBundle
0s
TestCreateClusterCustomConfig
38m40.41s
hypershift_framework.go:396: Successfully created hostedcluster e2e-clusters-65xdx/custom-config-ldkb5 in 3m11s
hypershift_framework.go:457: Destroyed cluster. Namespace: e2e-clusters-65xdx, name: custom-config-ldkb5
hypershift_framework.go:412: archiving /logs/artifacts/TestCreateClusterCustomConfig/hostedcluster-custom-config-ldkb5 to /logs/artifacts/TestCreateClusterCustomConfig/hostedcluster.tar.gz
util.go:261: Successfully waited for kubeconfig to be published for HostedCluster e2e-clusters-65xdx/custom-config-ldkb5 in 2m57.05s
util.go:278: Successfully waited for kubeconfig secret to have data in 25ms
eventually.go:104: Failed to get *v1.SelfSubjectReview: Post "https://api-custom-config-ldkb5.aks-e2e.hypershift.azure.devcluster.openshift.com:443/apis/authentication.k8s.io/v1/selfsubjectreviews": EOF
eventually.go:104: Failed to get *v1.SelfSubjectReview: Post "https://api-custom-config-ldkb5.aks-e2e.hypershift.azure.devcluster.openshift.com:443/apis/authentication.k8s.io/v1/selfsubjectreviews": net/http: TLS handshake timeout
eventually.go:104: Failed to get *v1.SelfSubjectReview: Post "https://api-custom-config-ldkb5.aks-e2e.hypershift.azure.devcluster.openshift.com:443/apis/authentication.k8s.io/v1/selfsubjectreviews": EOF
eventually.go:104: Failed to get *v1.SelfSubjectReview: Post "https://api-custom-config-ldkb5.aks-e2e.hypershift.azure.devcluster.openshift.com:443/apis/authentication.k8s.io/v1/selfsubjectreviews": net/http: TLS handshake timeout
eventually.go:104: Failed to get *v1.SelfSubjectReview: Post "https://api-custom-config-ldkb5.aks-e2e.hypershift.azure.devcluster.openshift.com:443/apis/authentication.k8s.io/v1/selfsubjectreviews": EOF
util.go:332: Successfully waited for a successful connection to the guest API server in 2m20.8s
util.go:515: Successfully waited for 2 nodes to become ready in 11m3.075s
util.go:548: Successfully waited for HostedCluster e2e-clusters-65xdx/custom-config-ldkb5 to rollout in 7m33.05s
util.go:2724: Successfully waited for HostedCluster e2e-clusters-65xdx/custom-config-ldkb5 to have valid conditions in 50ms
TestCreateClusterCustomConfig/EnsureHostedCluster
4.09s
TestCreateClusterCustomConfig/EnsureHostedCluster/EnsureNetworkPolicies
3.5s
TestCreateClusterCustomConfig/EnsureHostedCluster/EnsureNetworkPolicies/EnsureLimitedEgressTrafficToManagementKAS
3.46s
util.go:1069: Connecting to kubernetes endpoint on: https://20.84.194.117:443
TestNodePool
0s
hypershift_framework.go:396: Successfully created hostedcluster e2e-clusters-mvn8v/node-pool-pwqlg in 2m44s
nodepool_test.go:139: tests only supported on platform KubeVirt
hypershift_framework.go:396: Successfully created hostedcluster e2e-clusters-dp2wf/node-pool-gbrdq in 3m8s
util.go:261: Successfully waited for kubeconfig to be published for HostedCluster e2e-clusters-mvn8v/node-pool-pwqlg in 3m27.05s
util.go:278: Successfully waited for kubeconfig secret to have data in 25ms
eventually.go:104: Failed to get *v1.SelfSubjectReview: Post "https://api-node-pool-pwqlg.aks-e2e.hypershift.azure.devcluster.openshift.com:443/apis/authentication.k8s.io/v1/selfsubjectreviews": net/http: TLS handshake timeout
util.go:332: Successfully waited for a successful connection to the guest API server in 1m4.4s
util.go:515: Successfully waited for 0 nodes to become ready in 125ms
util.go:2724: Successfully waited for HostedCluster e2e-clusters-mvn8v/node-pool-pwqlg to have valid conditions in 9m48.075s
util.go:261: Successfully waited for kubeconfig to be published for HostedCluster e2e-clusters-dp2wf/node-pool-gbrdq in 3m12.05s
util.go:278: Successfully waited for kubeconfig secret to have data in 25ms
eventually.go:104: Failed to get *v1.SelfSubjectReview: Post "https://api-node-pool-gbrdq.aks-e2e.hypershift.azure.devcluster.openshift.com:443/apis/authentication.k8s.io/v1/selfsubjectreviews": net/http: TLS handshake timeout
util.go:332: Successfully waited for a successful connection to the guest API server in 57.425s
util.go:515: Successfully waited for 0 nodes to become ready in 75ms
util.go:2724: Successfully waited for HostedCluster e2e-clusters-dp2wf/node-pool-gbrdq to have valid conditions in 8m42.075s
util.go:515: Successfully waited for 0 nodes to become ready for NodePool e2e-clusters-dp2wf/node-pool-gbrdq in 200ms
util.go:261: Successfully waited for kubeconfig to be published for HostedCluster e2e-clusters-dp2wf/node-pool-gbrdq in 50ms
util.go:278: Successfully waited for kubeconfig secret to have data in 50ms
util.go:261: Successfully waited for kubeconfig to be published for HostedCluster e2e-clusters-dp2wf/node-pool-gbrdq in 75ms
util.go:278: Successfully waited for kubeconfig secret to have data in 25ms
util.go:332: Successfully waited for a successful connection to the guest API server in 25ms
nodepool_additionalTrustBundlePropagation_test.go:39: Starting AdditionalTrustBundlePropagationTest.
util.go:515: Successfully waited for 1 nodes to become ready for NodePool e2e-clusters-dp2wf/node-pool-gbrdq-test-additional-trust-bundle-propagation in 10m51.075s
nodepool_test.go:354: Successfully waited for NodePool e2e-clusters-dp2wf/node-pool-gbrdq-test-additional-trust-bundle-propagation to have correct status in 25ms
util.go:515: Successfully waited for 0 nodes to become ready for NodePool e2e-clusters-mvn8v/node-pool-pwqlg in 200ms
util.go:261: Successfully waited for kubeconfig to be published for HostedCluster e2e-clusters-mvn8v/node-pool-pwqlg in 50ms
util.go:278: Successfully waited for kubeconfig secret to have data in 25ms
util.go:261: Successfully waited for kubeconfig to be published for HostedCluster e2e-clusters-mvn8v/node-pool-pwqlg in 75ms
util.go:278: Successfully waited for kubeconfig secret to have data in 25ms
util.go:332: Successfully waited for a successful connection to the guest API server in 50ms
nodepool_kms_root_volume_test.go:40: test only supported on platform AWS
nodepool_autorepair_test.go:43: test only supported on platform AWS
nodepool_machineconfig_test.go:55: Starting test NodePoolMachineconfigRolloutTest
nodepool_nto_machineconfig_test.go:68: Starting test NTOMachineConfigRolloutTest
nodepool_nto_machineconfig_test.go:68: Starting test NTOMachineConfigRolloutTest
util.go:515: Successfully waited for 2 nodes to become ready for NodePool e2e-clusters-mvn8v/node-pool-pwqlg-test-ntomachineconfig-inplace in 14m9.1s
nodepool_test.go:354: Successfully waited for NodePool e2e-clusters-mvn8v/node-pool-pwqlg-test-ntomachineconfig-inplace to have correct status in 25ms
util.go:429: Successfully waited for NodePool e2e-clusters-mvn8v/node-pool-pwqlg-test-ntomachineconfig-inplace to start config update in 45.025s
nodepool_upgrade_test.go:100: starting test NodePoolUpgradeTest
util.go:515: Successfully waited for 1 nodes to become ready for NodePool e2e-clusters-mvn8v/node-pool-pwqlg-test-replaceupgrade in 13m54.15s
nodepool_test.go:354: Successfully waited for NodePool e2e-clusters-mvn8v/node-pool-pwqlg-test-replaceupgrade to have correct status in 3.025s
nodepool_upgrade_test.go:160: Validating all Nodes have the synced labels and taints
nodepool_upgrade_test.go:163: Successfully waited for NodePool e2e-clusters-mvn8v/node-pool-pwqlg-test-replaceupgrade to have version 4.21.0-0.ci-2025-09-15-141109 in 50ms
nodepool_upgrade_test.go:180: Updating NodePool image. Image: registry.build01.ci.openshift.org/ci-op-6wwkqk7j/release@sha256:39d75802e963d1fe1c8746dcbf27a8bdb8cc492bb8eb35f6fca62d9a11a6fea3
nodepool_upgrade_test.go:187: Successfully waited for NodePool e2e-clusters-mvn8v/node-pool-pwqlg-test-replaceupgrade to start the upgrade in 3.05s
nodepool_upgrade_test.go:100: starting test NodePoolUpgradeTest
util.go:515: Successfully waited for 1 nodes to become ready for NodePool e2e-clusters-mvn8v/node-pool-pwqlg-test-inplaceupgrade in 14m3.05s
nodepool_test.go:354: Successfully waited for NodePool e2e-clusters-mvn8v/node-pool-pwqlg-test-inplaceupgrade to have correct status in 3.05s
nodepool_upgrade_test.go:160: Validating all Nodes have the synced labels and taints
nodepool_upgrade_test.go:163: Successfully waited for NodePool e2e-clusters-mvn8v/node-pool-pwqlg-test-inplaceupgrade to have version 4.21.0-0.ci-2025-09-15-141109 in 50ms
nodepool_upgrade_test.go:180: Updating NodePool image. Image: registry.build01.ci.openshift.org/ci-op-6wwkqk7j/release@sha256:39d75802e963d1fe1c8746dcbf27a8bdb8cc492bb8eb35f6fca62d9a11a6fea3
nodepool_upgrade_test.go:187: Successfully waited for NodePool e2e-clusters-mvn8v/node-pool-pwqlg-test-inplaceupgrade to start the upgrade in 3.05s
nodepool_kv_cache_image_test.go:43: test only supported on platform KubeVirt
nodepool_day2_tags_test.go:44: test only supported on platform AWS
nodepool_kv_qos_guaranteed_test.go:44: test only supported on platform KubeVirt
nodepool_kv_jsonpatch_test.go:43: test only supported on platform KubeVirt
nodepool_kv_nodeselector_test.go:49: test only supported on platform KubeVirt
nodepool_kv_multinet_test.go:37: test only supported on platform KubeVirt
nodepool_osp_advanced_test.go:54: Starting test OpenStackAdvancedTest
nodepool_osp_advanced_test.go:57: test only supported on platform OpenStack
nodepool_nto_performanceprofile_test.go:60: Starting test NTOPerformanceProfileTest
util.go:515: Successfully waited for 1 nodes to become ready for NodePool e2e-clusters-mvn8v/node-pool-pwqlg-test-ntoperformanceprofile in 13m57.1s
nodepool_test.go:354: Successfully waited for NodePool e2e-clusters-mvn8v/node-pool-pwqlg-test-ntoperformanceprofile to have correct status in 50ms
nodepool_nto_performanceprofile_test.go:81: Entering NTO PerformanceProfile test
nodepool_nto_performanceprofile_test.go:111: Hosted control plane namespace is e2e-clusters-mvn8v-node-pool-pwqlg
nodepool_nto_performanceprofile_test.go:113: Successfully waited for performance profile ConfigMap to exist with correct name labels and annotations in 3.05s
nodepool_nto_performanceprofile_test.go:160: Successfully waited for performance profile status ConfigMap to exist in 125ms
nodepool_nto_performanceprofile_test.go:202: Successfully waited for performance profile status to be reflected under the NodePool status in 50ms
nodepool_nto_performanceprofile_test.go:255: Deleting configmap reference from nodepool ...
nodepool_nto_performanceprofile_test.go:262: Successfully waited for performance profile ConfigMap to be deleted in 3.025s
nodepool_nto_performanceprofile_test.go:281: Ending NTO PerformanceProfile test: OK
nodepool_test.go:354: Successfully waited for NodePool e2e-clusters-mvn8v/node-pool-pwqlg-test-ntoperformanceprofile to have correct status in 3.05s
nodepool_prev_release_test.go:31: Starting NodePoolPrevReleaseCreateTest.
nodepool_prev_release_test.go:31: Starting NodePoolPrevReleaseCreateTest.
nodepool_mirrorconfigs_test.go:61: Starting test MirrorConfigsTest
nodepool_additionalTrustBundlePropagation_test.go:73: Updating hosted cluster with additional trust bundle. Bundle: additional-trust-bundle
nodepool_additionalTrustBundlePropagation_test.go:81: Successfully waited for Waiting for NodePool e2e-clusters-dp2wf/node-pool-gbrdq-test-additional-trust-bundle-propagation to begin updating in 10.05s
TestNodePool/HostedCluster0
0s
hypershift_framework.go:396: Successfully created hostedcluster e2e-clusters-mvn8v/node-pool-pwqlg in 2m44s
TestNodePool/HostedCluster0/Main
0s
util.go:261: Successfully waited for kubeconfig to be published for HostedCluster e2e-clusters-mvn8v/node-pool-pwqlg in 75ms util.go:278: Successfully waited for kubeconfig secret to have data in 25ms util.go:332: Successfully waited for a successful connection to the guest API server in 50ms
TestNodePool/HostedCluster0/Main/KubeVirtCacheTest
0s
nodepool_kv_cache_image_test.go:43: test only supported on platform KubeVirt
TestNodePool/HostedCluster0/Main/KubeVirtJsonPatchTest
0s
nodepool_kv_jsonpatch_test.go:43: test only supported on platform KubeVirt
TestNodePool/HostedCluster0/Main/KubeVirtNodeMultinetTest
0s
nodepool_kv_multinet_test.go:37: test only supported on platform KubeVirt
TestNodePool/HostedCluster0/Main/KubeVirtNodeSelectorTest
0s
nodepool_kv_nodeselector_test.go:49: test only supported on platform KubeVirt
TestNodePool/HostedCluster0/Main/KubeVirtQoSClassGuaranteedTest
0s
nodepool_kv_qos_guaranteed_test.go:44: test only supported on platform KubeVirt
TestNodePool/HostedCluster0/Main/OpenStackAdvancedTest
0s
nodepool_osp_advanced_test.go:54: Starting test OpenStackAdvancedTest nodepool_osp_advanced_test.go:57: test only supported on platform OpenStack
TestNodePool/HostedCluster0/Main/TestKMSRootVolumeEncryption
0s
nodepool_kms_root_volume_test.go:40: test only supported on platform AWS
TestNodePool/HostedCluster0/Main/TestMirrorConfigs
0s
nodepool_mirrorconfigs_test.go:61: Starting test MirrorConfigsTest
TestNodePool/HostedCluster0/Main/TestNTOMachineConfigAppliedInPlace
0s
nodepool_nto_machineconfig_test.go:68: Starting test NTOMachineConfigRolloutTest
util.go:515: Successfully waited for 2 nodes to become ready for NodePool e2e-clusters-mvn8v/node-pool-pwqlg-test-ntomachineconfig-inplace in 14m9.1s
nodepool_test.go:354: Successfully waited for NodePool e2e-clusters-mvn8v/node-pool-pwqlg-test-ntomachineconfig-inplace to have correct status in 25ms
util.go:429: Successfully waited for NodePool e2e-clusters-mvn8v/node-pool-pwqlg-test-ntomachineconfig-inplace to start config update in 45.025s
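Note: the config rollout above is driven by referencing a ConfigMap (which wraps the MachineConfig manifest) from the NodePool's spec.config list. A sketch of that wiring with a dynamic client, assuming the hypershift.openshift.io/v1beta1 NodePool schema; the helper name and patch shape are illustrative only:

    package e2esketch

    import (
    	"context"
    	"encoding/json"

    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/apimachinery/pkg/runtime/schema"
    	"k8s.io/apimachinery/pkg/types"
    	"k8s.io/client-go/dynamic"
    )

    // nodePoolGVR assumes the hypershift.openshift.io/v1beta1 NodePool resource.
    var nodePoolGVR = schema.GroupVersionResource{
    	Group: "hypershift.openshift.io", Version: "v1beta1", Resource: "nodepools",
    }

    // referenceMachineConfig points a NodePool's spec.config at a ConfigMap that
    // is assumed to wrap a MachineConfig manifest, so the rollout controller picks
    // it up. A merge patch replaces the whole config list, which is fine for a sketch.
    func referenceMachineConfig(ctx context.Context, dc dynamic.Interface, namespace, nodePool, configMap string) error {
    	patch, err := json.Marshal(map[string]interface{}{
    		"spec": map[string]interface{}{
    			"config": []interface{}{map[string]interface{}{"name": configMap}},
    		},
    	})
    	if err != nil {
    		return err
    	}
    	_, err = dc.Resource(nodePoolGVR).Namespace(namespace).Patch(ctx, nodePool, types.MergePatchType, patch, metav1.PatchOptions{})
    	return err
    }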
TestNodePool/HostedCluster0/Main/TestNTOMachineConfigGetsRolledOut
0s
nodepool_nto_machineconfig_test.go:68: Starting test NTOMachineConfigRolloutTest
TestNodePool/HostedCluster0/Main/TestNTOPerformanceProfile
0s
nodepool_nto_performanceprofile_test.go:60: Starting test NTOPerformanceProfileTest
util.go:515: Successfully waited for 1 nodes to become ready for NodePool e2e-clusters-mvn8v/node-pool-pwqlg-test-ntoperformanceprofile in 13m57.1s
nodepool_test.go:354: Successfully waited for NodePool e2e-clusters-mvn8v/node-pool-pwqlg-test-ntoperformanceprofile to have correct status in 50ms
nodepool_nto_performanceprofile_test.go:81: Entering NTO PerformanceProfile test
nodepool_nto_performanceprofile_test.go:111: Hosted control plane namespace is e2e-clusters-mvn8v-node-pool-pwqlg
nodepool_nto_performanceprofile_test.go:113: Successfully waited for performance profile ConfigMap to exist with correct name labels and annotations in 3.05s
nodepool_nto_performanceprofile_test.go:160: Successfully waited for performance profile status ConfigMap to exist in 125ms
nodepool_nto_performanceprofile_test.go:202: Successfully waited for performance profile status to be reflected under the NodePool status in 50ms
nodepool_nto_performanceprofile_test.go:255: Deleting configmap reference from nodepool ...
nodepool_nto_performanceprofile_test.go:262: Successfully waited for performance profile ConfigMap to be deleted in 3.025s
nodepool_nto_performanceprofile_test.go:281: Ending NTO PerformanceProfile test: OK
nodepool_test.go:354: Successfully waited for NodePool e2e-clusters-mvn8v/node-pool-pwqlg-test-ntoperformanceprofile to have correct status in 3.05s
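Note: the final PerformanceProfile steps wait for the mirrored ConfigMap in the hosted control plane namespace to disappear once its reference is removed from the NodePool. A minimal sketch of that wait, assuming a client-go clientset for the management cluster (names and timeout are placeholders):

    package e2esketch

    import (
    	"context"
    	"time"

    	apierrors "k8s.io/apimachinery/pkg/api/errors"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/apimachinery/pkg/util/wait"
    	"k8s.io/client-go/kubernetes"
    )

    // waitForConfigMapDeleted polls until the mirrored performance profile
    // ConfigMap disappears from the hosted control plane namespace after its
    // reference is removed from the NodePool.
    func waitForConfigMapDeleted(ctx context.Context, c kubernetes.Interface, hcpNamespace, name string, timeout time.Duration) error {
    	return wait.PollUntilContextTimeout(ctx, time.Second, timeout, true, func(ctx context.Context) (bool, error) {
    		_, err := c.CoreV1().ConfigMaps(hcpNamespace).Get(ctx, name, metav1.GetOptions{})
    		if apierrors.IsNotFound(err) {
    			return true, nil // gone, as expected
    		}
    		return false, err // err == nil keeps polling; any other error aborts the wait
    	})
    }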
TestNodePool/HostedCluster0/Main/TestNodePoolAutoRepair
0s
nodepool_autorepair_test.go:43: test only supported on platform AWS
TestNodePool/HostedCluster0/Main/TestNodePoolDay2Tags
0s
nodepool_day2_tags_test.go:44: test only supported on platform AWS
TestNodePool/HostedCluster0/Main/TestNodePoolInPlaceUpgrade
0s
nodepool_upgrade_test.go:100: starting test NodePoolUpgradeTest
util.go:515: Successfully waited for 1 nodes to become ready for NodePool e2e-clusters-mvn8v/node-pool-pwqlg-test-inplaceupgrade in 14m3.05s
nodepool_test.go:354: Successfully waited for NodePool e2e-clusters-mvn8v/node-pool-pwqlg-test-inplaceupgrade to have correct status in 3.05s
nodepool_upgrade_test.go:160: Validating all Nodes have the synced labels and taints
nodepool_upgrade_test.go:163: Successfully waited for NodePool e2e-clusters-mvn8v/node-pool-pwqlg-test-inplaceupgrade to have version 4.21.0-0.ci-2025-09-15-141109 in 50ms
nodepool_upgrade_test.go:180: Updating NodePool image. Image: registry.build01.ci.openshift.org/ci-op-6wwkqk7j/release@sha256:39d75802e963d1fe1c8746dcbf27a8bdb8cc492bb8eb35f6fca62d9a11a6fea3
nodepool_upgrade_test.go:187: Successfully waited for NodePool e2e-clusters-mvn8v/node-pool-pwqlg-test-inplaceupgrade to start the upgrade in 3.05s
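Note: the "Updating NodePool image" step amounts to patching the NodePool's release image and letting the upgrade controller drive the rollout. A sketch with a dynamic client, assuming the hypershift.openshift.io/v1beta1 NodePool layout (spec.release.image); the helper is illustrative:

    package e2esketch

    import (
    	"context"
    	"encoding/json"

    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/apimachinery/pkg/runtime/schema"
    	"k8s.io/apimachinery/pkg/types"
    	"k8s.io/client-go/dynamic"
    )

    // updateNodePoolImage patches spec.release.image on a NodePool; the upgrade
    // test then waits for the NodePool status to report the new version.
    func updateNodePoolImage(ctx context.Context, dc dynamic.Interface, namespace, nodePool, image string) error {
    	gvr := schema.GroupVersionResource{Group: "hypershift.openshift.io", Version: "v1beta1", Resource: "nodepools"}
    	patch, err := json.Marshal(map[string]interface{}{
    		"spec": map[string]interface{}{
    			"release": map[string]interface{}{"image": image},
    		},
    	})
    	if err != nil {
    		return err
    	}
    	_, err = dc.Resource(gvr).Namespace(namespace).Patch(ctx, nodePool, types.MergePatchType, patch, metav1.PatchOptions{})
    	return err
    }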
TestNodePool/HostedCluster0/Main/TestNodePoolPrevReleaseN1
0s
nodepool_prev_release_test.go:31: Starting NodePoolPrevReleaseCreateTest.
TestNodePool/HostedCluster0/Main/TestNodePoolPrevReleaseN2
0s
nodepool_prev_release_test.go:31: Starting NodePoolPrevReleaseCreateTest.
TestNodePool/HostedCluster0/Main/TestNodePoolReplaceUpgrade
0s
nodepool_upgrade_test.go:100: starting test NodePoolUpgradeTest
util.go:515: Successfully waited for 1 nodes to become ready for NodePool e2e-clusters-mvn8v/node-pool-pwqlg-test-replaceupgrade in 13m54.15s
nodepool_test.go:354: Successfully waited for NodePool e2e-clusters-mvn8v/node-pool-pwqlg-test-replaceupgrade to have correct status in 3.025s
nodepool_upgrade_test.go:160: Validating all Nodes have the synced labels and taints
nodepool_upgrade_test.go:163: Successfully waited for NodePool e2e-clusters-mvn8v/node-pool-pwqlg-test-replaceupgrade to have version 4.21.0-0.ci-2025-09-15-141109 in 50ms
nodepool_upgrade_test.go:180: Updating NodePool image. Image: registry.build01.ci.openshift.org/ci-op-6wwkqk7j/release@sha256:39d75802e963d1fe1c8746dcbf27a8bdb8cc492bb8eb35f6fca62d9a11a6fea3
nodepool_upgrade_test.go:187: Successfully waited for NodePool e2e-clusters-mvn8v/node-pool-pwqlg-test-replaceupgrade to start the upgrade in 3.05s
TestNodePool/HostedCluster0/Main/TestNodepoolMachineconfigGetsRolledout
0s
nodepool_machineconfig_test.go:55: Starting test NodePoolMachineconfigRolloutTest
TestNodePool/HostedCluster0/Main/TestRollingUpgrade
0s
TestNodePool/HostedCluster0/ValidateHostedCluster
0s
util.go:261: Successfully waited for kubeconfig to be published for HostedCluster e2e-clusters-mvn8v/node-pool-pwqlg in 3m27.05s
util.go:278: Successfully waited for kubeconfig secret to have data in 25ms
eventually.go:104: Failed to get *v1.SelfSubjectReview: Post "https://api-node-pool-pwqlg.aks-e2e.hypershift.azure.devcluster.openshift.com:443/apis/authentication.k8s.io/v1/selfsubjectreviews": net/http: TLS handshake timeout
util.go:332: Successfully waited for a successful connection to the guest API server in 1m4.4s
util.go:515: Successfully waited for 0 nodes to become ready in 125ms
util.go:2724: Successfully waited for HostedCluster e2e-clusters-mvn8v/node-pool-pwqlg to have valid conditions in 9m48.075s
TestNodePool/HostedCluster0/ValidateHostedCluster/EnsureGuestWebhooksValidated
0s
TestNodePool/HostedCluster0/ValidateHostedCluster/EnsureNoCrashingPods
0s
util.go:261: Successfully waited for kubeconfig to be published for HostedCluster e2e-clusters-mvn8v/node-pool-pwqlg in 50ms util.go:278: Successfully waited for kubeconfig secret to have data in 25ms
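Note: EnsureNoCrashingPods boils down to scanning hosted control plane pods for restarts or CrashLoopBackOff. A rough sketch, assuming a clientset for the management cluster and the hosted control plane namespace (helper name and the zero-restart threshold are ours):

    package e2esketch

    import (
    	"context"
    	"fmt"

    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    )

    // findCrashingPods lists pods in the hosted control plane namespace and
    // reports containers that restarted or are stuck in CrashLoopBackOff.
    func findCrashingPods(ctx context.Context, c kubernetes.Interface, hcpNamespace string) ([]string, error) {
    	pods, err := c.CoreV1().Pods(hcpNamespace).List(ctx, metav1.ListOptions{})
    	if err != nil {
    		return nil, err
    	}
    	var crashing []string
    	for _, pod := range pods.Items {
    		for _, cs := range pod.Status.ContainerStatuses {
    			waiting := cs.State.Waiting != nil && cs.State.Waiting.Reason == "CrashLoopBackOff"
    			if cs.RestartCount > 0 || waiting {
    				crashing = append(crashing, fmt.Sprintf("%s/%s (restarts: %d)", pod.Name, cs.Name, cs.RestartCount))
    			}
    		}
    	}
    	return crashing, nil
    }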
TestNodePool/HostedCluster0/ValidateHostedCluster/EnsureNodeCountMatchesNodePoolReplicas
0s
util.go:515: Successfully waited for 0 nodes to become ready for NodePool e2e-clusters-mvn8v/node-pool-pwqlg in 200ms
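Note: the repeated util.go:515 waits count Ready nodes that belong to a given NodePool. A sketch of that count against the guest cluster, assuming nodes carry a hypershift.openshift.io/nodePool label (the label key is an assumption, not confirmed by this log):

    package e2esketch

    import (
    	"context"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    )

    // countReadyNodesForNodePool counts guest-cluster nodes whose Ready condition
    // is True and that carry the NodePool label (label key assumed).
    func countReadyNodesForNodePool(ctx context.Context, guest kubernetes.Interface, nodePool string) (int, error) {
    	nodes, err := guest.CoreV1().Nodes().List(ctx, metav1.ListOptions{
    		LabelSelector: "hypershift.openshift.io/nodePool=" + nodePool, // assumed label key
    	})
    	if err != nil {
    		return 0, err
    	}
    	ready := 0
    	for _, n := range nodes.Items {
    		for _, cond := range n.Status.Conditions {
    			if cond.Type == corev1.NodeReady && cond.Status == corev1.ConditionTrue {
    				ready++
    			}
    		}
    	}
    	return ready, nil
    }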
TestNodePool/HostedCluster0/ValidateHostedCluster/EnsureOAPIMountsTrustBundle
0s
TestNodePool/HostedCluster1
0s
nodepool_test.go:139: tests only supported on platform KubeVirt
TestNodePool/HostedCluster2
0s
hypershift_framework.go:396: Successfully created hostedcluster e2e-clusters-dp2wf/node-pool-gbrdq in 3m8s
TestNodePool/HostedCluster2/Main
0s
util.go:261: Successfully waited for kubeconfig to be published for HostedCluster e2e-clusters-dp2wf/node-pool-gbrdq in 75ms util.go:278: Successfully waited for kubeconfig secret to have data in 25ms util.go:332: Successfully waited for a successful connection to the guest API server in 25ms
TestNodePool/HostedCluster2/Main/TestAdditionalTrustBundlePropagation
0s
nodepool_additionalTrustBundlePropagation_test.go:39: Starting AdditionalTrustBundlePropagationTest. util.go:515: Successfully waited for 1 nodes to become ready for NodePool e2e-clusters-dp2wf/node-pool-gbrdq-test-additional-trust-bundle-propagation in 10m51.075s nodepool_test.go:354: Successfully waited for NodePool e2e-clusters-dp2wf/node-pool-gbrdq-test-additional-trust-bundle-propagation to have correct status in 25ms
TestNodePool/HostedCluster2/Main/TestAdditionalTrustBundlePropagation/AdditionalTrustBundlePropagationTest
0s
nodepool_additionalTrustBundlePropagation_test.go:73: Updating hosted cluster with additional trust bundle. Bundle: additional-trust-bundle nodepool_additionalTrustBundlePropagation_test.go:81: Successfully waited for Waiting for NodePool e2e-clusters-dp2wf/node-pool-gbrdq-test-additional-trust-bundle-propagation to begin updating in 10.05s
TestNodePool/HostedCluster2/ValidateHostedCluster
0s
util.go:261: Successfully waited for kubeconfig to be published for HostedCluster e2e-clusters-dp2wf/node-pool-gbrdq in 3m12.05s
util.go:278: Successfully waited for kubeconfig secret to have data in 25ms
eventually.go:104: Failed to get *v1.SelfSubjectReview: Post "https://api-node-pool-gbrdq.aks-e2e.hypershift.azure.devcluster.openshift.com:443/apis/authentication.k8s.io/v1/selfsubjectreviews": net/http: TLS handshake timeout
util.go:332: Successfully waited for a successful connection to the guest API server in 57.425s
util.go:515: Successfully waited for 0 nodes to become ready in 75ms
util.go:2724: Successfully waited for HostedCluster e2e-clusters-dp2wf/node-pool-gbrdq to have valid conditions in 8m42.075s
TestNodePool/HostedCluster2/ValidateHostedCluster/EnsureGuestWebhooksValidated
0s
TestNodePool/HostedCluster2/ValidateHostedCluster/EnsureNoCrashingPods
0s
util.go:261: Successfully waited for kubeconfig to be published for HostedCluster e2e-clusters-dp2wf/node-pool-gbrdq in 50ms util.go:278: Successfully waited for kubeconfig secret to have data in 50ms
TestNodePool/HostedCluster2/ValidateHostedCluster/EnsureNodeCountMatchesNodePoolReplicas
0s
util.go:515: Successfully waited for 0 nodes to become ready for NodePool e2e-clusters-dp2wf/node-pool-gbrdq in 200ms
TestNodePool/HostedCluster2/ValidateHostedCluster/EnsureOAPIMountsTrustBundle
0s
TestUpgradeControlPlane
0s
control_plane_upgrade_test.go:27: Starting control plane upgrade test. FromImage: registry.build01.ci.openshift.org/ci-op-6wwkqk7j/release@sha256:75e2b0dd38bd935f91cac4daee5478a67d9186ecab5bb21cdbf3a08d39a1c5c9, toImage: registry.build01.ci.openshift.org/ci-op-6wwkqk7j/release@sha256:39d75802e963d1fe1c8746dcbf27a8bdb8cc492bb8eb35f6fca62d9a11a6fea3
hypershift_framework.go:396: Successfully created hostedcluster e2e-clusters-4wssd/control-plane-upgrade-ttbj6 in 2m44s
util.go:261: Successfully waited for kubeconfig to be published for HostedCluster e2e-clusters-4wssd/control-plane-upgrade-ttbj6 in 4m3.075s
util.go:278: Successfully waited for kubeconfig secret to have data in 50ms
eventually.go:104: Failed to get *v1.SelfSubjectReview: Post "https://api-control-plane-upgrade-ttbj6.aks-e2e.hypershift.azure.devcluster.openshift.com:443/apis/authentication.k8s.io/v1/selfsubjectreviews": net/http: TLS handshake timeout
util.go:332: Successfully waited for a successful connection to the guest API server in 27.575s
util.go:515: Successfully waited for 2 nodes to become ready in 12m3.075s
util.go:548: Successfully waited for HostedCluster e2e-clusters-4wssd/control-plane-upgrade-ttbj6 to rollout in 5m15.05s
util.go:2724: Successfully waited for HostedCluster e2e-clusters-4wssd/control-plane-upgrade-ttbj6 to have valid conditions in 50ms
util.go:515: Successfully waited for 2 nodes to become ready for NodePool e2e-clusters-4wssd/control-plane-upgrade-ttbj6 in 200ms
util.go:261: Successfully waited for kubeconfig to be published for HostedCluster e2e-clusters-4wssd/control-plane-upgrade-ttbj6 in 50ms
util.go:278: Successfully waited for kubeconfig secret to have data in 25ms
util.go:261: Successfully waited for kubeconfig to be published for HostedCluster e2e-clusters-4wssd/control-plane-upgrade-ttbj6 in 50ms
util.go:278: Successfully waited for kubeconfig secret to have data in 25ms
util.go:261: Successfully waited for kubeconfig to be published for HostedCluster e2e-clusters-4wssd/control-plane-upgrade-ttbj6 in 50ms
util.go:278: Successfully waited for kubeconfig secret to have data in 25ms
util.go:332: Successfully waited for a successful connection to the guest API server in 50ms
control_plane_upgrade_test.go:49: Updating cluster image. Image: registry.build01.ci.openshift.org/ci-op-6wwkqk7j/release@sha256:39d75802e963d1fe1c8746dcbf27a8bdb8cc492bb8eb35f6fca62d9a11a6fea3
util.go:548: Successfully waited for HostedCluster e2e-clusters-4wssd/control-plane-upgrade-ttbj6 to rollout in 45.075s
util.go:588: Successfully waited for control plane components to complete rollout in 13m0.075s
control_plane_upgrade_test.go:107: Validation passed
util.go:515: Successfully waited for 2 nodes to become ready for NodePool e2e-clusters-4wssd/control-plane-upgrade-ttbj6 in 100ms
util.go:261: Successfully waited for kubeconfig to be published for HostedCluster e2e-clusters-4wssd/control-plane-upgrade-ttbj6 in 50ms
util.go:278: Successfully waited for kubeconfig secret to have data in 25ms
util.go:2999: Successfully waited for HostedCluster e2e-clusters-4wssd/control-plane-upgrade-ttbj6 to have valid Status.Payload in 75ms
util.go:797: error: non-fatal, observed FailedScheduling or Preempted event: 0/7 nodes are available: 1 Insufficient cpu, 1 Insufficient memory, 6 node(s) didn't match pod anti-affinity rules. preemption: not eligible due to preemptionPolicy=Never.
util.go:797: error: non-fatal, observed FailedScheduling or Preempted event: 0/7 nodes are available: 1 Insufficient cpu, 1 Insufficient memory, 6 node(s) didn't match pod anti-affinity rules. preemption: not eligible due to preemptionPolicy=Never.
util.go:797: error: non-fatal, observed FailedScheduling or Preempted event: 0/7 nodes are available: 1 Insufficient cpu, 1 Insufficient memory, 6 node(s) didn't match pod anti-affinity rules. preemption: not eligible due to preemptionPolicy=Never.
util.go:797: error: non-fatal, observed FailedScheduling or Preempted event: 0/7 nodes are available: 1 Insufficient memory, 1 node(s) had untolerated taint {ToBeDeletedByClusterAutoscaler: 1757979165}, 5 node(s) didn't match pod anti-affinity rules. preemption: not eligible due to preemptionPolicy=Never.
util.go:797: error: non-fatal, observed FailedScheduling or Preempted event: 0/7 nodes are available: 1 Insufficient memory, 1 node(s) had untolerated taint {ToBeDeletedByClusterAutoscaler: 1757979165}, 5 node(s) didn't match pod anti-affinity rules. preemption: not eligible due to preemptionPolicy=Never.
util.go:797: error: non-fatal, observed FailedScheduling or Preempted event: 0/6 nodes are available: 1 Insufficient memory, 5 node(s) didn't match pod anti-affinity rules. preemption: not eligible due to preemptionPolicy=Never.
util.go:797: error: non-fatal, observed FailedScheduling or Preempted event: 0/6 nodes are available: 1 Insufficient memory, 1 Too many pods, 4 node(s) didn't match pod anti-affinity rules. preemption: not eligible due to preemptionPolicy=Never.
util.go:797: error: non-fatal, observed FailedScheduling or Preempted event: 0/6 nodes are available: 1 Insufficient memory, 5 node(s) didn't match pod anti-affinity rules. preemption: not eligible due to preemptionPolicy=Never.
util.go:797: error: non-fatal, observed FailedScheduling or Preempted event: 0/6 nodes are available: 1 Insufficient memory, 1 Too many pods, 4 node(s) didn't match pod anti-affinity rules. preemption: not eligible due to preemptionPolicy=Never.
util.go:797: error: non-fatal, observed FailedScheduling or Preempted event: 0/6 nodes are available: 1 Insufficient memory, 1 Too many pods, 4 node(s) didn't match pod anti-affinity rules. preemption: not eligible due to preemptionPolicy=Never.
util.go:797: error: non-fatal, observed FailedScheduling or Preempted event: 0/6 nodes are available: 1 Insufficient memory, 1 Too many pods, 4 node(s) didn't match pod anti-affinity rules. preemption: not eligible due to preemptionPolicy=Never.
util.go:797: error: non-fatal, observed FailedScheduling or Preempted event: 0/6 nodes are available: 1 Insufficient memory, 5 node(s) didn't match pod anti-affinity rules. preemption: not eligible due to preemptionPolicy=Never.
util.go:797: error: non-fatal, observed FailedScheduling or Preempted event: 0/6 nodes are available: 1 Insufficient memory, 1 Too many pods, 4 node(s) didn't match pod anti-affinity rules. preemption: not eligible due to preemptionPolicy=Never.
util.go:797: error: non-fatal, observed FailedScheduling or Preempted event: 0/7 nodes are available: 1 Insufficient memory, 1 node(s) had untolerated taint {ToBeDeletedByClusterAutoscaler: 1757979165}, 5 node(s) didn't match pod anti-affinity rules. preemption: not eligible due to preemptionPolicy=Never.
util.go:797: error: non-fatal, observed FailedScheduling or Preempted event: 0/6 nodes are available: 6 node(s) didn't match pod anti-affinity rules. preemption: not eligible due to preemptionPolicy=Never.
util.go:797: error: non-fatal, observed FailedScheduling or Preempted event: 0/6 nodes are available: 6 node(s) didn't match pod anti-affinity rules. preemption: not eligible due to preemptionPolicy=Never.
util.go:797: error: non-fatal, observed FailedScheduling or Preempted event: 0/6 nodes are available: 6 node(s) didn't match pod anti-affinity rules. preemption: not eligible due to preemptionPolicy=Never.
util.go:797: error: non-fatal, observed FailedScheduling or Preempted event: 0/6 nodes are available: 6 node(s) didn't match pod anti-affinity rules. preemption: not eligible due to preemptionPolicy=Never.
util.go:797: error: non-fatal, observed FailedScheduling or Preempted event: 0/7 nodes are available: 1 Insufficient memory, 1 node(s) had untolerated taint {ToBeDeletedByClusterAutoscaler: 1757979165}, 5 node(s) didn't match pod anti-affinity rules. preemption: not eligible due to preemptionPolicy=Never.
util.go:1069: Connecting to kubernetes endpoint on: https://20.84.194.117:443
TestUpgradeControlPlane/EnsureHostedCluster
0s
TestUpgradeControlPlane/EnsureHostedCluster/EnsureAllContainersHavePullPolicyIfNotPresent
0s
TestUpgradeControlPlane/EnsureHostedCluster/EnsureAllContainersHaveTerminationMessagePolicyFallbackToLogsOnError
0s
TestUpgradeControlPlane/EnsureHostedCluster/EnsureAllRoutesUseHCPRouter
0s
TestUpgradeControlPlane/EnsureHostedCluster/EnsureHCPContainersHaveResourceRequests
0s
TestUpgradeControlPlane/EnsureHostedCluster/EnsureNetworkPolicies
0s
TestUpgradeControlPlane/EnsureHostedCluster/EnsureNetworkPolicies/EnsureComponentsHaveNeedManagementKASAccessLabel
0s
TestUpgradeControlPlane/EnsureHostedCluster/EnsureNetworkPolicies/EnsureLimitedEgressTrafficToManagementKAS
0s
util.go:1069: Connecting to kubernetes endpoint on: https://20.84.194.117:443
TestUpgradeControlPlane/EnsureHostedCluster/EnsureNoPodsWithTooHighPriority
0s
TestUpgradeControlPlane/EnsureHostedCluster/EnsureNoRapidDeploymentRollouts
0s
TestUpgradeControlPlane/EnsureHostedCluster/EnsurePayloadArchSetCorrectly
0s
util.go:2999: Successfully waited for HostedCluster e2e-clusters-4wssd/control-plane-upgrade-ttbj6 to have valid Status.Payload in 75ms
TestUpgradeControlPlane/EnsureHostedCluster/EnsurePodsWithEmptyDirPVsHaveSafeToEvictAnnotations
0s
TestUpgradeControlPlane/EnsureHostedCluster/EnsureReadOnlyRootFilesystem
0s
TestUpgradeControlPlane/EnsureHostedCluster/EnsureReadOnlyRootFilesystemTmpDirMount
0s
TestUpgradeControlPlane/EnsureHostedCluster/NoticePreemptionOrFailedScheduling
0s
util.go:797: error: non-fatal, observed FailedScheduling or Preempted event: 0/7 nodes are available: 1 Insufficient cpu, 1 Insufficient memory, 6 node(s) didn't match pod anti-affinity rules. preemption: not eligible due to preemptionPolicy=Never.
util.go:797: error: non-fatal, observed FailedScheduling or Preempted event: 0/7 nodes are available: 1 Insufficient cpu, 1 Insufficient memory, 6 node(s) didn't match pod anti-affinity rules. preemption: not eligible due to preemptionPolicy=Never.
util.go:797: error: non-fatal, observed FailedScheduling or Preempted event: 0/7 nodes are available: 1 Insufficient cpu, 1 Insufficient memory, 6 node(s) didn't match pod anti-affinity rules. preemption: not eligible due to preemptionPolicy=Never.
util.go:797: error: non-fatal, observed FailedScheduling or Preempted event: 0/7 nodes are available: 1 Insufficient memory, 1 node(s) had untolerated taint {ToBeDeletedByClusterAutoscaler: 1757979165}, 5 node(s) didn't match pod anti-affinity rules. preemption: not eligible due to preemptionPolicy=Never.
util.go:797: error: non-fatal, observed FailedScheduling or Preempted event: 0/7 nodes are available: 1 Insufficient memory, 1 node(s) had untolerated taint {ToBeDeletedByClusterAutoscaler: 1757979165}, 5 node(s) didn't match pod anti-affinity rules. preemption: not eligible due to preemptionPolicy=Never.
util.go:797: error: non-fatal, observed FailedScheduling or Preempted event: 0/6 nodes are available: 1 Insufficient memory, 5 node(s) didn't match pod anti-affinity rules. preemption: not eligible due to preemptionPolicy=Never.
util.go:797: error: non-fatal, observed FailedScheduling or Preempted event: 0/6 nodes are available: 1 Insufficient memory, 1 Too many pods, 4 node(s) didn't match pod anti-affinity rules. preemption: not eligible due to preemptionPolicy=Never.
util.go:797: error: non-fatal, observed FailedScheduling or Preempted event: 0/6 nodes are available: 1 Insufficient memory, 5 node(s) didn't match pod anti-affinity rules. preemption: not eligible due to preemptionPolicy=Never.
util.go:797: error: non-fatal, observed FailedScheduling or Preempted event: 0/6 nodes are available: 1 Insufficient memory, 1 Too many pods, 4 node(s) didn't match pod anti-affinity rules. preemption: not eligible due to preemptionPolicy=Never.
util.go:797: error: non-fatal, observed FailedScheduling or Preempted event: 0/6 nodes are available: 1 Insufficient memory, 1 Too many pods, 4 node(s) didn't match pod anti-affinity rules. preemption: not eligible due to preemptionPolicy=Never.
util.go:797: error: non-fatal, observed FailedScheduling or Preempted event: 0/6 nodes are available: 1 Insufficient memory, 1 Too many pods, 4 node(s) didn't match pod anti-affinity rules. preemption: not eligible due to preemptionPolicy=Never.
util.go:797: error: non-fatal, observed FailedScheduling or Preempted event: 0/6 nodes are available: 1 Insufficient memory, 5 node(s) didn't match pod anti-affinity rules. preemption: not eligible due to preemptionPolicy=Never.
util.go:797: error: non-fatal, observed FailedScheduling or Preempted event: 0/6 nodes are available: 1 Insufficient memory, 1 Too many pods, 4 node(s) didn't match pod anti-affinity rules. preemption: not eligible due to preemptionPolicy=Never.
util.go:797: error: non-fatal, observed FailedScheduling or Preempted event: 0/7 nodes are available: 1 Insufficient memory, 1 node(s) had untolerated taint {ToBeDeletedByClusterAutoscaler: 1757979165}, 5 node(s) didn't match pod anti-affinity rules. preemption: not eligible due to preemptionPolicy=Never.
util.go:797: error: non-fatal, observed FailedScheduling or Preempted event: 0/6 nodes are available: 6 node(s) didn't match pod anti-affinity rules. preemption: not eligible due to preemptionPolicy=Never.
util.go:797: error: non-fatal, observed FailedScheduling or Preempted event: 0/6 nodes are available: 6 node(s) didn't match pod anti-affinity rules. preemption: not eligible due to preemptionPolicy=Never.
util.go:797: error: non-fatal, observed FailedScheduling or Preempted event: 0/6 nodes are available: 6 node(s) didn't match pod anti-affinity rules. preemption: not eligible due to preemptionPolicy=Never.
util.go:797: error: non-fatal, observed FailedScheduling or Preempted event: 0/6 nodes are available: 6 node(s) didn't match pod anti-affinity rules. preemption: not eligible due to preemptionPolicy=Never.
util.go:797: error: non-fatal, observed FailedScheduling or Preempted event: 0/7 nodes are available: 1 Insufficient memory, 1 node(s) had untolerated taint {ToBeDeletedByClusterAutoscaler: 1757979165}, 5 node(s) didn't match pod anti-affinity rules. preemption: not eligible due to preemptionPolicy=Never.
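Note: NoticePreemptionOrFailedScheduling only reports these events; it does not fail the test. A minimal sketch of such a scan, assuming a clientset scoped to the hosted control plane namespace (helper name is ours):

    package e2esketch

    import (
    	"context"
    	"fmt"

    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    )

    // noticePreemptionOrFailedScheduling scans events in the hosted control plane
    // namespace and logs FailedScheduling / Preempted events without failing.
    func noticePreemptionOrFailedScheduling(ctx context.Context, c kubernetes.Interface, hcpNamespace string) error {
    	events, err := c.CoreV1().Events(hcpNamespace).List(ctx, metav1.ListOptions{})
    	if err != nil {
    		return err
    	}
    	for _, ev := range events.Items {
    		if ev.Reason == "FailedScheduling" || ev.Reason == "Preempted" {
    			fmt.Printf("error: non-fatal, observed FailedScheduling or Preempted event: %s\n", ev.Message)
    		}
    	}
    	return nil
    }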
TestUpgradeControlPlane/Main
0s
util.go:261: Successfully waited for kubeconfig to be published for HostedCluster e2e-clusters-4wssd/control-plane-upgrade-ttbj6 in 50ms
util.go:278: Successfully waited for kubeconfig secret to have data in 25ms
util.go:332: Successfully waited for a successful connection to the guest API server in 50ms
control_plane_upgrade_test.go:49: Updating cluster image. Image: registry.build01.ci.openshift.org/ci-op-6wwkqk7j/release@sha256:39d75802e963d1fe1c8746dcbf27a8bdb8cc492bb8eb35f6fca62d9a11a6fea3
util.go:548: Successfully waited for HostedCluster e2e-clusters-4wssd/control-plane-upgrade-ttbj6 to rollout in 45.075s
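Note: the control plane upgrade itself is driven by bumping the HostedCluster's release image (control_plane_upgrade_test.go:49) and then waiting for the rollout. A sketch with a dynamic client, assuming the hypershift.openshift.io/v1beta1 HostedCluster layout (spec.release.image); the helper is illustrative:

    package e2esketch

    import (
    	"context"
    	"encoding/json"

    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/apimachinery/pkg/runtime/schema"
    	"k8s.io/apimachinery/pkg/types"
    	"k8s.io/client-go/dynamic"
    )

    // updateHostedClusterImage bumps spec.release.image on a HostedCluster; the
    // test then waits for the control plane to converge on the new payload.
    func updateHostedClusterImage(ctx context.Context, dc dynamic.Interface, namespace, name, image string) error {
    	gvr := schema.GroupVersionResource{Group: "hypershift.openshift.io", Version: "v1beta1", Resource: "hostedclusters"}
    	patch, err := json.Marshal(map[string]interface{}{
    		"spec": map[string]interface{}{
    			"release": map[string]interface{}{"image": image},
    		},
    	})
    	if err != nil {
    		return err
    	}
    	_, err = dc.Resource(gvr).Namespace(namespace).Patch(ctx, name, types.MergePatchType, patch, metav1.PatchOptions{})
    	return err
    }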
TestUpgradeControlPlane/Main/EnsureMachineDeploymentGeneration
0s
TestUpgradeControlPlane/Main/EnsureNoCrashingPods
0s
util.go:261: Successfully waited for kubeconfig to be published for HostedCluster e2e-clusters-4wssd/control-plane-upgrade-ttbj6 in 50ms util.go:278: Successfully waited for kubeconfig secret to have data in 25ms
TestUpgradeControlPlane/Main/EnsureNodeCountMatchesNodePoolReplicas
0s
util.go:515: Successfully waited for 2 nodes to become ready for NodePool e2e-clusters-4wssd/control-plane-upgrade-ttbj6 in 100ms
TestUpgradeControlPlane/Main/Verifying_featureGate_status_has_entries_for_the_same_versions_as_clusterVersion
0s
control_plane_upgrade_test.go:107: Validation passed
TestUpgradeControlPlane/Main/Wait_for_control_plane_components_to_complete_rollout
0s
util.go:588: Successfully waited for control plane components to complete rollout in 13m0.075s
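Note: util.go:588's wait can be approximated by checking that every Deployment in the hosted control plane namespace has converged on its latest generation. A simplified sketch under that assumption (real rollout checks also cover StatefulSets and component-level conditions):

    package e2esketch

    import (
    	"context"
    	"time"

    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/apimachinery/pkg/util/wait"
    	"k8s.io/client-go/kubernetes"
    )

    // waitForControlPlaneDeployments polls until every Deployment in the hosted
    // control plane namespace is at its latest generation with all replicas
    // updated and available.
    func waitForControlPlaneDeployments(ctx context.Context, c kubernetes.Interface, hcpNamespace string, timeout time.Duration) error {
    	return wait.PollUntilContextTimeout(ctx, 10*time.Second, timeout, true, func(ctx context.Context) (bool, error) {
    		deps, err := c.AppsV1().Deployments(hcpNamespace).List(ctx, metav1.ListOptions{})
    		if err != nil {
    			return false, nil // tolerate transient list errors and retry
    		}
    		for _, d := range deps.Items {
    			want := int32(1)
    			if d.Spec.Replicas != nil {
    				want = *d.Spec.Replicas
    			}
    			if d.Status.ObservedGeneration < d.Generation ||
    				d.Status.UpdatedReplicas < want ||
    				d.Status.AvailableReplicas < want {
    				return false, nil
    			}
    		}
    		return true, nil
    	})
    }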
TestUpgradeControlPlane/ValidateHostedCluster
0s
util.go:261: Successfully waited for kubeconfig to be published for HostedCluster e2e-clusters-4wssd/control-plane-upgrade-ttbj6 in 4m3.075s
util.go:278: Successfully waited for kubeconfig secret to have data in 50ms
eventually.go:104: Failed to get *v1.SelfSubjectReview: Post "https://api-control-plane-upgrade-ttbj6.aks-e2e.hypershift.azure.devcluster.openshift.com:443/apis/authentication.k8s.io/v1/selfsubjectreviews": net/http: TLS handshake timeout
util.go:332: Successfully waited for a successful connection to the guest API server in 27.575s
util.go:515: Successfully waited for 2 nodes to become ready in 12m3.075s
util.go:548: Successfully waited for HostedCluster e2e-clusters-4wssd/control-plane-upgrade-ttbj6 to rollout in 5m15.05s
util.go:2724: Successfully waited for HostedCluster e2e-clusters-4wssd/control-plane-upgrade-ttbj6 to have valid conditions in 50ms
TestUpgradeControlPlane/ValidateHostedCluster/EnsureGuestWebhooksValidated
0s
TestUpgradeControlPlane/ValidateHostedCluster/EnsureNoCrashingPods
0s
util.go:261: Successfully waited for kubeconfig to be published for HostedCluster e2e-clusters-4wssd/control-plane-upgrade-ttbj6 in 50ms util.go:278: Successfully waited for kubeconfig secret to have data in 25ms
TestUpgradeControlPlane/ValidateHostedCluster/EnsureNodeCommunication
0s
util.go:261: Successfully waited for kubeconfig to be published for HostedCluster e2e-clusters-4wssd/control-plane-upgrade-ttbj6 in 50ms util.go:278: Successfully waited for kubeconfig secret to have data in 25ms
TestUpgradeControlPlane/ValidateHostedCluster/EnsureNodeCountMatchesNodePoolReplicas
0s
util.go:515: Successfully waited for 2 nodes to become ready for NodePool e2e-clusters-4wssd/control-plane-upgrade-ttbj6 in 200ms
TestUpgradeControlPlane/ValidateHostedCluster/EnsureOAPIMountsTrustBundle
0s