PR #6965 - 11-10 22:00

Job: hypershift
FAILURE

Test Summary

Total Tests: 222
Passed: 91
Failed: 116
Skipped: 15

Failed Tests

TestAutoscaling
0s
hypershift_framework.go:423: Successfully created hostedcluster e2e-clusters-p2lvg/autoscaling-t277t in 2m27s
util.go:281: Successfully waited for kubeconfig to be published for HostedCluster e2e-clusters-p2lvg/autoscaling-t277t in 2m33.075s
util.go:298: Successfully waited for kubeconfig secret to have data in 50ms
eventually.go:104: Failed to get *v1.SelfSubjectReview: Post "https://api-autoscaling-t277t.aks-e2e.hypershift.azure.devcluster.openshift.com:443/apis/authentication.k8s.io/v1/selfsubjectreviews": net/http: TLS handshake timeout
eventually.go:104: Failed to get *v1.SelfSubjectReview: Post "https://api-autoscaling-t277t.aks-e2e.hypershift.azure.devcluster.openshift.com:443/apis/authentication.k8s.io/v1/selfsubjectreviews": EOF
eventually.go:104: Failed to get *v1.SelfSubjectReview: Post "https://api-autoscaling-t277t.aks-e2e.hypershift.azure.devcluster.openshift.com:443/apis/authentication.k8s.io/v1/selfsubjectreviews": net/http: TLS handshake timeout
util.go:360: Successfully waited for a successful connection to the guest API server in 30.475s
util.go:542: Successfully waited for 1 nodes to become ready in 7m33.05s
util.go:575: Successfully waited for HostedCluster e2e-clusters-p2lvg/autoscaling-t277t to rollout in 3m33.075s
util.go:2836: Successfully waited for HostedCluster e2e-clusters-p2lvg/autoscaling-t277t to have valid conditions in 50ms
util.go:542: Successfully waited for 1 nodes to become ready for NodePool e2e-clusters-p2lvg/autoscaling-t277t in 125ms
util.go:281: Successfully waited for kubeconfig to be published for HostedCluster e2e-clusters-p2lvg/autoscaling-t277t in 50ms
util.go:298: Successfully waited for kubeconfig secret to have data in 50ms
util.go:281: Successfully waited for kubeconfig to be published for HostedCluster e2e-clusters-p2lvg/autoscaling-t277t in 50ms
util.go:298: Successfully waited for kubeconfig secret to have data in 50ms
util.go:3971: Successfully validated configuration authentication status consistency across HCP, HC, and guest cluster
util.go:281: Successfully waited for kubeconfig to be published for HostedCluster e2e-clusters-p2lvg/autoscaling-t277t in 50ms
util.go:298: Successfully waited for kubeconfig secret to have data in 50ms
util.go:360: Successfully waited for a successful connection to the guest API server in 25ms
util.go:542: Successfully waited for 1 nodes to become ready in 50ms
autoscaling_test.go:107: Enabled autoscaling. Namespace: e2e-clusters-p2lvg, name: autoscaling-t277t, min: 1, max: 3
autoscaling_test.go:126: Created workload. Node: autoscaling-t277t-wlgdn-tb2qt, memcapacity: 15221256Ki
util.go:542: Successfully waited for 3 nodes to become ready in 7m15.05s
autoscaling_test.go:146: Deleted workload
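Note: autoscaling_test.go:107 reports the NodePool being switched to autoscaling with min: 1, max: 3, and the workload created at autoscaling_test.go:126 then drives the scale-up to 3 nodes. A minimal sketch of that kind of update against the hypershift.openshift.io/v1beta1 NodePool API (the helper name, import path, and controller-runtime client are illustrative assumptions, not the e2e suite's actual code):

    package e2esketch

    import (
        "context"

        hyperv1 "github.com/openshift/hypershift/api/hypershift/v1beta1"
        crclient "sigs.k8s.io/controller-runtime/pkg/client"
    )

    // enableAutoscaling sketches the change the test logs: drop the fixed
    // replica count and set the min/max bounds the autoscaler may scale
    // between. Replicas and AutoScaling are treated as mutually exclusive.
    func enableAutoscaling(ctx context.Context, c crclient.Client, np *hyperv1.NodePool) error {
        np.Spec.Replicas = nil
        np.Spec.AutoScaling = &hyperv1.NodePoolAutoScaling{Min: 1, Max: 3}
        return c.Update(ctx, np)
    }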
TestAutoscaling/Main
0s
TestAutoscaling/Main/TestAutoscaling
0s
util.go:281: Successfully waited for kubeconfig to be published for HostedCluster e2e-clusters-p2lvg/autoscaling-t277t in 50ms util.go:298: Successfully waited for kubeconfig secret to have data in 50ms util.go:360: Successfully waited for a successful connection to the guest API server in 25ms util.go:542: Successfully waited for 1 nodes to become ready in 50ms autoscaling_test.go:107: Enabled autoscaling. Namespace: e2e-clusters-p2lvg, name: autoscaling-t277t, min: 1, max: 3 autoscaling_test.go:126: Created workload. Node: autoscaling-t277t-wlgdn-tb2qt, memcapacity: 15221256Ki util.go:542: Successfully waited for 3 nodes to become ready in 7m15.05s autoscaling_test.go:146: Deleted workload
TestAutoscaling/ValidateHostedCluster
0s
util.go:281: Successfully waited for kubeconfig to be published for HostedCluster e2e-clusters-p2lvg/autoscaling-t277t in 2m33.075s util.go:298: Successfully waited for kubeconfig secret to have data in 50ms eventually.go:104: Failed to get *v1.SelfSubjectReview: Post "https://api-autoscaling-t277t.aks-e2e.hypershift.azure.devcluster.openshift.com:443/apis/authentication.k8s.io/v1/selfsubjectreviews": net/http: TLS handshake timeout eventually.go:104: Failed to get *v1.SelfSubjectReview: Post "https://api-autoscaling-t277t.aks-e2e.hypershift.azure.devcluster.openshift.com:443/apis/authentication.k8s.io/v1/selfsubjectreviews": EOF eventually.go:104: Failed to get *v1.SelfSubjectReview: Post "https://api-autoscaling-t277t.aks-e2e.hypershift.azure.devcluster.openshift.com:443/apis/authentication.k8s.io/v1/selfsubjectreviews": net/http: TLS handshake timeout util.go:360: Successfully waited for a successful connection to the guest API server in 30.475s util.go:542: Successfully waited for 1 nodes to become ready in 7m33.05s util.go:575: Successfully waited for HostedCluster e2e-clusters-p2lvg/autoscaling-t277t to rollout in 3m33.075s util.go:2836: Successfully waited for HostedCluster e2e-clusters-p2lvg/autoscaling-t277t to have valid conditions in 50ms
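Note: the eventually.go:104 failures above are transient; the same Post against .../selfsubjectreviews eventually succeeds once the guest API server endpoint settles (util.go:360). A minimal sketch of that style of poll using plain client-go and apimachinery wait helpers (the interval and timeout values are illustrative assumptions, not the suite's settings):

    package e2esketch

    import (
        "context"
        "time"

        authv1 "k8s.io/api/authentication/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/util/wait"
        "k8s.io/client-go/kubernetes"
    )

    // waitForGuestAPI polls the guest API server by posting a SelfSubjectReview,
    // treating any error (EOF, TLS handshake timeout, ...) as "not ready yet"
    // and retrying until the request succeeds or the timeout expires.
    func waitForGuestAPI(ctx context.Context, guest kubernetes.Interface) error {
        return wait.PollUntilContextTimeout(ctx, 5*time.Second, 10*time.Minute, true,
            func(ctx context.Context) (bool, error) {
                _, err := guest.AuthenticationV1().SelfSubjectReviews().
                    Create(ctx, &authv1.SelfSubjectReview{}, metav1.CreateOptions{})
                return err == nil, nil // swallow transient errors and keep polling
            })
    }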
TestAutoscaling/ValidateHostedCluster/EnsureGuestWebhooksValidated
0s
TestAutoscaling/ValidateHostedCluster/EnsureNoCrashingPods
0s
util.go:281: Successfully waited for kubeconfig to be published for HostedCluster e2e-clusters-p2lvg/autoscaling-t277t in 50ms util.go:298: Successfully waited for kubeconfig secret to have data in 50ms
TestAutoscaling/ValidateHostedCluster/EnsureNodeCommunication
0s
util.go:281: Successfully waited for kubeconfig to be published for HostedCluster e2e-clusters-p2lvg/autoscaling-t277t in 50ms util.go:298: Successfully waited for kubeconfig secret to have data in 50ms
TestAutoscaling/ValidateHostedCluster/EnsureNodeCountMatchesNodePoolReplicas
0s
util.go:542: Successfully waited for 1 nodes to become ready for NodePool e2e-clusters-p2lvg/autoscaling-t277t in 125ms
TestAutoscaling/ValidateHostedCluster/EnsureOAPIMountsTrustBundle
0s
TestAutoscaling/ValidateHostedCluster/ValidateConfigurationStatus
0s
util.go:3971: Successfully validated configuration authentication status consistency across HCP, HC, and guest cluster
TestAzureScheduler
0s
hypershift_framework.go:423: Successfully created hostedcluster e2e-clusters-zrfcc/azure-scheduler-rqxhv in 2m32s
util.go:281: Successfully waited for kubeconfig to be published for HostedCluster e2e-clusters-zrfcc/azure-scheduler-rqxhv in 2m36.125s
util.go:298: Successfully waited for kubeconfig secret to have data in 50ms
eventually.go:104: Failed to get *v1.SelfSubjectReview: Post "https://api-azure-scheduler-rqxhv.aks-e2e.hypershift.azure.devcluster.openshift.com:443/apis/authentication.k8s.io/v1/selfsubjectreviews": EOF
eventually.go:104: Failed to get *v1.SelfSubjectReview: Post "https://api-azure-scheduler-rqxhv.aks-e2e.hypershift.azure.devcluster.openshift.com:443/apis/authentication.k8s.io/v1/selfsubjectreviews": net/http: TLS handshake timeout
eventually.go:104: Failed to get *v1.SelfSubjectReview: Post "https://api-azure-scheduler-rqxhv.aks-e2e.hypershift.azure.devcluster.openshift.com:443/apis/authentication.k8s.io/v1/selfsubjectreviews": EOF
eventually.go:104: Failed to get *v1.SelfSubjectReview: Post "https://api-azure-scheduler-rqxhv.aks-e2e.hypershift.azure.devcluster.openshift.com:443/apis/authentication.k8s.io/v1/selfsubjectreviews": net/http: TLS handshake timeout
util.go:360: Successfully waited for a successful connection to the guest API server in 30.5s
util.go:542: Successfully waited for 2 nodes to become ready in 9m0.075s
util.go:575: Successfully waited for HostedCluster e2e-clusters-zrfcc/azure-scheduler-rqxhv to rollout in 3m42.05s
util.go:2836: Successfully waited for HostedCluster e2e-clusters-zrfcc/azure-scheduler-rqxhv to have valid conditions in 50ms
util.go:542: Successfully waited for 2 nodes to become ready for NodePool e2e-clusters-zrfcc/azure-scheduler-rqxhv in 250ms
util.go:281: Successfully waited for kubeconfig to be published for HostedCluster e2e-clusters-zrfcc/azure-scheduler-rqxhv in 50ms
util.go:298: Successfully waited for kubeconfig secret to have data in 50ms
util.go:281: Successfully waited for kubeconfig to be published for HostedCluster e2e-clusters-zrfcc/azure-scheduler-rqxhv in 75ms
util.go:298: Successfully waited for kubeconfig secret to have data in 50ms
util.go:3971: Successfully validated configuration authentication status consistency across HCP, HC, and guest cluster
util.go:281: Successfully waited for kubeconfig to be published for HostedCluster e2e-clusters-zrfcc/azure-scheduler-rqxhv in 50ms
util.go:298: Successfully waited for kubeconfig secret to have data in 50ms
util.go:360: Successfully waited for a successful connection to the guest API server in 25ms
util.go:542: Successfully waited for 2 nodes to become ready in 50ms
azure_scheduler_test.go:111: Updated clusterSizingConfig.
azure_scheduler_test.go:158: Successfully waited for HostedCluster size label and annotations updated in 50ms
azure_scheduler_test.go:150: Scaled Nodepool. Namespace: e2e-clusters-zrfcc, name: azure-scheduler-rqxhv, replicas: 0xc003db0300
util.go:542: Successfully waited for 3 nodes to become ready in 6m48.075s
azure_scheduler_test.go:158: Successfully waited for HostedCluster size label and annotations updated in 75ms
azure_scheduler_test.go:182: Successfully waited for control-plane-operator pod is running with expected resource request in 50ms
util.go:281: Successfully waited for kubeconfig to be published for HostedCluster e2e-clusters-zrfcc/azure-scheduler-rqxhv in 50ms
util.go:298: Successfully waited for kubeconfig secret to have data in 50ms
util.go:360: Successfully waited for a successful connection to the guest API server in 25ms
util.go:281: Successfully waited for kubeconfig to be published for HostedCluster e2e-clusters-zrfcc/azure-scheduler-rqxhv in 50ms
util.go:298: Successfully waited for kubeconfig secret to have data in 50ms
util.go:360: Successfully waited for a successful connection to the guest API server in 25ms
util.go:2836: Successfully waited for HostedCluster e2e-clusters-zrfcc/azure-scheduler-rqxhv to have valid conditions in 50ms
util.go:3111: Successfully waited for HostedCluster e2e-clusters-zrfcc/azure-scheduler-rqxhv to have valid Status.Payload in 75ms
util.go:1110: test only supported on AWS platform, saw Azure
util.go:2419: Checking that all ValidatingAdmissionPolicies are present
util.go:2445: Checking Denied KAS Requests for ValidatingAdmissionPolicies
util.go:2461: Checking ClusterOperator status modifications are allowed
util.go:3798: All 45 pods in namespace e2e-clusters-zrfcc-azure-scheduler-rqxhv have the expected RunAsUser UID 1008
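Note: the "replicas: 0xc003db0300" logged at azure_scheduler_test.go:150 is a pointer address rather than a replica count: a *int32 field formatted with %v prints where it points, not what it holds. A self-contained illustration in plain Go (not the test's code):

    package main

    import "fmt"

    func main() {
        replicas := int32(3)
        p := &replicas // NodePool.Spec.Replicas is a *int32 like this

        fmt.Printf("replicas: %v\n", p)  // prints an address, e.g. 0xc000012345
        fmt.Printf("replicas: %d\n", *p) // prints the value: 3
    }

Dereferencing the pointer (or using a helper that does) would make the log show the intended number.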
TestAzureScheduler/EnsureHostedCluster
0s
util.go:281: Successfully waited for kubeconfig to be published for HostedCluster e2e-clusters-zrfcc/azure-scheduler-rqxhv in 50ms util.go:298: Successfully waited for kubeconfig secret to have data in 50ms util.go:360: Successfully waited for a successful connection to the guest API server in 25ms util.go:281: Successfully waited for kubeconfig to be published for HostedCluster e2e-clusters-zrfcc/azure-scheduler-rqxhv in 50ms util.go:298: Successfully waited for kubeconfig secret to have data in 50ms util.go:360: Successfully waited for a successful connection to the guest API server in 25ms util.go:2836: Successfully waited for HostedCluster e2e-clusters-zrfcc/azure-scheduler-rqxhv to have valid conditions in 50ms
TestAzureScheduler/EnsureHostedCluster/EnsureAllContainersHavePullPolicyIfNotPresent
0s
TestAzureScheduler/EnsureHostedCluster/EnsureAllContainersHaveTerminationMessagePolicyFallbackToLogsOnError
0s
TestAzureScheduler/EnsureHostedCluster/EnsureAllRoutesUseHCPRouter
0s
TestAzureScheduler/EnsureHostedCluster/EnsureHCPContainersHaveResourceRequests
0s
TestAzureScheduler/EnsureHostedCluster/EnsureNetworkPolicies
0s
util.go:1110: test only supported on AWS platform, saw Azure
TestAzureScheduler/EnsureHostedCluster/EnsureNoPodsWithTooHighPriority
0s
TestAzureScheduler/EnsureHostedCluster/EnsureNoRapidDeploymentRollouts
0s
TestAzureScheduler/EnsureHostedCluster/EnsurePayloadArchSetCorrectly
0s
util.go:3111: Successfully waited for HostedCluster e2e-clusters-zrfcc/azure-scheduler-rqxhv to have valid Status.Payload in 75ms
TestAzureScheduler/EnsureHostedCluster/EnsurePodsWithEmptyDirPVsHaveSafeToEvictAnnotations
0s
TestAzureScheduler/EnsureHostedCluster/EnsureReadOnlyRootFilesystem
0s
TestAzureScheduler/EnsureHostedCluster/EnsureReadOnlyRootFilesystemTmpDirMount
0s
TestAzureScheduler/EnsureHostedCluster/EnsureSATokenNotMountedUnlessNecessary
0s
TestAzureScheduler/EnsureHostedCluster/EnsureSecurityContextUID
0s
util.go:3798: All 45 pods in namespace e2e-clusters-zrfcc-azure-scheduler-rqxhv have the expected RunAsUser UID 1008
TestAzureScheduler/EnsureHostedCluster/EnsureValidatingAdmissionPoliciesCheckDeniedRequests
0s
util.go:2445: Checking Denied KAS Requests for ValidatingAdmissionPolicies
TestAzureScheduler/EnsureHostedCluster/EnsureValidatingAdmissionPoliciesDontBlockStatusModifications
0s
util.go:2461: Checking ClusterOperator status modifications are allowed
TestAzureScheduler/EnsureHostedCluster/EnsureValidatingAdmissionPoliciesExists
0s
util.go:2419: Checking that all ValidatingAdmissionPolicies are present
TestAzureScheduler/EnsureHostedCluster/NoticePreemptionOrFailedScheduling
0s
TestAzureScheduler/EnsureHostedCluster/ValidateMetricsAreExposed
0s
TestAzureScheduler/Main
0s
util.go:281: Successfully waited for kubeconfig to be published for HostedCluster e2e-clusters-zrfcc/azure-scheduler-rqxhv in 50ms util.go:298: Successfully waited for kubeconfig secret to have data in 50ms util.go:360: Successfully waited for a successful connection to the guest API server in 25ms util.go:542: Successfully waited for 2 nodes to become ready in 50ms azure_scheduler_test.go:111: Updated clusterSizingConfig. azure_scheduler_test.go:158: Successfully waited for HostedCluster size label and annotations updated in 50ms azure_scheduler_test.go:150: Scaled Nodepool. Namespace: e2e-clusters-zrfcc, name: azure-scheduler-rqxhv, replicas: 0xc003db0300 util.go:542: Successfully waited for 3 nodes to become ready in 6m48.075s azure_scheduler_test.go:158: Successfully waited for HostedCluster size label and annotations updated in 75ms azure_scheduler_test.go:182: Successfully waited for control-plane-operator pod is running with expected resource request in 50ms
TestAzureScheduler/Teardown
0s
TestAzureScheduler/ValidateHostedCluster
0s
util.go:281: Successfully waited for kubeconfig to be published for HostedCluster e2e-clusters-zrfcc/azure-scheduler-rqxhv in 2m36.125s util.go:298: Successfully waited for kubeconfig secret to have data in 50ms eventually.go:104: Failed to get *v1.SelfSubjectReview: Post "https://api-azure-scheduler-rqxhv.aks-e2e.hypershift.azure.devcluster.openshift.com:443/apis/authentication.k8s.io/v1/selfsubjectreviews": EOF eventually.go:104: Failed to get *v1.SelfSubjectReview: Post "https://api-azure-scheduler-rqxhv.aks-e2e.hypershift.azure.devcluster.openshift.com:443/apis/authentication.k8s.io/v1/selfsubjectreviews": net/http: TLS handshake timeout eventually.go:104: Failed to get *v1.SelfSubjectReview: Post "https://api-azure-scheduler-rqxhv.aks-e2e.hypershift.azure.devcluster.openshift.com:443/apis/authentication.k8s.io/v1/selfsubjectreviews": EOF eventually.go:104: Failed to get *v1.SelfSubjectReview: Post "https://api-azure-scheduler-rqxhv.aks-e2e.hypershift.azure.devcluster.openshift.com:443/apis/authentication.k8s.io/v1/selfsubjectreviews": net/http: TLS handshake timeout util.go:360: Successfully waited for a successful connection to the guest API server in 30.5s util.go:542: Successfully waited for 2 nodes to become ready in 9m0.075s util.go:575: Successfully waited for HostedCluster e2e-clusters-zrfcc/azure-scheduler-rqxhv to rollout in 3m42.05s util.go:2836: Successfully waited for HostedCluster e2e-clusters-zrfcc/azure-scheduler-rqxhv to have valid conditions in 50ms
TestAzureScheduler/ValidateHostedCluster/EnsureGuestWebhooksValidated
0s
TestAzureScheduler/ValidateHostedCluster/EnsureNoCrashingPods
0s
util.go:281: Successfully waited for kubeconfig to be published for HostedCluster e2e-clusters-zrfcc/azure-scheduler-rqxhv in 50ms util.go:298: Successfully waited for kubeconfig secret to have data in 50ms
TestAzureScheduler/ValidateHostedCluster/EnsureNodeCommunication
0s
util.go:281: Successfully waited for kubeconfig to be published for HostedCluster e2e-clusters-zrfcc/azure-scheduler-rqxhv in 75ms util.go:298: Successfully waited for kubeconfig secret to have data in 50ms
TestAzureScheduler/ValidateHostedCluster/EnsureNodeCountMatchesNodePoolReplicas
0s
util.go:542: Successfully waited for 2 nodes to become ready for NodePool e2e-clusters-zrfcc/azure-scheduler-rqxhv in 250ms
TestAzureScheduler/ValidateHostedCluster/EnsureOAPIMountsTrustBundle
0s
TestAzureScheduler/ValidateHostedCluster/ValidateConfigurationStatus
0s
util.go:3971: Successfully validated configuration authentication status consistency across HCP, HC, and guest cluster
TestCreateCluster
32m23.69s
hypershift_framework.go:423: Successfully created hostedcluster e2e-clusters-4mhd4/create-cluster-cjk7c in 2m7s
hypershift_framework.go:484: Destroyed cluster. Namespace: e2e-clusters-4mhd4, name: create-cluster-cjk7c
hypershift_framework.go:439: archiving /logs/artifacts/TestCreateCluster/hostedcluster-create-cluster-cjk7c to /logs/artifacts/TestCreateCluster/hostedcluster.tar.gz
util.go:281: Successfully waited for kubeconfig to be published for HostedCluster e2e-clusters-4mhd4/create-cluster-cjk7c in 2m30.075s
util.go:298: Successfully waited for kubeconfig secret to have data in 50ms
eventually.go:104: Failed to get *v1.SelfSubjectReview: Post "https://api-create-cluster-cjk7c.aks-e2e.hypershift.azure.devcluster.openshift.com:443/apis/authentication.k8s.io/v1/selfsubjectreviews": net/http: TLS handshake timeout
util.go:360: Successfully waited for a successful connection to the guest API server in 11.2s
util.go:542: Successfully waited for 2 nodes to become ready in 8m36.075s
util.go:575: Successfully waited for HostedCluster e2e-clusters-4mhd4/create-cluster-cjk7c to rollout in 3m18.075s
util.go:2836: Successfully waited for HostedCluster e2e-clusters-4mhd4/create-cluster-cjk7c to have valid conditions in 50ms
TestCreateCluster/Main
4m24.44s
util.go:281: Successfully waited for kubeconfig to be published for HostedCluster e2e-clusters-4mhd4/create-cluster-cjk7c in 50ms util.go:298: Successfully waited for kubeconfig secret to have data in 50ms util.go:360: Successfully waited for a successful connection to the guest API server in 25ms create_cluster_test.go:2225: fetching mgmt kubeconfig util.go:281: Successfully waited for kubeconfig to be published for HostedCluster e2e-clusters-4mhd4/create-cluster-cjk7c in 50ms util.go:298: Successfully waited for kubeconfig secret to have data in 50ms
TestCreateCluster/Main/EnsureNoMetricsTargetsAreDown
690ms
util.go:4045: No Pods found for PodMonitor e2e-clusters-4mhd4-create-cluster-cjk7c/cluster-autoscaler, skipping
TestNodePool
0s
hypershift_framework.go:423: Successfully created hostedcluster e2e-clusters-8dgf2/node-pool-5q69p in 2m21s
nodepool_test.go:143: tests only supported on platform KubeVirt
hypershift_framework.go:423: Successfully created hostedcluster e2e-clusters-6tz4d/node-pool-4p26f in 2m29s
util.go:281: Successfully waited for kubeconfig to be published for HostedCluster e2e-clusters-8dgf2/node-pool-5q69p in 2m27.075s
util.go:298: Successfully waited for kubeconfig secret to have data in 50ms
eventually.go:104: Failed to get *v1.SelfSubjectReview: Post "https://api-node-pool-5q69p.aks-e2e.hypershift.azure.devcluster.openshift.com:443/apis/authentication.k8s.io/v1/selfsubjectreviews": net/http: TLS handshake timeout
util.go:360: Successfully waited for a successful connection to the guest API server in 12.2s
util.go:542: Successfully waited for 0 nodes to become ready in 75ms
util.go:2836: Successfully waited for HostedCluster e2e-clusters-8dgf2/node-pool-5q69p to have valid conditions in 6m30.075s
util.go:281: Successfully waited for kubeconfig to be published for HostedCluster e2e-clusters-6tz4d/node-pool-4p26f in 2m18.05s
util.go:298: Successfully waited for kubeconfig secret to have data in 50ms
eventually.go:104: Failed to get *v1.SelfSubjectReview: Post "https://api-node-pool-4p26f.aks-e2e.hypershift.azure.devcluster.openshift.com:443/apis/authentication.k8s.io/v1/selfsubjectreviews": net/http: TLS handshake timeout
util.go:360: Successfully waited for a successful connection to the guest API server in 12.2s
util.go:542: Successfully waited for 0 nodes to become ready in 75ms
util.go:2836: Successfully waited for HostedCluster e2e-clusters-6tz4d/node-pool-4p26f to have valid conditions in 6m30.125s
util.go:542: Successfully waited for 0 nodes to become ready for NodePool e2e-clusters-6tz4d/node-pool-4p26f in 150ms
util.go:281: Successfully waited for kubeconfig to be published for HostedCluster e2e-clusters-6tz4d/node-pool-4p26f in 50ms
util.go:298: Successfully waited for kubeconfig secret to have data in 50ms
util.go:542: Successfully waited for 0 nodes to become ready for NodePool e2e-clusters-8dgf2/node-pool-5q69p in 150ms
util.go:281: Successfully waited for kubeconfig to be published for HostedCluster e2e-clusters-8dgf2/node-pool-5q69p in 75ms
util.go:298: Successfully waited for kubeconfig secret to have data in 50ms
util.go:3971: Successfully validated configuration authentication status consistency across HCP, HC, and guest cluster
util.go:281: Successfully waited for kubeconfig to be published for HostedCluster e2e-clusters-8dgf2/node-pool-5q69p in 125ms
util.go:298: Successfully waited for kubeconfig secret to have data in 50ms
util.go:360: Successfully waited for a successful connection to the guest API server in 625ms
nodepool_kms_root_volume_test.go:40: test only supported on platform AWS
nodepool_autorepair_test.go:43: test only supported on platform AWS
nodepool_machineconfig_test.go:55: Starting test NodePoolMachineconfigRolloutTest
util.go:542: Successfully waited for 1 nodes to become ready for NodePool e2e-clusters-8dgf2/node-pool-5q69p-test-machineconfig in 9m21.1s
nodepool_test.go:358: Successfully waited for NodePool e2e-clusters-8dgf2/node-pool-5q69p-test-machineconfig to have correct status in 50ms
util.go:456: Successfully waited for NodePool e2e-clusters-8dgf2/node-pool-5q69p-test-machineconfig to start config update in 15.05s
nodepool_nto_machineconfig_test.go:68: Starting test NTOMachineConfigRolloutTest
util.go:542: Successfully waited for 2 nodes to become ready for NodePool e2e-clusters-8dgf2/node-pool-5q69p-test-ntomachineconfig-replace in 9m48.05s
nodepool_test.go:358: Successfully waited for NodePool e2e-clusters-8dgf2/node-pool-5q69p-test-ntomachineconfig-replace to have correct status in 50ms
util.go:456: Successfully waited for NodePool e2e-clusters-8dgf2/node-pool-5q69p-test-ntomachineconfig-replace to start config update in 15.05s
nodepool_nto_machineconfig_test.go:68: Starting test NTOMachineConfigRolloutTest
util.go:542: Successfully waited for 2 nodes to become ready for NodePool e2e-clusters-8dgf2/node-pool-5q69p-test-ntomachineconfig-inplace in 15m0.075s
nodepool_test.go:358: Successfully waited for NodePool e2e-clusters-8dgf2/node-pool-5q69p-test-ntomachineconfig-inplace to have correct status in 50ms
util.go:456: Successfully waited for NodePool e2e-clusters-8dgf2/node-pool-5q69p-test-ntomachineconfig-inplace to start config update in 45.1s
nodepool_upgrade_test.go:100: starting test NodePoolUpgradeTest
util.go:542: Successfully waited for 1 nodes to become ready for NodePool e2e-clusters-8dgf2/node-pool-5q69p-test-replaceupgrade in 15m9.075s
nodepool_test.go:358: Successfully waited for NodePool e2e-clusters-8dgf2/node-pool-5q69p-test-replaceupgrade to have correct status in 50ms
nodepool_upgrade_test.go:160: Validating all Nodes have the synced labels and taints
nodepool_upgrade_test.go:163: Successfully waited for NodePool e2e-clusters-8dgf2/node-pool-5q69p-test-replaceupgrade to have version 4.21.0-0.ci-2025-11-10-155627 in 50ms
nodepool_upgrade_test.go:180: Updating NodePool image. Image: registry.build01.ci.openshift.org/ci-op-p925k8ww/release@sha256:4e5740a5bf69a014eff82dc72b254aa0d1aa5b125dbbfeeb97be23a83719cc0d
nodepool_upgrade_test.go:187: Successfully waited for NodePool e2e-clusters-8dgf2/node-pool-5q69p-test-replaceupgrade to start the upgrade in 3.05s
nodepool_upgrade_test.go:100: starting test NodePoolUpgradeTest
util.go:542: Successfully waited for 1 nodes to become ready for NodePool e2e-clusters-8dgf2/node-pool-5q69p-test-inplaceupgrade in 14m57.075s
nodepool_test.go:358: Successfully waited for NodePool e2e-clusters-8dgf2/node-pool-5q69p-test-inplaceupgrade to have correct status in 50ms
nodepool_upgrade_test.go:160: Validating all Nodes have the synced labels and taints
nodepool_upgrade_test.go:163: Successfully waited for NodePool e2e-clusters-8dgf2/node-pool-5q69p-test-inplaceupgrade to have version 4.21.0-0.ci-2025-11-10-155627 in 50ms
nodepool_upgrade_test.go:180: Updating NodePool image. Image: registry.build01.ci.openshift.org/ci-op-p925k8ww/release@sha256:4e5740a5bf69a014eff82dc72b254aa0d1aa5b125dbbfeeb97be23a83719cc0d
nodepool_upgrade_test.go:187: Successfully waited for NodePool e2e-clusters-8dgf2/node-pool-5q69p-test-inplaceupgrade to start the upgrade in 3.05s
nodepool_kv_cache_image_test.go:43: test only supported on platform KubeVirt
util.go:542: Successfully waited for 2 nodes to become ready for NodePool e2e-clusters-8dgf2/node-pool-5q69p-test-rolling-upgrade in 15m12.075s
nodepool_test.go:358: Successfully waited for NodePool e2e-clusters-8dgf2/node-pool-5q69p-test-rolling-upgrade to have correct status in 50ms
nodepool_rolling_upgrade_test.go:89: Successfully waited for NodePool e2e-clusters-8dgf2/node-pool-5q69p-test-rolling-upgrade to start the rolling upgrade in 3.075s
nodepool_day2_tags_test.go:44: test only supported on platform AWS
nodepool_kv_qos_guaranteed_test.go:44: test only supported on platform KubeVirt
nodepool_kv_jsonpatch_test.go:43: test only supported on platform KubeVirt
nodepool_kv_nodeselector_test.go:49: test only supported on platform KubeVirt
nodepool_kv_multinet_test.go:37: test only supported on platform KubeVirt
nodepool_osp_advanced_test.go:54: Starting test OpenStackAdvancedTest
nodepool_osp_advanced_test.go:57: test only supported on platform OpenStack
nodepool_nto_performanceprofile_test.go:60: Starting test NTOPerformanceProfileTest
util.go:542: Successfully waited for 1 nodes to become ready for NodePool e2e-clusters-8dgf2/node-pool-5q69p-test-ntoperformanceprofile in 9m12.15s
nodepool_test.go:358: Successfully waited for NodePool e2e-clusters-8dgf2/node-pool-5q69p-test-ntoperformanceprofile to have correct status in 50ms
nodepool_nto_performanceprofile_test.go:81: Entering NTO PerformanceProfile test
nodepool_nto_performanceprofile_test.go:111: Hosted control plane namespace is e2e-clusters-8dgf2-node-pool-5q69p
nodepool_nto_performanceprofile_test.go:113: Successfully waited for performance profile ConfigMap to exist with correct name labels and annotations in 3.05s
nodepool_nto_performanceprofile_test.go:160: Successfully waited for performance profile status ConfigMap to exist in 50ms
nodepool_nto_performanceprofile_test.go:202: Successfully waited for performance profile status to be reflected under the NodePool status in 50ms
nodepool_nto_performanceprofile_test.go:255: Deleting configmap reference from nodepool ...
nodepool_nto_performanceprofile_test.go:262: Successfully waited for performance profile ConfigMap to be deleted in 3.05s
nodepool_nto_performanceprofile_test.go:281: Ending NTO PerformanceProfile test: OK
nodepool_test.go:358: Successfully waited for NodePool e2e-clusters-8dgf2/node-pool-5q69p-test-ntoperformanceprofile to have correct status in 50ms
nodepool_prev_release_test.go:31: Starting NodePoolPrevReleaseCreateTest.
util.go:542: Successfully waited for 1 nodes to become ready for NodePool e2e-clusters-8dgf2/node-pool-5q69p-rtwqh in 15m3.1s
nodepool_test.go:358: Successfully waited for NodePool e2e-clusters-8dgf2/node-pool-5q69p-rtwqh to have correct status in 50ms
nodepool_prev_release_test.go:55: NodePoolPrevReleaseCreateTest tests the creation of a NodePool with previous OCP release.
nodepool_prev_release_test.go:57: Validating all Nodes have the synced labels and taints
nodepool_test.go:358: Successfully waited for NodePool e2e-clusters-8dgf2/node-pool-5q69p-rtwqh to have correct status in 50ms
nodepool_prev_release_test.go:31: Starting NodePoolPrevReleaseCreateTest.
util.go:542: Successfully waited for 1 nodes to become ready for NodePool e2e-clusters-8dgf2/node-pool-5q69p-4zwqd in 15m3.075s
nodepool_test.go:358: Successfully waited for NodePool e2e-clusters-8dgf2/node-pool-5q69p-4zwqd to have correct status in 50ms
nodepool_prev_release_test.go:55: NodePoolPrevReleaseCreateTest tests the creation of a NodePool with previous OCP release.
nodepool_prev_release_test.go:57: Validating all Nodes have the synced labels and taints
nodepool_test.go:358: Successfully waited for NodePool e2e-clusters-8dgf2/node-pool-5q69p-4zwqd to have correct status in 50ms
nodepool_mirrorconfigs_test.go:61: Starting test MirrorConfigsTest
util.go:542: Successfully waited for 1 nodes to become ready for NodePool e2e-clusters-8dgf2/node-pool-5q69p-test-mirrorconfigs in 8m57.075s
nodepool_test.go:358: Successfully waited for NodePool e2e-clusters-8dgf2/node-pool-5q69p-test-mirrorconfigs to have correct status in 50ms
nodepool_mirrorconfigs_test.go:82: Entering MirrorConfigs test
nodepool_mirrorconfigs_test.go:112: Hosted control plane namespace is e2e-clusters-8dgf2-node-pool-5q69p
nodepool_mirrorconfigs_test.go:114: Successfully waited for kubeletConfig should be mirrored and present in the hosted cluster in 3.025s
nodepool_mirrorconfigs_test.go:158: Deleting KubeletConfig configmap reference from nodepool ...
nodepool_mirrorconfigs_test.go:164: Successfully waited for KubeletConfig configmap to be deleted in 3.025s
nodepool_mirrorconfigs_test.go:102: Exiting MirrorConfigs test: OK
nodepool_test.go:358: Successfully waited for NodePool e2e-clusters-8dgf2/node-pool-5q69p-test-mirrorconfigs to have correct status in 50ms
nodepool_imagetype_test.go:46: test is only supported for AWS platform
util.go:3971: Successfully validated configuration authentication status consistency across HCP, HC, and guest cluster
util.go:281: Successfully waited for kubeconfig to be published for HostedCluster e2e-clusters-6tz4d/node-pool-4p26f in 50ms
util.go:298: Successfully waited for kubeconfig secret to have data in 50ms
util.go:360: Successfully waited for a successful connection to the guest API server in 375ms
nodepool_additionalTrustBundlePropagation_test.go:39: Starting AdditionalTrustBundlePropagationTest.
util.go:542: Successfully waited for 1 nodes to become ready for NodePool e2e-clusters-6tz4d/node-pool-4p26f-test-additional-trust-bundle-propagation in 5m33.05s
nodepool_test.go:358: Successfully waited for NodePool e2e-clusters-6tz4d/node-pool-4p26f-test-additional-trust-bundle-propagation to have correct status in 50ms
nodepool_additionalTrustBundlePropagation_test.go:73: Updating hosted cluster with additional trust bundle. Bundle: additional-trust-bundle
nodepool_additionalTrustBundlePropagation_test.go:81: Successfully waited for Waiting for NodePool e2e-clusters-6tz4d/node-pool-4p26f-test-additional-trust-bundle-propagation to begin updating in 10.05s
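Note: the log above exercises both NodePool upgrade styles (the -test-replaceupgrade and -test-inplaceupgrade pools). A minimal sketch of the knob those tests toggle, assuming the hypershift.openshift.io/v1beta1 API's Management.UpgradeType field and its Replace/InPlace constants (import path and helper name are illustrative, not the e2e suite's code):

    package e2esketch

    import hyperv1 "github.com/openshift/hypershift/api/hypershift/v1beta1"

    // setUpgradeStyle selects whether worker nodes are re-created from new
    // machines (Replace) or updated on the existing machines (InPlace).
    func setUpgradeStyle(np *hyperv1.NodePool, inPlace bool) {
        if inPlace {
            np.Spec.Management.UpgradeType = hyperv1.UpgradeTypeInPlace
        } else {
            np.Spec.Management.UpgradeType = hyperv1.UpgradeTypeReplace
        }
    }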
TestNodePool/HostedCluster0
0s
hypershift_framework.go:423: Successfully created hostedcluster e2e-clusters-8dgf2/node-pool-5q69p in 2m21s
TestNodePool/HostedCluster0/Main
0s
util.go:281: Successfully waited for kubeconfig to be published for HostedCluster e2e-clusters-8dgf2/node-pool-5q69p in 125ms util.go:298: Successfully waited for kubeconfig secret to have data in 50ms util.go:360: Successfully waited for a successful connection to the guest API server in 625ms
TestNodePool/HostedCluster0/Main/KubeVirtCacheTest
0s
nodepool_kv_cache_image_test.go:43: test only supported on platform KubeVirt
TestNodePool/HostedCluster0/Main/KubeVirtJsonPatchTest
0s
nodepool_kv_jsonpatch_test.go:43: test only supported on platform KubeVirt
TestNodePool/HostedCluster0/Main/KubeVirtNodeMultinetTest
0s
nodepool_kv_multinet_test.go:37: test only supported on platform KubeVirt
TestNodePool/HostedCluster0/Main/KubeVirtNodeSelectorTest
0s
nodepool_kv_nodeselector_test.go:49: test only supported on platform KubeVirt
TestNodePool/HostedCluster0/Main/KubeVirtQoSClassGuaranteedTest
0s
nodepool_kv_qos_guaranteed_test.go:44: test only supported on platform KubeVirt
TestNodePool/HostedCluster0/Main/OpenStackAdvancedTest
0s
nodepool_osp_advanced_test.go:54: Starting test OpenStackAdvancedTest nodepool_osp_advanced_test.go:57: test only supported on platform OpenStack
TestNodePool/HostedCluster0/Main/TestImageTypes
0s
nodepool_imagetype_test.go:46: test is only supported for AWS platform
TestNodePool/HostedCluster0/Main/TestKMSRootVolumeEncryption
0s
nodepool_kms_root_volume_test.go:40: test only supported on platform AWS
TestNodePool/HostedCluster0/Main/TestMirrorConfigs
0s
nodepool_mirrorconfigs_test.go:61: Starting test MirrorConfigsTest util.go:542: Successfully waited for 1 nodes to become ready for NodePool e2e-clusters-8dgf2/node-pool-5q69p-test-mirrorconfigs in 8m57.075s nodepool_test.go:358: Successfully waited for NodePool e2e-clusters-8dgf2/node-pool-5q69p-test-mirrorconfigs to have correct status in 50ms nodepool_mirrorconfigs_test.go:82: Entering MirrorConfigs test nodepool_mirrorconfigs_test.go:112: Hosted control plane namespace is e2e-clusters-8dgf2-node-pool-5q69p nodepool_mirrorconfigs_test.go:114: Successfully waited for kubeletConfig should be mirrored and present in the hosted cluster in 3.025s nodepool_mirrorconfigs_test.go:158: Deleting KubeletConfig configmap reference from nodepool ... nodepool_mirrorconfigs_test.go:164: Successfully waited for KubeletConfig configmap to be deleted in 3.025s nodepool_mirrorconfigs_test.go:102: Exiting MirrorConfigs test: OK nodepool_test.go:358: Successfully waited for NodePool e2e-clusters-8dgf2/node-pool-5q69p-test-mirrorconfigs to have correct status in 50ms
TestNodePool/HostedCluster0/Main/TestNTOMachineConfigAppliedInPlace
0s
nodepool_nto_machineconfig_test.go:68: Starting test NTOMachineConfigRolloutTest util.go:542: Successfully waited for 2 nodes to become ready for NodePool e2e-clusters-8dgf2/node-pool-5q69p-test-ntomachineconfig-inplace in 15m0.075s nodepool_test.go:358: Successfully waited for NodePool e2e-clusters-8dgf2/node-pool-5q69p-test-ntomachineconfig-inplace to have correct status in 50ms util.go:456: Successfully waited for NodePool e2e-clusters-8dgf2/node-pool-5q69p-test-ntomachineconfig-inplace to start config update in 45.1s
TestNodePool/HostedCluster0/Main/TestNTOMachineConfigGetsRolledOut
0s
nodepool_nto_machineconfig_test.go:68: Starting test NTOMachineConfigRolloutTest util.go:542: Successfully waited for 2 nodes to become ready for NodePool e2e-clusters-8dgf2/node-pool-5q69p-test-ntomachineconfig-replace in 9m48.05s nodepool_test.go:358: Successfully waited for NodePool e2e-clusters-8dgf2/node-pool-5q69p-test-ntomachineconfig-replace to have correct status in 50ms util.go:456: Successfully waited for NodePool e2e-clusters-8dgf2/node-pool-5q69p-test-ntomachineconfig-replace to start config update in 15.05s
TestNodePool/HostedCluster0/Main/TestNTOPerformanceProfile
0s
nodepool_nto_performanceprofile_test.go:60: Starting test NTOPerformanceProfileTest util.go:542: Successfully waited for 1 nodes to become ready for NodePool e2e-clusters-8dgf2/node-pool-5q69p-test-ntoperformanceprofile in 9m12.15s nodepool_test.go:358: Successfully waited for NodePool e2e-clusters-8dgf2/node-pool-5q69p-test-ntoperformanceprofile to have correct status in 50ms nodepool_nto_performanceprofile_test.go:81: Entering NTO PerformanceProfile test nodepool_nto_performanceprofile_test.go:111: Hosted control plane namespace is e2e-clusters-8dgf2-node-pool-5q69p nodepool_nto_performanceprofile_test.go:113: Successfully waited for performance profile ConfigMap to exist with correct name labels and annotations in 3.05s nodepool_nto_performanceprofile_test.go:160: Successfully waited for performance profile status ConfigMap to exist in 50ms nodepool_nto_performanceprofile_test.go:202: Successfully waited for performance profile status to be reflected under the NodePool status in 50ms nodepool_nto_performanceprofile_test.go:255: Deleting configmap reference from nodepool ... nodepool_nto_performanceprofile_test.go:262: Successfully waited for performance profile ConfigMap to be deleted in 3.05s nodepool_nto_performanceprofile_test.go:281: Ending NTO PerformanceProfile test: OK nodepool_test.go:358: Successfully waited for NodePool e2e-clusters-8dgf2/node-pool-5q69p-test-ntoperformanceprofile to have correct status in 50ms
TestNodePool/HostedCluster0/Main/TestNodePoolAutoRepair
0s
nodepool_autorepair_test.go:43: test only supported on platform AWS
TestNodePool/HostedCluster0/Main/TestNodePoolDay2Tags
0s
nodepool_day2_tags_test.go:44: test only supported on platform AWS
TestNodePool/HostedCluster0/Main/TestNodePoolInPlaceUpgrade
0s
nodepool_upgrade_test.go:100: starting test NodePoolUpgradeTest util.go:542: Successfully waited for 1 nodes to become ready for NodePool e2e-clusters-8dgf2/node-pool-5q69p-test-inplaceupgrade in 14m57.075s nodepool_test.go:358: Successfully waited for NodePool e2e-clusters-8dgf2/node-pool-5q69p-test-inplaceupgrade to have correct status in 50ms nodepool_upgrade_test.go:160: Validating all Nodes have the synced labels and taints nodepool_upgrade_test.go:163: Successfully waited for NodePool e2e-clusters-8dgf2/node-pool-5q69p-test-inplaceupgrade to have version 4.21.0-0.ci-2025-11-10-155627 in 50ms nodepool_upgrade_test.go:180: Updating NodePool image. Image: registry.build01.ci.openshift.org/ci-op-p925k8ww/release@sha256:4e5740a5bf69a014eff82dc72b254aa0d1aa5b125dbbfeeb97be23a83719cc0d nodepool_upgrade_test.go:187: Successfully waited for NodePool e2e-clusters-8dgf2/node-pool-5q69p-test-inplaceupgrade to start the upgrade in 3.05s
TestNodePool/HostedCluster0/Main/TestNodePoolPrevReleaseN1
0s
nodepool_prev_release_test.go:31: Starting NodePoolPrevReleaseCreateTest. util.go:542: Successfully waited for 1 nodes to become ready for NodePool e2e-clusters-8dgf2/node-pool-5q69p-rtwqh in 15m3.1s nodepool_test.go:358: Successfully waited for NodePool e2e-clusters-8dgf2/node-pool-5q69p-rtwqh to have correct status in 50ms nodepool_prev_release_test.go:55: NodePoolPrevReleaseCreateTest tests the creation of a NodePool with previous OCP release. nodepool_prev_release_test.go:57: Validating all Nodes have the synced labels and taints nodepool_test.go:358: Successfully waited for NodePool e2e-clusters-8dgf2/node-pool-5q69p-rtwqh to have correct status in 50ms
TestNodePool/HostedCluster0/Main/TestNodePoolPrevReleaseN2
0s
nodepool_prev_release_test.go:31: Starting NodePoolPrevReleaseCreateTest. util.go:542: Successfully waited for 1 nodes to become ready for NodePool e2e-clusters-8dgf2/node-pool-5q69p-4zwqd in 15m3.075s nodepool_test.go:358: Successfully waited for NodePool e2e-clusters-8dgf2/node-pool-5q69p-4zwqd to have correct status in 50ms nodepool_prev_release_test.go:55: NodePoolPrevReleaseCreateTest tests the creation of a NodePool with previous OCP release. nodepool_prev_release_test.go:57: Validating all Nodes have the synced labels and taints nodepool_test.go:358: Successfully waited for NodePool e2e-clusters-8dgf2/node-pool-5q69p-4zwqd to have correct status in 50ms
TestNodePool/HostedCluster0/Main/TestNodePoolReplaceUpgrade
0s
nodepool_upgrade_test.go:100: starting test NodePoolUpgradeTest util.go:542: Successfully waited for 1 nodes to become ready for NodePool e2e-clusters-8dgf2/node-pool-5q69p-test-replaceupgrade in 15m9.075s nodepool_test.go:358: Successfully waited for NodePool e2e-clusters-8dgf2/node-pool-5q69p-test-replaceupgrade to have correct status in 50ms nodepool_upgrade_test.go:160: Validating all Nodes have the synced labels and taints nodepool_upgrade_test.go:163: Successfully waited for NodePool e2e-clusters-8dgf2/node-pool-5q69p-test-replaceupgrade to have version 4.21.0-0.ci-2025-11-10-155627 in 50ms nodepool_upgrade_test.go:180: Updating NodePool image. Image: registry.build01.ci.openshift.org/ci-op-p925k8ww/release@sha256:4e5740a5bf69a014eff82dc72b254aa0d1aa5b125dbbfeeb97be23a83719cc0d nodepool_upgrade_test.go:187: Successfully waited for NodePool e2e-clusters-8dgf2/node-pool-5q69p-test-replaceupgrade to start the upgrade in 3.05s
TestNodePool/HostedCluster0/Main/TestNodepoolMachineconfigGetsRolledout
0s
nodepool_machineconfig_test.go:55: Starting test NodePoolMachineconfigRolloutTest util.go:542: Successfully waited for 1 nodes to become ready for NodePool e2e-clusters-8dgf2/node-pool-5q69p-test-machineconfig in 9m21.1s nodepool_test.go:358: Successfully waited for NodePool e2e-clusters-8dgf2/node-pool-5q69p-test-machineconfig to have correct status in 50ms util.go:456: Successfully waited for NodePool e2e-clusters-8dgf2/node-pool-5q69p-test-machineconfig to start config update in 15.05s
TestNodePool/HostedCluster0/Main/TestRollingUpgrade
0s
util.go:542: Successfully waited for 2 nodes to become ready for NodePool e2e-clusters-8dgf2/node-pool-5q69p-test-rolling-upgrade in 15m12.075s nodepool_test.go:358: Successfully waited for NodePool e2e-clusters-8dgf2/node-pool-5q69p-test-rolling-upgrade to have correct status in 50ms nodepool_rolling_upgrade_test.go:89: Successfully waited for NodePool e2e-clusters-8dgf2/node-pool-5q69p-test-rolling-upgrade to start the rolling upgrade in 3.075s
TestNodePool/HostedCluster0/ValidateHostedCluster
0s
util.go:281: Successfully waited for kubeconfig to be published for HostedCluster e2e-clusters-8dgf2/node-pool-5q69p in 2m27.075s util.go:298: Successfully waited for kubeconfig secret to have data in 50ms eventually.go:104: Failed to get *v1.SelfSubjectReview: Post "https://api-node-pool-5q69p.aks-e2e.hypershift.azure.devcluster.openshift.com:443/apis/authentication.k8s.io/v1/selfsubjectreviews": net/http: TLS handshake timeout util.go:360: Successfully waited for a successful connection to the guest API server in 12.2s util.go:542: Successfully waited for 0 nodes to become ready in 75ms util.go:2836: Successfully waited for HostedCluster e2e-clusters-8dgf2/node-pool-5q69p to have valid conditions in 6m30.075s
TestNodePool/HostedCluster0/ValidateHostedCluster/EnsureGuestWebhooksValidated
0s
TestNodePool/HostedCluster0/ValidateHostedCluster/EnsureNoCrashingPods
0s
util.go:281: Successfully waited for kubeconfig to be published for HostedCluster e2e-clusters-8dgf2/node-pool-5q69p in 75ms util.go:298: Successfully waited for kubeconfig secret to have data in 50ms
TestNodePool/HostedCluster0/ValidateHostedCluster/EnsureNodeCountMatchesNodePoolReplicas
0s
util.go:542: Successfully waited for 0 nodes to become ready for NodePool e2e-clusters-8dgf2/node-pool-5q69p in 150ms
TestNodePool/HostedCluster0/ValidateHostedCluster/EnsureOAPIMountsTrustBundle
0s
TestNodePool/HostedCluster0/ValidateHostedCluster/ValidateConfigurationStatus
0s
util.go:3971: Successfully validated configuration authentication status consistency across HCP, HC, and guest cluster
TestNodePool/HostedCluster1
0s
nodepool_test.go:143: tests only supported on platform KubeVirt
TestNodePool/HostedCluster2
0s
hypershift_framework.go:423: Successfully created hostedcluster e2e-clusters-6tz4d/node-pool-4p26f in 2m29s
TestNodePool/HostedCluster2/Main
0s
util.go:281: Successfully waited for kubeconfig to be published for HostedCluster e2e-clusters-6tz4d/node-pool-4p26f in 50ms util.go:298: Successfully waited for kubeconfig secret to have data in 50ms util.go:360: Successfully waited for a successful connection to the guest API server in 375ms
TestNodePool/HostedCluster2/Main/TestAdditionalTrustBundlePropagation
0s
nodepool_additionalTrustBundlePropagation_test.go:39: Starting AdditionalTrustBundlePropagationTest. util.go:542: Successfully waited for 1 nodes to become ready for NodePool e2e-clusters-6tz4d/node-pool-4p26f-test-additional-trust-bundle-propagation in 5m33.05s nodepool_test.go:358: Successfully waited for NodePool e2e-clusters-6tz4d/node-pool-4p26f-test-additional-trust-bundle-propagation to have correct status in 50ms
TestNodePool/HostedCluster2/Main/TestAdditionalTrustBundlePropagation/AdditionalTrustBundlePropagationTest
0s
nodepool_additionalTrustBundlePropagation_test.go:73: Updating hosted cluster with additional trust bundle. Bundle: additional-trust-bundle nodepool_additionalTrustBundlePropagation_test.go:81: Successfully waited for Waiting for NodePool e2e-clusters-6tz4d/node-pool-4p26f-test-additional-trust-bundle-propagation to begin updating in 10.05s
TestNodePool/HostedCluster2/ValidateHostedCluster
0s
util.go:281: Successfully waited for kubeconfig to be published for HostedCluster e2e-clusters-6tz4d/node-pool-4p26f in 2m18.05s util.go:298: Successfully waited for kubeconfig secret to have data in 50ms eventually.go:104: Failed to get *v1.SelfSubjectReview: Post "https://api-node-pool-4p26f.aks-e2e.hypershift.azure.devcluster.openshift.com:443/apis/authentication.k8s.io/v1/selfsubjectreviews": net/http: TLS handshake timeout util.go:360: Successfully waited for a successful connection to the guest API server in 12.2s util.go:542: Successfully waited for 0 nodes to become ready in 75ms util.go:2836: Successfully waited for HostedCluster e2e-clusters-6tz4d/node-pool-4p26f to have valid conditions in 6m30.125s
TestNodePool/HostedCluster2/ValidateHostedCluster/EnsureGuestWebhooksValidated
0s
TestNodePool/HostedCluster2/ValidateHostedCluster/EnsureNoCrashingPods
0s
util.go:281: Successfully waited for kubeconfig to be published for HostedCluster e2e-clusters-6tz4d/node-pool-4p26f in 50ms util.go:298: Successfully waited for kubeconfig secret to have data in 50ms
TestNodePool/HostedCluster2/ValidateHostedCluster/EnsureNodeCountMatchesNodePoolReplicas
0s
util.go:542: Successfully waited for 0 nodes to become ready for NodePool e2e-clusters-6tz4d/node-pool-4p26f in 150ms
TestNodePool/HostedCluster2/ValidateHostedCluster/EnsureOAPIMountsTrustBundle
0s
TestNodePool/HostedCluster2/ValidateHostedCluster/ValidateConfigurationStatus
0s
util.go:3971: Successfully validated configuration authentication status consistency across HCP, HC, and guest cluster
TestUpgradeControlPlane
0s
control_plane_upgrade_test.go:26: Starting control plane upgrade test. FromImage: registry.build01.ci.openshift.org/ci-op-p925k8ww/release@sha256:d6199378041c08e869fab28d0918bade31516e5e120c2ce69b866dc8f0c0dceb, toImage: registry.build01.ci.openshift.org/ci-op-p925k8ww/release@sha256:4e5740a5bf69a014eff82dc72b254aa0d1aa5b125dbbfeeb97be23a83719cc0d
hypershift_framework.go:423: Successfully created hostedcluster e2e-clusters-x2krs/control-plane-upgrade-7p7tz in 2m17s
util.go:281: Successfully waited for kubeconfig to be published for HostedCluster e2e-clusters-x2krs/control-plane-upgrade-7p7tz in 3m15.05s
util.go:298: Successfully waited for kubeconfig secret to have data in 50ms
eventually.go:104: Failed to get *v1.SelfSubjectReview: Post "https://api-control-plane-upgrade-7p7tz.aks-e2e.hypershift.azure.devcluster.openshift.com:443/apis/authentication.k8s.io/v1/selfsubjectreviews": EOF
eventually.go:104: Failed to get *v1.SelfSubjectReview: Post "https://api-control-plane-upgrade-7p7tz.aks-e2e.hypershift.azure.devcluster.openshift.com:443/apis/authentication.k8s.io/v1/selfsubjectreviews": net/http: TLS handshake timeout
eventually.go:104: Failed to get *v1.SelfSubjectReview: Post "https://api-control-plane-upgrade-7p7tz.aks-e2e.hypershift.azure.devcluster.openshift.com:443/apis/authentication.k8s.io/v1/selfsubjectreviews": EOF
util.go:360: Successfully waited for a successful connection to the guest API server in 33.875s
util.go:542: Successfully waited for 2 nodes to become ready in 10m3.075s
util.go:575: Successfully waited for HostedCluster e2e-clusters-x2krs/control-plane-upgrade-7p7tz to rollout in 3m51.075s
util.go:2836: Successfully waited for HostedCluster e2e-clusters-x2krs/control-plane-upgrade-7p7tz to have valid conditions in 50ms
util.go:542: Successfully waited for 2 nodes to become ready for NodePool e2e-clusters-x2krs/control-plane-upgrade-7p7tz in 125ms
util.go:281: Successfully waited for kubeconfig to be published for HostedCluster e2e-clusters-x2krs/control-plane-upgrade-7p7tz in 50ms
util.go:298: Successfully waited for kubeconfig secret to have data in 50ms
util.go:281: Successfully waited for kubeconfig to be published for HostedCluster e2e-clusters-x2krs/control-plane-upgrade-7p7tz in 75ms
util.go:298: Successfully waited for kubeconfig secret to have data in 50ms
util.go:3971: Successfully validated configuration authentication status consistency across HCP, HC, and guest cluster
util.go:281: Successfully waited for kubeconfig to be published for HostedCluster e2e-clusters-x2krs/control-plane-upgrade-7p7tz in 50ms
util.go:298: Successfully waited for kubeconfig secret to have data in 50ms
util.go:360: Successfully waited for a successful connection to the guest API server in 25ms
control_plane_upgrade_test.go:48: Updating cluster image. Image: registry.build01.ci.openshift.org/ci-op-p925k8ww/release@sha256:4e5740a5bf69a014eff82dc72b254aa0d1aa5b125dbbfeeb97be23a83719cc0d
util.go:575: Successfully waited for HostedCluster e2e-clusters-x2krs/control-plane-upgrade-7p7tz to rollout in 1m21.075s
util.go:615: Successfully waited for control plane components to complete rollout in 10m50.125s
util.go:542: Successfully waited for 2 nodes to become ready for NodePool e2e-clusters-x2krs/control-plane-upgrade-7p7tz in 100ms
util.go:281: Successfully waited for kubeconfig to be published for HostedCluster e2e-clusters-x2krs/control-plane-upgrade-7p7tz in 50ms
util.go:298: Successfully waited for kubeconfig secret to have data in 50ms
util.go:281: Successfully waited for kubeconfig to be published for HostedCluster e2e-clusters-x2krs/control-plane-upgrade-7p7tz in 50ms
util.go:298: Successfully waited for kubeconfig secret to have data in 50ms
util.go:360: Successfully waited for a successful connection to the guest API server in 25ms
util.go:281: Successfully waited for kubeconfig to be published for HostedCluster e2e-clusters-x2krs/control-plane-upgrade-7p7tz in 50ms
util.go:298: Successfully waited for kubeconfig secret to have data in 50ms
util.go:360: Successfully waited for a successful connection to the guest API server in 25ms
util.go:2836: Successfully waited for HostedCluster e2e-clusters-x2krs/control-plane-upgrade-7p7tz to have valid conditions in 50ms
util.go:3111: Successfully waited for HostedCluster e2e-clusters-x2krs/control-plane-upgrade-7p7tz to have valid Status.Payload in 75ms
util.go:799: error: non-fatal, observed FailedScheduling or Preempted event: 0/6 nodes are available: 1 Insufficient cpu, 1 Insufficient memory, 5 node(s) didn't match pod anti-affinity rules. preemption: not eligible due to preemptionPolicy=Never.
util.go:799: error: non-fatal, observed FailedScheduling or Preempted event: 0/6 nodes are available: 1 Insufficient cpu, 1 Insufficient memory, 5 node(s) didn't match pod anti-affinity rules. preemption: not eligible due to preemptionPolicy=Never.
util.go:799: error: non-fatal, observed FailedScheduling or Preempted event: 0/6 nodes are available: 1 Insufficient cpu, 1 Insufficient memory, 5 node(s) didn't match pod anti-affinity rules. preemption: not eligible due to preemptionPolicy=Never.
util.go:799: error: non-fatal, observed FailedScheduling or Preempted event: 0/6 nodes are available: 1 Insufficient cpu, 1 Insufficient memory, 5 node(s) didn't match pod anti-affinity rules. preemption: not eligible due to preemptionPolicy=Never.
util.go:799: error: non-fatal, observed FailedScheduling or Preempted event: 0/6 nodes are available: 1 Insufficient cpu, 1 Insufficient memory, 5 node(s) didn't match pod anti-affinity rules. preemption: not eligible due to preemptionPolicy=Never.
util.go:799: error: non-fatal, observed FailedScheduling or Preempted event: 0/6 nodes are available: 1 Insufficient cpu, 1 Insufficient memory, 5 node(s) didn't match pod anti-affinity rules. preemption: not eligible due to preemptionPolicy=Never.
util.go:799: error: non-fatal, observed FailedScheduling or Preempted event: 0/6 nodes are available: 1 Insufficient memory, 5 node(s) didn't match pod anti-affinity rules. preemption: not eligible due to preemptionPolicy=Never.
util.go:799: error: non-fatal, observed FailedScheduling or Preempted event: 0/6 nodes are available: 1 Insufficient memory, 5 node(s) didn't match pod anti-affinity rules. preemption: not eligible due to preemptionPolicy=Never.
util.go:799: error: non-fatal, observed FailedScheduling or Preempted event: 0/6 nodes are available: 1 Insufficient cpu, 1 Insufficient memory, 5 node(s) didn't match pod anti-affinity rules. preemption: not eligible due to preemptionPolicy=Never.
util.go:799: error: non-fatal, observed FailedScheduling or Preempted event: 0/6 nodes are available: 1 Insufficient cpu, 5 node(s) didn't match pod anti-affinity rules. preemption: not eligible due to preemptionPolicy=Never.
util.go:799: error: non-fatal, observed FailedScheduling or Preempted event: 0/6 nodes are available: 6 node(s) didn't match pod anti-affinity rules. preemption: not eligible due to preemptionPolicy=Never.
util.go:799: error: non-fatal, observed FailedScheduling or Preempted event: 0/6 nodes are available: 6 node(s) didn't match pod anti-affinity rules. preemption: not eligible due to preemptionPolicy=Never.
util.go:799: error: non-fatal, observed FailedScheduling or Preempted event: 0/6 nodes are available: 1 Insufficient memory, 5 node(s) didn't match pod anti-affinity rules. preemption: not eligible due to preemptionPolicy=Never.
util.go:1110: test only supported on AWS platform, saw Azure
util.go:2419: Checking that all ValidatingAdmissionPolicies are present
util.go:2445: Checking Denied KAS Requests for ValidatingAdmissionPolicies
util.go:2461: Checking ClusterOperator status modifications are allowed
util.go:3798: All 69 pods in namespace e2e-clusters-x2krs-control-plane-upgrade-7p7tz have the expected RunAsUser UID 1004
TestUpgradeControlPlane/EnsureHostedCluster
0s
util.go:281: Successfully waited for kubeconfig to be published for HostedCluster e2e-clusters-x2krs/control-plane-upgrade-7p7tz in 50ms util.go:298: Successfully waited for kubeconfig secret to have data in 50ms util.go:360: Successfully waited for a successful connection to the guest API server in 25ms util.go:281: Successfully waited for kubeconfig to be published for HostedCluster e2e-clusters-x2krs/control-plane-upgrade-7p7tz in 50ms util.go:298: Successfully waited for kubeconfig secret to have data in 50ms util.go:360: Successfully waited for a successful connection to the guest API server in 25ms util.go:2836: Successfully waited for HostedCluster e2e-clusters-x2krs/control-plane-upgrade-7p7tz to have valid conditions in 50ms
TestUpgradeControlPlane/EnsureHostedCluster/EnsureAllContainersHavePullPolicyIfNotPresent
0s
TestUpgradeControlPlane/EnsureHostedCluster/EnsureAllContainersHaveTerminationMessagePolicyFallbackToLogsOnError
0s
TestUpgradeControlPlane/EnsureHostedCluster/EnsureAllRoutesUseHCPRouter
0s
TestUpgradeControlPlane/EnsureHostedCluster/EnsureHCPContainersHaveResourceRequests
0s
TestUpgradeControlPlane/EnsureHostedCluster/EnsureNetworkPolicies
0s
util.go:1110: test only supported on AWS platform, saw Azure
TestUpgradeControlPlane/EnsureHostedCluster/EnsureNoPodsWithTooHighPriority
0s
TestUpgradeControlPlane/EnsureHostedCluster/EnsureNoRapidDeploymentRollouts
0s
TestUpgradeControlPlane/EnsureHostedCluster/EnsurePayloadArchSetCorrectly
0s
util.go:3111: Successfully waited for HostedCluster e2e-clusters-x2krs/control-plane-upgrade-7p7tz to have valid Status.Payload in 75ms
TestUpgradeControlPlane/EnsureHostedCluster/EnsurePodsWithEmptyDirPVsHaveSafeToEvictAnnotations
0s
TestUpgradeControlPlane/EnsureHostedCluster/EnsureReadOnlyRootFilesystem
0s
TestUpgradeControlPlane/EnsureHostedCluster/EnsureReadOnlyRootFilesystemTmpDirMount
0s
TestUpgradeControlPlane/EnsureHostedCluster/EnsureSATokenNotMountedUnlessNecessary
0s
TestUpgradeControlPlane/EnsureHostedCluster/EnsureSecurityContextUID
0s
util.go:3798: All 69 pods in namespace e2e-clusters-x2krs-control-plane-upgrade-7p7tz have the expected RunAsUser UID 1004
TestUpgradeControlPlane/EnsureHostedCluster/EnsureValidatingAdmissionPoliciesCheckDeniedRequests
0s
util.go:2445: Checking Denied KAS Requests for ValidatingAdmissionPolicies
TestUpgradeControlPlane/EnsureHostedCluster/EnsureValidatingAdmissionPoliciesDontBlockStatusModifications
0s
util.go:2461: Checking ClusterOperator status modifications are allowed
TestUpgradeControlPlane/EnsureHostedCluster/EnsureValidatingAdmissionPoliciesExists
0s
util.go:2419: Checking that all ValidatingAdmissionPolicies are present
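util.go:2419 verifies that the expected ValidatingAdmissionPolicies exist in the guest cluster. A hedged client-go sketch of that kind of check, assuming a cluster and client-go version where the policy API is GA (admissionregistration.k8s.io/v1); the expected policy name below is a placeholder, not the real HyperShift set.

```go
// Sketch: list ValidatingAdmissionPolicies in the guest cluster and check for
// an expected name. "example-policy" is a placeholder, not a HyperShift policy.
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/guest-kubeconfig") // assumption
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	policies, err := client.AdmissionregistrationV1().ValidatingAdmissionPolicies().List(context.Background(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	found := map[string]bool{}
	for _, p := range policies.Items {
		found[p.Name] = true
	}
	if !found["example-policy"] { // placeholder name
		fmt.Println("missing ValidatingAdmissionPolicy \"example-policy\"")
	}
	fmt.Printf("found %d ValidatingAdmissionPolicies\n", len(policies.Items))
}
```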
TestUpgradeControlPlane/EnsureHostedCluster/NoticePreemptionOrFailedScheduling
0s
util.go:799: error: non-fatal, observed FailedScheduling or Preempted event: 0/6 nodes are available: 1 Insufficient cpu, 1 Insufficient memory, 5 node(s) didn't match pod anti-affinity rules. preemption: not eligible due to preemptionPolicy=Never.
util.go:799: error: non-fatal, observed FailedScheduling or Preempted event: 0/6 nodes are available: 1 Insufficient cpu, 1 Insufficient memory, 5 node(s) didn't match pod anti-affinity rules. preemption: not eligible due to preemptionPolicy=Never.
util.go:799: error: non-fatal, observed FailedScheduling or Preempted event: 0/6 nodes are available: 1 Insufficient cpu, 1 Insufficient memory, 5 node(s) didn't match pod anti-affinity rules. preemption: not eligible due to preemptionPolicy=Never.
util.go:799: error: non-fatal, observed FailedScheduling or Preempted event: 0/6 nodes are available: 1 Insufficient cpu, 1 Insufficient memory, 5 node(s) didn't match pod anti-affinity rules. preemption: not eligible due to preemptionPolicy=Never.
util.go:799: error: non-fatal, observed FailedScheduling or Preempted event: 0/6 nodes are available: 1 Insufficient cpu, 1 Insufficient memory, 5 node(s) didn't match pod anti-affinity rules. preemption: not eligible due to preemptionPolicy=Never.
util.go:799: error: non-fatal, observed FailedScheduling or Preempted event: 0/6 nodes are available: 1 Insufficient cpu, 1 Insufficient memory, 5 node(s) didn't match pod anti-affinity rules. preemption: not eligible due to preemptionPolicy=Never.
util.go:799: error: non-fatal, observed FailedScheduling or Preempted event: 0/6 nodes are available: 1 Insufficient memory, 5 node(s) didn't match pod anti-affinity rules. preemption: not eligible due to preemptionPolicy=Never.
util.go:799: error: non-fatal, observed FailedScheduling or Preempted event: 0/6 nodes are available: 1 Insufficient memory, 5 node(s) didn't match pod anti-affinity rules. preemption: not eligible due to preemptionPolicy=Never.
util.go:799: error: non-fatal, observed FailedScheduling or Preempted event: 0/6 nodes are available: 1 Insufficient cpu, 1 Insufficient memory, 5 node(s) didn't match pod anti-affinity rules. preemption: not eligible due to preemptionPolicy=Never.
util.go:799: error: non-fatal, observed FailedScheduling or Preempted event: 0/6 nodes are available: 1 Insufficient cpu, 5 node(s) didn't match pod anti-affinity rules. preemption: not eligible due to preemptionPolicy=Never.
util.go:799: error: non-fatal, observed FailedScheduling or Preempted event: 0/6 nodes are available: 6 node(s) didn't match pod anti-affinity rules. preemption: not eligible due to preemptionPolicy=Never.
util.go:799: error: non-fatal, observed FailedScheduling or Preempted event: 0/6 nodes are available: 6 node(s) didn't match pod anti-affinity rules. preemption: not eligible due to preemptionPolicy=Never.
util.go:799: error: non-fatal, observed FailedScheduling or Preempted event: 0/6 nodes are available: 1 Insufficient memory, 5 node(s) didn't match pod anti-affinity rules. preemption: not eligible due to preemptionPolicy=Never.
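These util.go:799 lines are explicitly non-fatal: the scheduler messages suggest control plane pods with required pod anti-affinity and a non-preempting priority class (preemptionPolicy=Never) transiently failing to land while the upgrade churns replicas. A rough client-go sketch of surfacing such events (namespace taken from the log above; this is not the actual util.go helper):

```go
// Sketch: scan a namespace for FailedScheduling / Preempted events and log
// them as non-fatal warnings, mirroring the behaviour reported by util.go:799.
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/mgmt-kubeconfig") // assumption
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	namespace := "e2e-clusters-x2krs-control-plane-upgrade-7p7tz" // from the log above
	events, err := client.CoreV1().Events(namespace).List(context.Background(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, ev := range events.Items {
		if ev.Reason == "FailedScheduling" || ev.Reason == "Preempted" {
			// Observed and reported, but the test run keeps going.
			fmt.Printf("error: non-fatal, observed %s event on %s/%s: %s\n",
				ev.Reason, ev.InvolvedObject.Kind, ev.InvolvedObject.Name, ev.Message)
		}
	}
}
```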
TestUpgradeControlPlane/EnsureHostedCluster/ValidateMetricsAreExposed
0s
TestUpgradeControlPlane/Main
0s
util.go:281: Successfully waited for kubeconfig to be published for HostedCluster e2e-clusters-x2krs/control-plane-upgrade-7p7tz in 50ms
util.go:298: Successfully waited for kubeconfig secret to have data in 50ms
util.go:360: Successfully waited for a successful connection to the guest API server in 25ms
control_plane_upgrade_test.go:48: Updating cluster image. Image: registry.build01.ci.openshift.org/ci-op-p925k8ww/release@sha256:4e5740a5bf69a014eff82dc72b254aa0d1aa5b125dbbfeeb97be23a83719cc0d
util.go:575: Successfully waited for HostedCluster e2e-clusters-x2krs/control-plane-upgrade-7p7tz to rollout in 1m21.075s
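control_plane_upgrade_test.go:48 bumps the HostedCluster's release image and then waits for the new payload to roll out. A minimal sketch of that update with a controller-runtime client, assuming the hypershift.openshift.io/v1beta1 types (the import path may differ by HyperShift version, and the target image here is a placeholder):

```go
// Sketch: point a HostedCluster at a new release payload. Field and import
// names assume the hypershift.openshift.io/v1beta1 API.
package main

import (
	"context"

	hyperv1 "github.com/openshift/hypershift/api/hypershift/v1beta1" // assumption: path varies by HyperShift version
	"k8s.io/apimachinery/pkg/runtime"
	"k8s.io/apimachinery/pkg/types"
	"sigs.k8s.io/controller-runtime/pkg/client"
	"sigs.k8s.io/controller-runtime/pkg/client/config"
)

func main() {
	scheme := runtime.NewScheme()
	if err := hyperv1.AddToScheme(scheme); err != nil {
		panic(err)
	}
	c, err := client.New(config.GetConfigOrDie(), client.Options{Scheme: scheme})
	if err != nil {
		panic(err)
	}

	ctx := context.Background()
	hc := &hyperv1.HostedCluster{}
	if err := c.Get(ctx, types.NamespacedName{Namespace: "e2e-clusters-x2krs", Name: "control-plane-upgrade-7p7tz"}, hc); err != nil {
		panic(err)
	}
	hc.Spec.Release.Image = "registry.example/release@sha256:..." // placeholder; the test uses the CI release payload
	if err := c.Update(ctx, hc); err != nil {
		panic(err)
	}
	// The e2e helpers then poll the HostedCluster until the new payload rolls out.
}
```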
TestUpgradeControlPlane/Main/EnsureFeatureGateStatus
0s
TestUpgradeControlPlane/Main/EnsureMachineDeploymentGeneration
0s
TestUpgradeControlPlane/Main/EnsureNoCrashingPods
0s
util.go:281: Successfully waited for kubeconfig to be published for HostedCluster e2e-clusters-x2krs/control-plane-upgrade-7p7tz in 50ms util.go:298: Successfully waited for kubeconfig secret to have data in 50ms
TestUpgradeControlPlane/Main/EnsureNodeCountMatchesNodePoolReplicas
0s
util.go:542: Successfully waited for 2 nodes to become ready for NodePool e2e-clusters-x2krs/control-plane-upgrade-7p7tz in 100ms
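util.go:542 waits until the guest cluster reports the expected number of Ready nodes for the NodePool. A hedged client-go sketch of that kind of wait; the node-pool label key used to filter nodes is an assumption for illustration, not a confirmed HyperShift label.

```go
// Sketch: poll the guest cluster until the expected number of nodes is Ready.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/guest-kubeconfig") // assumption
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	want := 2
	err = wait.PollUntilContextTimeout(context.Background(), 10*time.Second, 30*time.Minute, true,
		func(ctx context.Context) (bool, error) {
			nodes, err := client.CoreV1().Nodes().List(ctx, metav1.ListOptions{
				LabelSelector: "hypershift.openshift.io/nodePool=control-plane-upgrade-7p7tz", // assumed label key
			})
			if err != nil {
				return false, nil // tolerate transient API errors and retry
			}
			ready := 0
			for _, n := range nodes.Items {
				for _, cond := range n.Status.Conditions {
					if cond.Type == corev1.NodeReady && cond.Status == corev1.ConditionTrue {
						ready++
					}
				}
			}
			return ready >= want, nil
		})
	if err != nil {
		panic(err)
	}
	fmt.Printf("%d nodes ready\n", want)
}
```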
TestUpgradeControlPlane/Main/Wait_for_control_plane_components_to_complete_rollout
0s
util.go:615: Successfully waited for control plane components to complete rollout in 10m50.125s
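The util.go:615 wait for "control plane components to complete rollout" amounts to every workload in the hosted control plane namespace reaching its desired generation and replica counts. A single-pass sketch of that condition for Deployments (DaemonSets and StatefulSets would need analogous checks; namespace taken from the log):

```go
// Sketch: report whether every Deployment in the hosted control plane
// namespace has finished rolling out (generation observed, replicas updated
// and available).
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/mgmt-kubeconfig") // assumption
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	namespace := "e2e-clusters-x2krs-control-plane-upgrade-7p7tz"
	deployments, err := client.AppsV1().Deployments(namespace).List(context.Background(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	done := true
	for _, d := range deployments.Items {
		want := int32(1)
		if d.Spec.Replicas != nil {
			want = *d.Spec.Replicas
		}
		if d.Status.ObservedGeneration < d.Generation ||
			d.Status.UpdatedReplicas < want ||
			d.Status.AvailableReplicas < want {
			fmt.Printf("deployment %s still rolling out\n", d.Name)
			done = false
		}
	}
	fmt.Printf("all deployments rolled out: %v\n", done)
}
```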
TestUpgradeControlPlane/Teardown
0s
TestUpgradeControlPlane/ValidateHostedCluster
0s
util.go:281: Successfully waited for kubeconfig to be published for HostedCluster e2e-clusters-x2krs/control-plane-upgrade-7p7tz in 3m15.05s
util.go:298: Successfully waited for kubeconfig secret to have data in 50ms
eventually.go:104: Failed to get *v1.SelfSubjectReview: Post "https://api-control-plane-upgrade-7p7tz.aks-e2e.hypershift.azure.devcluster.openshift.com:443/apis/authentication.k8s.io/v1/selfsubjectreviews": EOF
eventually.go:104: Failed to get *v1.SelfSubjectReview: Post "https://api-control-plane-upgrade-7p7tz.aks-e2e.hypershift.azure.devcluster.openshift.com:443/apis/authentication.k8s.io/v1/selfsubjectreviews": net/http: TLS handshake timeout
eventually.go:104: Failed to get *v1.SelfSubjectReview: Post "https://api-control-plane-upgrade-7p7tz.aks-e2e.hypershift.azure.devcluster.openshift.com:443/apis/authentication.k8s.io/v1/selfsubjectreviews": EOF
util.go:360: Successfully waited for a successful connection to the guest API server in 33.875s
util.go:542: Successfully waited for 2 nodes to become ready in 10m3.075s
util.go:575: Successfully waited for HostedCluster e2e-clusters-x2krs/control-plane-upgrade-7p7tz to rollout in 3m51.075s
util.go:2836: Successfully waited for HostedCluster e2e-clusters-x2krs/control-plane-upgrade-7p7tz to have valid conditions in 50ms
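The eventually.go:104 failures show the probe used to confirm guest API server connectivity: a SelfSubjectReview POST that is retried through the EOF and TLS handshake timeouts seen while the upgraded control plane converges. A minimal client-go sketch of that probe (kubeconfig path assumed; the real helper retries rather than exiting):

```go
// Sketch: a single SelfSubjectReview against the guest API server, the same
// POST to /apis/authentication.k8s.io/v1/selfsubjectreviews seen failing above.
package main

import (
	"context"
	"fmt"

	authenticationv1 "k8s.io/api/authentication/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/guest-kubeconfig") // assumption
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	review, err := client.AuthenticationV1().SelfSubjectReviews().Create(
		context.Background(), &authenticationv1.SelfSubjectReview{}, metav1.CreateOptions{})
	if err != nil {
		panic(err) // the e2e wait retries on EOF / TLS handshake timeouts instead of failing
	}
	fmt.Printf("connected to the guest API server as %q\n", review.Status.UserInfo.Username)
}
```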
TestUpgradeControlPlane/ValidateHostedCluster/EnsureGuestWebhooksValidated
0s
TestUpgradeControlPlane/ValidateHostedCluster/EnsureNoCrashingPods
0s
util.go:281: Successfully waited for kubeconfig to be published for HostedCluster e2e-clusters-x2krs/control-plane-upgrade-7p7tz in 50ms util.go:298: Successfully waited for kubeconfig secret to have data in 50ms
TestUpgradeControlPlane/ValidateHostedCluster/EnsureNodeCommunication
0s
util.go:281: Successfully waited for kubeconfig to be published for HostedCluster e2e-clusters-x2krs/control-plane-upgrade-7p7tz in 75ms util.go:298: Successfully waited for kubeconfig secret to have data in 50ms
TestUpgradeControlPlane/ValidateHostedCluster/EnsureNodeCountMatchesNodePoolReplicas
0s
util.go:542: Successfully waited for 2 nodes to become ready for NodePool e2e-clusters-x2krs/control-plane-upgrade-7p7tz in 125ms
TestUpgradeControlPlane/ValidateHostedCluster/EnsureOAPIMountsTrustBundle
0s
TestUpgradeControlPlane/ValidateHostedCluster/ValidateConfigurationStatus
0s
util.go:3971: Successfully validated configuration authentication status consistency across HCP, HC, and guest cluster