PR #7903 - 03-10 13:02

Job: hypershift
FAILURE

Test Summary

Total Tests: 438
Passed: 298
Failed: 126
Skipped: 14

Failed Tests

TestAutoscaling
0s
hypershift_framework.go:475: Successfully created hostedcluster e2e-clusters-8h9ph/autoscaling-z5hdw in 14s util.go:293: Successfully waited for kubeconfig to be published for HostedCluster e2e-clusters-8h9ph/autoscaling-z5hdw in 57s util.go:310: Successfully waited for kubeconfig secret to have data in 0s eventually.go:105: Failed to get *v1.SelfSubjectReview: Post "https://api-autoscaling-z5hdw.service.ci.hypershift.devcluster.openshift.com:443/apis/authentication.k8s.io/v1/selfsubjectreviews": dial tcp: lookup api-autoscaling-z5hdw.service.ci.hypershift.devcluster.openshift.com on 172.30.0.10:53: no such host eventually.go:105: Failed to get *v1.SelfSubjectReview: Post "https://api-autoscaling-z5hdw.service.ci.hypershift.devcluster.openshift.com:443/apis/authentication.k8s.io/v1/selfsubjectreviews": dial tcp 3.94.65.115:443: i/o timeout eventually.go:105: Failed to get *v1.SelfSubjectReview: Post "https://api-autoscaling-z5hdw.service.ci.hypershift.devcluster.openshift.com:443/apis/authentication.k8s.io/v1/selfsubjectreviews": dial tcp 98.90.111.210:443: i/o timeout util.go:372: Successfully waited for a successful connection to the guest API server in 2m28.025s util.go:567: Successfully waited for 1 nodes to become ready in 7m45s util.go:600: Successfully waited for HostedCluster e2e-clusters-8h9ph/autoscaling-z5hdw to rollout in 5m15s util.go:2981: Successfully waited for HostedCluster e2e-clusters-8h9ph/autoscaling-z5hdw to have valid conditions in 0s util.go:567: Successfully waited for 1 nodes to become ready for NodePool e2e-clusters-8h9ph/autoscaling-z5hdw-us-east-1a in 25ms util.go:293: Successfully waited for kubeconfig to be published for HostedCluster e2e-clusters-8h9ph/autoscaling-z5hdw in 0s util.go:310: Successfully waited for kubeconfig secret to have data in 0s util.go:4127: Successfully validated configuration authentication status consistency across HCP, HC, and guest cluster util.go:293: Successfully waited for kubeconfig to be published for HostedCluster e2e-clusters-8h9ph/autoscaling-z5hdw in 0s util.go:310: Successfully waited for kubeconfig secret to have data in 0s util.go:372: Successfully waited for a successful connection to the guest API server in 0s util.go:567: Successfully waited for 1 nodes to become ready in 0s autoscaling_test.go:118: Enabled autoscaling. Namespace: e2e-clusters-8h9ph, name: autoscaling-z5hdw-us-east-1a, min: 1, max: 3 autoscaling_test.go:137: Created workload. Node: ip-10-0-11-247.ec2.internal, memcapacity: 14918692Ki util.go:567: Successfully waited for 3 nodes to become ready in 5m30s autoscaling_test.go:157: Deleted workload
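The transient "no such host" and "i/o timeout" entries above are expected while the external DNS record and load balancer for the guest API server propagate; the framework keeps retrying its SelfSubjectReview probe until the endpoint answers (2m28s here). A minimal sketch of that polling pattern, assuming a client-go clientset built from the published kubeconfig; the helper name, intervals, and logging are illustrative, not the framework's actual code:

package example

import (
	"context"
	"fmt"
	"time"

	authv1 "k8s.io/api/authentication/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitForGuestAPI polls the guest API server with a SelfSubjectReview until
// DNS resolves and the endpoint responds, tolerating transient lookup and
// connection errors along the way.
func waitForGuestAPI(ctx context.Context, kubeconfig []byte) error {
	cfg, err := clientcmd.RESTConfigFromKubeConfig(kubeconfig)
	if err != nil {
		return err
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		return err
	}
	return wait.PollUntilContextTimeout(ctx, 10*time.Second, 10*time.Minute, true,
		func(ctx context.Context) (bool, error) {
			_, err := client.AuthenticationV1().SelfSubjectReviews().Create(
				ctx, &authv1.SelfSubjectReview{}, metav1.CreateOptions{})
			if err != nil {
				fmt.Printf("guest API not reachable yet: %v\n", err)
				return false, nil // keep retrying on DNS/timeout errors
			}
			return true, nil
		})
}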
TestAutoscaling/Main
0s
TestAutoscaling/Main/TestAutoscaling
0s
util.go:293: Successfully waited for kubeconfig to be published for HostedCluster e2e-clusters-8h9ph/autoscaling-z5hdw in 0s util.go:310: Successfully waited for kubeconfig secret to have data in 0s util.go:372: Successfully waited for a successful connection to the guest API server in 0s util.go:567: Successfully waited for 1 nodes to become ready in 0s autoscaling_test.go:118: Enabled autoscaling. Namespace: e2e-clusters-8h9ph, name: autoscaling-z5hdw-us-east-1a, min: 1, max: 3 autoscaling_test.go:137: Created workload. Node: ip-10-0-11-247.ec2.internal, memcapacity: 14918692Ki util.go:567: Successfully waited for 3 nodes to become ready in 5m30s autoscaling_test.go:157: Deleted workload
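For context, the autoscaling flow exercised here switches the NodePool from a fixed replica count to a min/max range and then schedules a workload large enough to force a scale-out from 1 to 3 nodes. A rough sketch of the NodePool mutation, assuming the HyperShift v1beta1 API types and a controller-runtime client; treat the import path and field names as assumptions drawn from the public API, and the helper itself as illustrative:

package example

import (
	"context"

	hyperv1 "github.com/openshift/hypershift/api/hypershift/v1beta1"
	"k8s.io/apimachinery/pkg/types"
	crclient "sigs.k8s.io/controller-runtime/pkg/client"
)

// enableAutoscaling clears the fixed replica count and sets a min/max range,
// handing node-count decisions to the cluster autoscaler.
func enableAutoscaling(ctx context.Context, c crclient.Client, namespace, name string) error {
	nodePool := &hyperv1.NodePool{}
	if err := c.Get(ctx, types.NamespacedName{Namespace: namespace, Name: name}, nodePool); err != nil {
		return err
	}
	original := nodePool.DeepCopy()
	nodePool.Spec.Replicas = nil
	nodePool.Spec.AutoScaling = &hyperv1.NodePoolAutoScaling{Min: 1, Max: 3}
	return c.Patch(ctx, nodePool, crclient.MergeFrom(original))
}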
TestAutoscaling/ValidateHostedCluster
0s
util.go:293: Successfully waited for kubeconfig to be published for HostedCluster e2e-clusters-8h9ph/autoscaling-z5hdw in 57s util.go:310: Successfully waited for kubeconfig secret to have data in 0s eventually.go:105: Failed to get *v1.SelfSubjectReview: Post "https://api-autoscaling-z5hdw.service.ci.hypershift.devcluster.openshift.com:443/apis/authentication.k8s.io/v1/selfsubjectreviews": dial tcp: lookup api-autoscaling-z5hdw.service.ci.hypershift.devcluster.openshift.com on 172.30.0.10:53: no such host eventually.go:105: Failed to get *v1.SelfSubjectReview: Post "https://api-autoscaling-z5hdw.service.ci.hypershift.devcluster.openshift.com:443/apis/authentication.k8s.io/v1/selfsubjectreviews": dial tcp 3.94.65.115:443: i/o timeout eventually.go:105: Failed to get *v1.SelfSubjectReview: Post "https://api-autoscaling-z5hdw.service.ci.hypershift.devcluster.openshift.com:443/apis/authentication.k8s.io/v1/selfsubjectreviews": dial tcp 98.90.111.210:443: i/o timeout util.go:372: Successfully waited for a successful connection to the guest API server in 2m28.025s util.go:567: Successfully waited for 1 nodes to become ready in 7m45s util.go:600: Successfully waited for HostedCluster e2e-clusters-8h9ph/autoscaling-z5hdw to rollout in 5m15s util.go:2981: Successfully waited for HostedCluster e2e-clusters-8h9ph/autoscaling-z5hdw to have valid conditions in 0s
TestAutoscaling/ValidateHostedCluster/EnsureGuestWebhooksValidated
0s
TestAutoscaling/ValidateHostedCluster/EnsureNoCrashingPods
0s
TestAutoscaling/ValidateHostedCluster/EnsureNodeCommunication
0s
util.go:293: Successfully waited for kubeconfig to be published for HostedCluster e2e-clusters-8h9ph/autoscaling-z5hdw in 0s util.go:310: Successfully waited for kubeconfig secret to have data in 0s
TestAutoscaling/ValidateHostedCluster/EnsureNodeCountMatchesNodePoolReplicas
0s
util.go:567: Successfully waited for 1 nodes to become ready for NodePool e2e-clusters-8h9ph/autoscaling-z5hdw-us-east-1a in 25ms
TestAutoscaling/ValidateHostedCluster/EnsureOAPIMountsTrustBundle
0s
TestAutoscaling/ValidateHostedCluster/ValidateConfigurationStatus
0s
util.go:4127: Successfully validated configuration authentication status consistency across HCP, HC, and guest cluster
TestCreateClusterPrivateWithRouteKAS
42m21.62s
hypershift_framework.go:475: Successfully created hostedcluster e2e-clusters-mblj9/private-5nztj in 14s hypershift_framework.go:272: skipping teardown, already called util.go:293: Successfully waited for kubeconfig to be published for HostedCluster e2e-clusters-mblj9/private-5nztj in 48s util.go:310: Successfully waited for kubeconfig secret to have data in 0s util.go:697: Successfully waited for NodePools for HostedCluster e2e-clusters-mblj9/private-5nztj to have all of their desired nodes in 11m33s util.go:600: Successfully waited for HostedCluster e2e-clusters-mblj9/private-5nztj to rollout in 4m54s util.go:2981: Successfully waited for HostedCluster e2e-clusters-mblj9/private-5nztj to have valid conditions in 0s
TestCreateClusterPrivateWithRouteKAS/Teardown
21m34.3s
journals.go:245: Successfully copied machine journals to /logs/artifacts/TestCreateClusterPrivateWithRouteKAS/machine-journals fixture.go:331: Failed to wait for infra resources in guest cluster to be deleted: operation error Resource Groups Tagging API: GetResources, context deadline exceeded hypershift_framework.go:491: archiving /logs/artifacts/TestCreateClusterPrivateWithRouteKAS/hostedcluster-private-5nztj to /logs/artifacts/TestCreateClusterPrivateWithRouteKAS/hostedcluster.tar.gz
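The teardown failure above is a timeout from the AWS Resource Groups Tagging API while checking for leftover guest-cluster infrastructure. A minimal sketch of that kind of check with the AWS SDK for Go v2, assuming resources are discovered by a cluster tag; the tag key shown is an assumption, not necessarily what the fixture uses:

package example

import (
	"context"
	"fmt"
	"time"

	"github.com/aws/aws-sdk-go-v2/aws"
	"github.com/aws/aws-sdk-go-v2/config"
	rgt "github.com/aws/aws-sdk-go-v2/service/resourcegroupstaggingapi"
	rgttypes "github.com/aws/aws-sdk-go-v2/service/resourcegroupstaggingapi/types"
)

// listRemainingGuestResources returns the ARNs of resources still carrying the
// guest cluster's tag; an empty slice means infra teardown is complete.
func listRemainingGuestResources(ctx context.Context, clusterID string) ([]string, error) {
	// Bound the call so a slow tagging API surfaces as "context deadline exceeded"
	// instead of hanging the teardown indefinitely.
	ctx, cancel := context.WithTimeout(ctx, 2*time.Minute)
	defer cancel()

	cfg, err := config.LoadDefaultConfig(ctx)
	if err != nil {
		return nil, err
	}
	client := rgt.NewFromConfig(cfg)

	out, err := client.GetResources(ctx, &rgt.GetResourcesInput{
		TagFilters: []rgttypes.TagFilter{{
			Key:    aws.String("kubernetes.io/cluster/" + clusterID), // assumed tag key
			Values: []string{"owned"},
		}},
	})
	if err != nil {
		return nil, fmt.Errorf("GetResources: %w", err)
	}
	arns := make([]string, 0, len(out.ResourceTagMappingList))
	for _, m := range out.ResourceTagMappingList {
		arns = append(arns, aws.ToString(m.ResourceARN))
	}
	return arns, nil
}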
TestNodePool
0s
hypershift_framework.go:475: Successfully created hostedcluster e2e-clusters-r4f9x/node-pool-nrpwq in 36s nodepool_test.go:154: tests only supported on platform KubeVirt hypershift_framework.go:475: Successfully created hostedcluster e2e-clusters-nv2ck/node-pool-4dksv in 20s hypershift_framework.go:272: skipping teardown, already called util.go:293: Successfully waited for kubeconfig to be published for HostedCluster e2e-clusters-nv2ck/node-pool-4dksv in 54s util.go:310: Successfully waited for kubeconfig secret to have data in 0s eventually.go:105: Failed to get *v1.SelfSubjectReview: Post "https://api-node-pool-4dksv.service.ci.hypershift.devcluster.openshift.com:443/apis/authentication.k8s.io/v1/selfsubjectreviews": dial tcp: lookup api-node-pool-4dksv.service.ci.hypershift.devcluster.openshift.com on 172.30.0.10:53: no such host eventually.go:105: Failed to get *v1.SelfSubjectReview: Post "https://api-node-pool-4dksv.service.ci.hypershift.devcluster.openshift.com:443/apis/authentication.k8s.io/v1/selfsubjectreviews": dial tcp 23.23.185.251:443: i/o timeout util.go:372: Successfully waited for a successful connection to the guest API server in 2m10.15s util.go:567: Successfully waited for 0 nodes to become ready in 0s util.go:2981: Successfully waited for HostedCluster e2e-clusters-nv2ck/node-pool-4dksv to have valid conditions in 2m30s util.go:293: Successfully waited for kubeconfig to be published for HostedCluster e2e-clusters-r4f9x/node-pool-nrpwq in 1m12s util.go:310: Successfully waited for kubeconfig secret to have data in 0s eventually.go:105: Failed to get *v1.SelfSubjectReview: Post "https://api-node-pool-nrpwq.service.ci.hypershift.devcluster.openshift.com:443/apis/authentication.k8s.io/v1/selfsubjectreviews": dial tcp: lookup api-node-pool-nrpwq.service.ci.hypershift.devcluster.openshift.com on 172.30.0.10:53: no such host util.go:372: Successfully waited for a successful connection to the guest API server in 1m21.675s util.go:567: Successfully waited for 0 nodes to become ready in 25ms util.go:2981: Successfully waited for HostedCluster e2e-clusters-r4f9x/node-pool-nrpwq to have valid conditions in 3m3s util.go:567: Successfully waited for 0 nodes to become ready for NodePool e2e-clusters-nv2ck/node-pool-4dksv-us-east-1a in 25ms util.go:4127: Successfully validated configuration authentication status consistency across HCP, HC, and guest cluster util.go:293: Successfully waited for kubeconfig to be published for HostedCluster e2e-clusters-nv2ck/node-pool-4dksv in 0s util.go:310: Successfully waited for kubeconfig secret to have data in 0s util.go:372: Successfully waited for a successful connection to the guest API server in 0s nodepool_additionalTrustBundlePropagation_test.go:40: Starting AdditionalTrustBundlePropagationTest. 
util.go:567: Successfully waited for 1 nodes to become ready for NodePool e2e-clusters-nv2ck/node-pool-4dksv-test-additional-trust-bundle-propagation in 7m12s nodepool_test.go:404: Successfully waited for NodePool e2e-clusters-nv2ck/node-pool-4dksv-test-additional-trust-bundle-propagation to have correct status in 0s nodepool_test.go:404: Successfully waited for NodePool e2e-clusters-nv2ck/node-pool-4dksv-test-additional-trust-bundle-propagation to have correct status in 0s util.go:567: Successfully waited for 0 nodes to become ready for NodePool e2e-clusters-r4f9x/node-pool-nrpwq-us-east-1b in 25ms util.go:4127: Successfully validated configuration authentication status consistency across HCP, HC, and guest cluster util.go:293: Successfully waited for kubeconfig to be published for HostedCluster e2e-clusters-r4f9x/node-pool-nrpwq in 0s util.go:310: Successfully waited for kubeconfig secret to have data in 0s util.go:372: Successfully waited for a successful connection to the guest API server in 0s nodepool_kms_root_volume_test.go:42: Starting test KMSRootVolumeTest nodepool_kms_root_volume_test.go:54: retrieved KMS ARN: arn:aws:kms:us-east-1:820196288204:key/d3cdd9e0-3fd1-47a4-a559-72ae3672c5a6 util.go:567: Successfully waited for 1 nodes to become ready for NodePool e2e-clusters-r4f9x/node-pool-nrpwq-test-kms-root-volume in 10m0.075s nodepool_test.go:404: Successfully waited for NodePool e2e-clusters-r4f9x/node-pool-nrpwq-test-kms-root-volume to have correct status in 0s nodepool_kms_root_volume_test.go:85: instanceID: i-0c66e41a1bca0c1b5 nodepool_test.go:404: Successfully waited for NodePool e2e-clusters-r4f9x/node-pool-nrpwq-test-kms-root-volume to have correct status in 0s util.go:567: Successfully waited for 1 nodes to become ready for NodePool e2e-clusters-r4f9x/node-pool-nrpwq-test-autorepair in 7m3s nodepool_test.go:404: Successfully waited for NodePool e2e-clusters-r4f9x/node-pool-nrpwq-test-autorepair to have correct status in 0s nodepool_autorepair_test.go:65: Terminating AWS Instance with a autorepair NodePool nodepool_autorepair_test.go:70: Terminating AWS instance: i-08393f9a3abcb659d util.go:567: Successfully waited for 1 nodes to become ready for NodePool e2e-clusters-r4f9x/node-pool-nrpwq-test-autorepair having 1 available nodes without ip-10-0-5-125.ec2.internal in 8m21s nodepool_test.go:404: Successfully waited for NodePool e2e-clusters-r4f9x/node-pool-nrpwq-test-autorepair to have correct status in 0s nodepool_machineconfig_test.go:54: Starting test NodePoolMachineconfigRolloutTest util.go:567: Successfully waited for 1 nodes to become ready for NodePool e2e-clusters-r4f9x/node-pool-nrpwq-test-machineconfig in 6m18s nodepool_test.go:404: Successfully waited for NodePool e2e-clusters-r4f9x/node-pool-nrpwq-test-machineconfig to have correct status in 0s util.go:483: Successfully waited for NodePool e2e-clusters-r4f9x/node-pool-nrpwq-test-machineconfig to start config update in 15s util.go:499: Successfully waited for NodePool e2e-clusters-r4f9x/node-pool-nrpwq-test-machineconfig to finish config update in 8m0s nodepool_machineconfig_test.go:165: Successfully waited for all pods in the DaemonSet kube-system/machineconfig-update-checker-replace to be ready in 5s util.go:567: Successfully waited for 1 nodes to become ready for NodePool e2e-clusters-r4f9x/node-pool-nrpwq-test-machineconfig in 0s nodepool_test.go:404: Successfully waited for NodePool e2e-clusters-r4f9x/node-pool-nrpwq-test-machineconfig to have correct status in 0s nodepool_nto_machineconfig_test.go:67: 
Starting test NTOMachineConfigRolloutTest util.go:567: Successfully waited for 2 nodes to become ready for NodePool e2e-clusters-r4f9x/node-pool-nrpwq-test-ntomachineconfig-replace in 8m48.025s nodepool_test.go:404: Successfully waited for NodePool e2e-clusters-r4f9x/node-pool-nrpwq-test-ntomachineconfig-replace to have correct status in 0s util.go:483: Successfully waited for NodePool e2e-clusters-r4f9x/node-pool-nrpwq-test-ntomachineconfig-replace to start config update in 15s util.go:499: Successfully waited for NodePool e2e-clusters-r4f9x/node-pool-nrpwq-test-ntomachineconfig-replace to finish config update in 8m20s nodepool_machineconfig_test.go:165: Successfully waited for all pods in the DaemonSet kube-system/node-pool-nrpwq-test-ntomachineconfig-replace to be ready in 0s util.go:567: Successfully waited for 2 nodes to become ready for NodePool e2e-clusters-r4f9x/node-pool-nrpwq-test-ntomachineconfig-replace in 0s nodepool_test.go:404: Successfully waited for NodePool e2e-clusters-r4f9x/node-pool-nrpwq-test-ntomachineconfig-replace to have correct status in 0s nodepool_nto_machineconfig_test.go:67: Starting test NTOMachineConfigRolloutTest util.go:567: Successfully waited for 2 nodes to become ready for NodePool e2e-clusters-r4f9x/node-pool-nrpwq-test-ntomachineconfig-inplace in 7m30.1s nodepool_test.go:404: Successfully waited for NodePool e2e-clusters-r4f9x/node-pool-nrpwq-test-ntomachineconfig-inplace to have correct status in 0s util.go:483: Successfully waited for NodePool e2e-clusters-r4f9x/node-pool-nrpwq-test-ntomachineconfig-inplace to start config update in 15s util.go:499: Successfully waited for NodePool e2e-clusters-r4f9x/node-pool-nrpwq-test-ntomachineconfig-inplace to finish config update in 4m20s nodepool_machineconfig_test.go:165: Successfully waited for all pods in the DaemonSet kube-system/node-pool-nrpwq-test-ntomachineconfig-inplace to be ready in 10s util.go:567: Successfully waited for 2 nodes to become ready for NodePool e2e-clusters-r4f9x/node-pool-nrpwq-test-ntomachineconfig-inplace in 0s nodepool_test.go:404: Successfully waited for NodePool e2e-clusters-r4f9x/node-pool-nrpwq-test-ntomachineconfig-inplace to have correct status in 0s nodepool_upgrade_test.go:99: starting test NodePoolUpgradeTest util.go:567: Successfully waited for 1 nodes to become ready for NodePool e2e-clusters-r4f9x/node-pool-nrpwq-test-replaceupgrade in 8m33.075s nodepool_test.go:404: Successfully waited for NodePool e2e-clusters-r4f9x/node-pool-nrpwq-test-replaceupgrade to have correct status in 0s nodepool_upgrade_test.go:177: Validating all Nodes have the synced labels and taints nodepool_upgrade_test.go:180: Successfully waited for NodePool e2e-clusters-r4f9x/node-pool-nrpwq-test-replaceupgrade to have version 4.22.0-0.ci-2026-03-09-080910 in 0s nodepool_upgrade_test.go:197: Updating NodePool image. 
Image: registry.build01.ci.openshift.org/ci-op-fv2gl3ik/release@sha256:76d902450b4e400acb2a23947a5cd9ae41a64aa52f7c56ae505a025e22611106 nodepool_upgrade_test.go:204: Successfully waited for NodePool e2e-clusters-r4f9x/node-pool-nrpwq-test-replaceupgrade to start the upgrade in 3s nodepool_upgrade_test.go:217: Successfully waited for NodePool e2e-clusters-r4f9x/node-pool-nrpwq-test-replaceupgrade to have version 4.22.0-0.ci-2026-03-10-131255-test-ci-op-fv2gl3ik-latest in 7m54s util.go:567: Successfully waited for 1 nodes to become ready for NodePool e2e-clusters-r4f9x/node-pool-nrpwq-test-replaceupgrade in 0s nodepool_test.go:404: Successfully waited for NodePool e2e-clusters-r4f9x/node-pool-nrpwq-test-replaceupgrade to have correct status in 0s nodepool_upgrade_test.go:99: starting test NodePoolUpgradeTest util.go:567: Successfully waited for 1 nodes to become ready for NodePool e2e-clusters-r4f9x/node-pool-nrpwq-test-inplaceupgrade in 12m18s nodepool_test.go:404: Successfully waited for NodePool e2e-clusters-r4f9x/node-pool-nrpwq-test-inplaceupgrade to have correct status in 0s nodepool_upgrade_test.go:177: Validating all Nodes have the synced labels and taints nodepool_upgrade_test.go:180: Successfully waited for NodePool e2e-clusters-r4f9x/node-pool-nrpwq-test-inplaceupgrade to have version 4.22.0-0.ci-2026-03-09-080910 in 0s nodepool_upgrade_test.go:197: Updating NodePool image. Image: registry.build01.ci.openshift.org/ci-op-fv2gl3ik/release@sha256:76d902450b4e400acb2a23947a5cd9ae41a64aa52f7c56ae505a025e22611106 nodepool_upgrade_test.go:204: Successfully waited for NodePool e2e-clusters-r4f9x/node-pool-nrpwq-test-inplaceupgrade to start the upgrade in 3s nodepool_upgrade_test.go:217: Successfully waited for NodePool e2e-clusters-r4f9x/node-pool-nrpwq-test-inplaceupgrade to have version 4.22.0-0.ci-2026-03-10-131255-test-ci-op-fv2gl3ik-latest in 3m21s util.go:567: Successfully waited for 1 nodes to become ready for NodePool e2e-clusters-r4f9x/node-pool-nrpwq-test-inplaceupgrade in 18s nodepool_test.go:404: Successfully waited for NodePool e2e-clusters-r4f9x/node-pool-nrpwq-test-inplaceupgrade to have correct status in 0s nodepool_kv_cache_image_test.go:42: test only supported on platform KubeVirt util.go:567: Successfully waited for 2 nodes to become ready for NodePool e2e-clusters-r4f9x/node-pool-nrpwq-test-rolling-upgrade in 8m48s nodepool_test.go:404: Successfully waited for NodePool e2e-clusters-r4f9x/node-pool-nrpwq-test-rolling-upgrade to have correct status in 0s nodepool_rolling_upgrade_test.go:106: Successfully waited for NodePool e2e-clusters-r4f9x/node-pool-nrpwq-test-rolling-upgrade to start the rolling upgrade in 3s nodepool_rolling_upgrade_test.go:120: Successfully waited for NodePool e2e-clusters-r4f9x/node-pool-nrpwq-test-rolling-upgrade to finish the rolling upgrade in 12m45s nodepool_test.go:404: Successfully waited for NodePool e2e-clusters-r4f9x/node-pool-nrpwq-test-rolling-upgrade to have correct status in 0s util.go:567: Successfully waited for 1 nodes to become ready for NodePool e2e-clusters-r4f9x/node-pool-nrpwq-test-day2-tags in 14m42s nodepool_test.go:404: Successfully waited for NodePool e2e-clusters-r4f9x/node-pool-nrpwq-test-day2-tags to have correct status in 0s nodepool_test.go:404: Successfully waited for NodePool e2e-clusters-r4f9x/node-pool-nrpwq-test-day2-tags to have correct status in 0s util.go:567: Successfully waited for 1 nodes to become ready for NodePool e2e-clusters-r4f9x/node-pool-nrpwq-test-spot-termination in 5m57.025s nodepool_test.go:404: 
Successfully waited for NodePool e2e-clusters-r4f9x/node-pool-nrpwq-test-spot-termination to have correct status in 3s nodepool_test.go:404: Successfully waited for NodePool e2e-clusters-r4f9x/node-pool-nrpwq-test-spot-termination to have correct status in 0s nodepool_kv_qos_guaranteed_test.go:43: test only supported on platform KubeVirt nodepool_kv_jsonpatch_test.go:42: test only supported on platform KubeVirt nodepool_kv_nodeselector_test.go:48: test only supported on platform KubeVirt nodepool_kv_multinet_test.go:36: test only supported on platform KubeVirt nodepool_osp_advanced_test.go:53: Starting test OpenStackAdvancedTest nodepool_osp_advanced_test.go:56: test only supported on platform OpenStack nodepool_nto_performanceprofile_test.go:59: Starting test NTOPerformanceProfileTest util.go:567: Successfully waited for 1 nodes to become ready for NodePool e2e-clusters-r4f9x/node-pool-nrpwq-test-ntoperformanceprofile in 6m48s nodepool_test.go:404: Successfully waited for NodePool e2e-clusters-r4f9x/node-pool-nrpwq-test-ntoperformanceprofile to have correct status in 0s nodepool_nto_performanceprofile_test.go:80: Entering NTO PerformanceProfile test nodepool_nto_performanceprofile_test.go:110: Hosted control plane namespace is e2e-clusters-r4f9x-node-pool-nrpwq nodepool_nto_performanceprofile_test.go:112: Successfully waited for performance profile ConfigMap to exist with correct name labels and annotations in 3s nodepool_nto_performanceprofile_test.go:159: Successfully waited for performance profile status ConfigMap to exist in 0s nodepool_nto_performanceprofile_test.go:201: Successfully waited for performance profile status to be reflected under the NodePool status in 0s nodepool_nto_performanceprofile_test.go:254: Deleting configmap reference from nodepool ... nodepool_nto_performanceprofile_test.go:261: Successfully waited for performance profile ConfigMap to be deleted in 3s nodepool_nto_performanceprofile_test.go:280: Ending NTO PerformanceProfile test: OK nodepool_test.go:404: Successfully waited for NodePool e2e-clusters-r4f9x/node-pool-nrpwq-test-ntoperformanceprofile to have correct status in 30s nodepool_prev_release_test.go:33: Starting NodePoolPrevReleaseCreateTest. util.go:567: Successfully waited for 1 nodes to become ready for NodePool e2e-clusters-r4f9x/node-pool-nrpwq-zkkpq in 8m0.025s nodepool_test.go:404: Successfully waited for NodePool e2e-clusters-r4f9x/node-pool-nrpwq-zkkpq to have correct status in 0s nodepool_prev_release_test.go:57: NodePoolPrevReleaseCreateTest tests the creation of a NodePool with previous OCP release. nodepool_prev_release_test.go:59: Validating all Nodes have the synced labels and taints nodepool_test.go:404: Successfully waited for NodePool e2e-clusters-r4f9x/node-pool-nrpwq-zkkpq to have correct status in 0s nodepool_prev_release_test.go:33: Starting NodePoolPrevReleaseCreateTest. util.go:567: Successfully waited for 1 nodes to become ready for NodePool e2e-clusters-r4f9x/node-pool-nrpwq-j56b2 in 9m9s nodepool_test.go:404: Successfully waited for NodePool e2e-clusters-r4f9x/node-pool-nrpwq-j56b2 to have correct status in 0s nodepool_prev_release_test.go:57: NodePoolPrevReleaseCreateTest tests the creation of a NodePool with previous OCP release. nodepool_prev_release_test.go:59: Validating all Nodes have the synced labels and taints nodepool_test.go:404: Successfully waited for NodePool e2e-clusters-r4f9x/node-pool-nrpwq-j56b2 to have correct status in 0s nodepool_prev_release_test.go:33: Starting NodePoolPrevReleaseCreateTest. 
util.go:567: Successfully waited for 1 nodes to become ready for NodePool e2e-clusters-r4f9x/node-pool-nrpwq-d9rft in 12m42s nodepool_test.go:404: Successfully waited for NodePool e2e-clusters-r4f9x/node-pool-nrpwq-d9rft to have correct status in 0s nodepool_prev_release_test.go:57: NodePoolPrevReleaseCreateTest tests the creation of a NodePool with previous OCP release. nodepool_prev_release_test.go:59: Validating all Nodes have the synced labels and taints nodepool_test.go:404: Successfully waited for NodePool e2e-clusters-r4f9x/node-pool-nrpwq-d9rft to have correct status in 0s nodepool_prev_release_test.go:33: Starting NodePoolPrevReleaseCreateTest. nodepool_test.go:357: NodePool version is outside supported skew, validating condition only (skipping node readiness check) nodepool_test.go:404: Successfully waited for NodePool e2e-clusters-r4f9x/node-pool-nrpwq-85lx6 to have correct status in 6s nodepool_mirrorconfigs_test.go:60: Starting test MirrorConfigsTest util.go:567: Successfully waited for 1 nodes to become ready for NodePool e2e-clusters-r4f9x/node-pool-nrpwq-test-mirrorconfigs in 13m27s nodepool_test.go:404: Successfully waited for NodePool e2e-clusters-r4f9x/node-pool-nrpwq-test-mirrorconfigs to have correct status in 0s nodepool_mirrorconfigs_test.go:81: Entering MirrorConfigs test nodepool_mirrorconfigs_test.go:111: Hosted control plane namespace is e2e-clusters-r4f9x-node-pool-nrpwq nodepool_mirrorconfigs_test.go:113: Successfully waited for kubeletConfig should be mirrored and present in the hosted cluster in 3s nodepool_mirrorconfigs_test.go:157: Deleting KubeletConfig configmap reference from nodepool ... nodepool_mirrorconfigs_test.go:163: Successfully waited for KubeletConfig configmap to be deleted in 3s nodepool_mirrorconfigs_test.go:101: Exiting MirrorConfigs test: OK nodepool_test.go:404: Successfully waited for NodePool e2e-clusters-r4f9x/node-pool-nrpwq-test-mirrorconfigs to have correct status in 4m24s nodepool_imagetype_test.go:51: Starting test NodePoolImageTypeTest util.go:567: Successfully waited for 1 nodes to become ready for NodePool e2e-clusters-r4f9x/node-pool-nrpwq-test-imagetype in 24m18s nodepool_test.go:404: Successfully waited for NodePool e2e-clusters-r4f9x/node-pool-nrpwq-test-imagetype to have correct status in 0s nodepool_imagetype_test.go:76: Successfully waited for wait for nodepool e2e-clusters-r4f9x/node-pool-nrpwq-test-imagetype to have ValidPlatformImageType condition with settled status in 0s nodepool_imagetype_test.go:117: ValidPlatformImageType condition confirmed Windows AMI: Bootstrap Windows AMI is "ami-0a997f2085e8e29f0" nodepool_imagetype_test.go:133: Expected Windows LI AMI: ami-0a997f2085e8e29f0 nodepool_imagetype_test.go:141: Checking node ip-10-0-13-82.ec2.internal: OperatingSystem=linux, OSImage=Red Hat Enterprise Linux CoreOS 9.8.20260305-0 (Plow) nodepool_imagetype_test.go:170: Verifying EC2 instance i-05fa202eedcf950c0 for node ip-10-0-13-82.ec2.internal nodepool_imagetype_test.go:181: Node ip-10-0-13-82.ec2.internal is running on EC2 instance i-05fa202eedcf950c0 with AMI ami-0a997f2085e8e29f0 nodepool_imagetype_test.go:191: NodePoolImageTypeTest passed - Windows LI AMI validated at all layers (NodePool condition, Node OS, EC2 AMI) nodepool_test.go:404: Successfully waited for NodePool e2e-clusters-r4f9x/node-pool-nrpwq-test-imagetype to have correct status in 0s nodepool_spot_termination_handler_test.go:131: Adding SQS policy to NodePool role arn:aws:iam::820196288204:role/node-pool-nrpwq-node-pool 
nodepool_spot_termination_handler_test.go:158: Discovered SQS queue URL: https://sqs.us-east-1.amazonaws.com/820196288204/agarcial-nth-queue nodepool_spot_termination_handler_test.go:160: Adding SQS queue URL annotation to HostedCluster e2e-clusters-r4f9x/node-pool-nrpwq nodepool_spot_termination_handler_test.go:172: Waiting for aws-node-termination-handler deployment to be ready in namespace e2e-clusters-r4f9x-node-pool-nrpwq eventually.go:105: Failed to get *v1.Deployment: deployments.apps "aws-node-termination-handler" not found nodepool_spot_termination_handler_test.go:179: Successfully waited for Waiting for deployment e2e-clusters-r4f9x-node-pool-nrpwq/aws-node-termination-handler to be ready in 10s nodepool_spot_termination_handler_test.go:197: aws-node-termination-handler deployment is ready nodepool_spot_termination_handler_test.go:201: Waiting for spot MachineHealthCheck e2e-clusters-r4f9x-node-pool-nrpwq/node-pool-nrpwq-test-spot-termination-spot to be created nodepool_spot_termination_handler_test.go:208: Successfully waited for Waiting for MachineHealthCheck e2e-clusters-r4f9x-node-pool-nrpwq/node-pool-nrpwq-test-spot-termination-spot to be created with correct selector in 0s nodepool_spot_termination_handler_test.go:227: Spot MachineHealthCheck is created with correct label selector nodepool_spot_termination_handler_test.go:234: Found ready spot node: ip-10-0-4-121.ec2.internal with providerID: aws:///us-east-1b/i-0868f9804e9d80cc2 nodepool_spot_termination_handler_test.go:238: Sending EC2 Rebalance Recommendation event to SQS queue for instance i-0868f9804e9d80cc2 nodepool_spot_termination_handler_test.go:266: Successfully sent EC2 Rebalance Recommendation event to SQS queue nodepool_spot_termination_handler_test.go:269: Waiting for node ip-10-0-4-121.ec2.internal to have taint prefix aws-node-termination-handler/rebalance-recommendation nodepool_spot_termination_handler_test.go:270: Successfully waited for Waiting for node ip-10-0-4-121.ec2.internal to have rebalance recommendation taint in 5s nodepool_spot_termination_handler_test.go:288: Node ip-10-0-4-121.ec2.internal has the rebalance recommendation taint nodepool_spot_termination_handler_test.go:291: Cleaning up: removing SQS queue URL annotation from HostedCluster nodepool_spot_termination_handler_test.go:143: Cleaning up: removing SQS policy from NodePool role nodepool_additionalTrustBundlePropagation_test.go:74: Updating hosted cluster with additional trust bundle. Bundle: additional-trust-bundle nodepool_additionalTrustBundlePropagation_test.go:82: Successfully waited for Waiting for NodePool e2e-clusters-nv2ck/node-pool-4dksv-test-additional-trust-bundle-propagation to begin updating in 10s nodepool_additionalTrustBundlePropagation_test.go:96: Successfully waited for Waiting for NodePool e2e-clusters-nv2ck/node-pool-4dksv-test-additional-trust-bundle-propagation to stop updating in 10m30s nodepool_additionalTrustBundlePropagation_test.go:122: Successfully waited for user-ca-bundle to exist in guest cluster in 25ms nodepool_additionalTrustBundlePropagation_test.go:134: Updating hosted cluster by removing additional trust bundle. 
nodepool_additionalTrustBundlePropagation_test.go:148: Successfully waited for Waiting for control plane operator deployment to be updated in 0s nodepool_additionalTrustBundlePropagation_test.go:169: Successfully waited for Waiting for NodePool e2e-clusters-nv2ck/node-pool-4dksv-test-additional-trust-bundle-propagation to begin updating in 10s nodepool_additionalTrustBundlePropagation_test.go:183: Successfully waited for Waiting for NodePool e2e-clusters-nv2ck/node-pool-4dksv-test-additional-trust-bundle-propagation to stop updating in 10m50s nodepool_additionalTrustBundlePropagation_test.go:210: Successfully waited for *v1.ConfigMap openshift-config/user-ca-bundle to be deleted in 25ms util.go:293: Successfully waited for kubeconfig to be published for HostedCluster e2e-clusters-r4f9x/node-pool-nrpwq in 0s util.go:310: Successfully waited for kubeconfig secret to have data in 0s util.go:372: Successfully waited for a successful connection to the guest API server in 0s util.go:2981: Successfully waited for HostedCluster e2e-clusters-r4f9x/node-pool-nrpwq to have valid conditions in 0s util.go:3256: Successfully waited for HostedCluster e2e-clusters-r4f9x/node-pool-nrpwq to have valid Status.Payload in 0s util.go:1207: Connecting to kubernetes endpoint on: https://172.20.0.1:6443 util.go:4306: Validating node-tuning-operator metrics endpoint functionality in namespace e2e-clusters-r4f9x-node-pool-nrpwq (cluster has 20 worker replicas) util.go:4327: Service has metrics port configured on port 60000 util.go:4346: ServiceMonitor has metrics endpoint configured util.go:4359: Testing node-tuning-operator metrics endpoint accessibility... util.go:4360: - ServiceMonitor scheme: https util.go:4361: - ServiceMonitor targetPort: 60000 util.go:4382: ✓ Successfully retrieved metrics via ServiceMonitor HTTPS at https://node-tuning-operator.e2e-clusters-r4f9x-node-pool-nrpwq.svc.cluster.local:60000/metrics util.go:4386: ✅ Node-tuning-operator metrics endpoint validation completed successfully util.go:2555: Checking that all ValidatingAdmissionPolicies are present util.go:2581: Checking Denied KAS Requests for ValidatingAdmissionPolicies util.go:2597: Checking ClusterOperator status modifications are allowed journals.go:245: Successfully copied machine journals to /logs/artifacts/TestNodePool_HostedCluster0/machine-journals util.go:293: Successfully waited for kubeconfig to be published for HostedCluster e2e-clusters-nv2ck/node-pool-4dksv in 0s util.go:310: Successfully waited for kubeconfig secret to have data in 0s util.go:372: Successfully waited for a successful connection to the guest API server in 0s util.go:2981: Successfully waited for HostedCluster e2e-clusters-nv2ck/node-pool-4dksv to have valid conditions in 0s util.go:3256: Successfully waited for HostedCluster e2e-clusters-nv2ck/node-pool-4dksv to have valid Status.Payload in 0s util.go:1207: Connecting to kubernetes endpoint on: https://172.20.0.1:6443 util.go:4306: Validating node-tuning-operator metrics endpoint functionality in namespace e2e-clusters-nv2ck-node-pool-4dksv (cluster has 1 worker replicas) util.go:4327: Service has metrics port configured on port 60000 util.go:4346: ServiceMonitor has metrics endpoint configured util.go:4359: Testing node-tuning-operator metrics endpoint accessibility... 
util.go:4360: - ServiceMonitor scheme: https util.go:4361: - ServiceMonitor targetPort: 60000 util.go:4382: ✓ Successfully retrieved metrics via ServiceMonitor HTTPS at https://node-tuning-operator.e2e-clusters-nv2ck-node-pool-4dksv.svc.cluster.local:60000/metrics util.go:4386: ✅ Node-tuning-operator metrics endpoint validation completed successfully util.go:2555: Checking that all ValidatingAdmissionPolicies are present util.go:2581: Checking Denied KAS Requests for ValidatingAdmissionPolicies util.go:2597: Checking ClusterOperator status modifications are allowed journals.go:245: Successfully copied machine journals to /logs/artifacts/TestNodePool_HostedCluster2/machine-journals fixture.go:351: SUCCESS: found no remaining guest resources hypershift_framework.go:536: Destroyed cluster. Namespace: e2e-clusters-nv2ck, name: node-pool-4dksv hypershift_framework.go:491: archiving /logs/artifacts/TestNodePool_HostedCluster2/hostedcluster-node-pool-4dksv to /logs/artifacts/TestNodePool_HostedCluster2/hostedcluster.tar.gz
TestNodePool/HostedCluster0
0s
hypershift_framework.go:475: Successfully created hostedcluster e2e-clusters-r4f9x/node-pool-nrpwq in 36s
TestNodePool/HostedCluster0/EnsureHostedCluster
0s
util.go:293: Successfully waited for kubeconfig to be published for HostedCluster e2e-clusters-r4f9x/node-pool-nrpwq in 0s util.go:310: Successfully waited for kubeconfig secret to have data in 0s util.go:372: Successfully waited for a successful connection to the guest API server in 0s util.go:2981: Successfully waited for HostedCluster e2e-clusters-r4f9x/node-pool-nrpwq to have valid conditions in 0s
TestNodePool/HostedCluster0/EnsureHostedCluster/EnsureAllContainersHavePullPolicyIfNotPresent
0s
TestNodePool/HostedCluster0/EnsureHostedCluster/EnsureAllContainersHaveTerminationMessagePolicyFallbackToLogsOnError
0s
TestNodePool/HostedCluster0/EnsureHostedCluster/EnsureAllRoutesUseHCPRouter
0s
TestNodePool/HostedCluster0/EnsureHostedCluster/EnsureHCPContainersHaveResourceRequests
0s
TestNodePool/HostedCluster0/EnsureHostedCluster/EnsureHCPPodsAffinitiesAndTolerations
0s
TestNodePool/HostedCluster0/EnsureHostedCluster/EnsureNetworkPolicies
0s
TestNodePool/HostedCluster0/EnsureHostedCluster/EnsureNetworkPolicies/EnsureComponentsHaveNeedManagementKASAccessLabel
0s
TestNodePool/HostedCluster0/EnsureHostedCluster/EnsureNetworkPolicies/EnsureLimitedEgressTrafficToManagementKAS
0s
util.go:1207: Connecting to kubernetes endpoint on: https://172.20.0.1:6443
TestNodePool/HostedCluster0/EnsureHostedCluster/EnsureNoPodsWithTooHighPriority
0s
TestNodePool/HostedCluster0/EnsureHostedCluster/EnsureNoRapidDeploymentRollouts
0s
TestNodePool/HostedCluster0/EnsureHostedCluster/EnsureNodeTuningOperatorMetricsEndpoint
0s
util.go:4306: Validating node-tuning-operator metrics endpoint functionality in namespace e2e-clusters-r4f9x-node-pool-nrpwq (cluster has 20 worker replicas) util.go:4327: Service has metrics port configured on port 60000 util.go:4346: ServiceMonitor has metrics endpoint configured util.go:4359: Testing node-tuning-operator metrics endpoint accessibility... util.go:4360: - ServiceMonitor scheme: https util.go:4361: - ServiceMonitor targetPort: 60000 util.go:4382: ✓ Successfully retrieved metrics via ServiceMonitor HTTPS at https://node-tuning-operator.e2e-clusters-r4f9x-node-pool-nrpwq.svc.cluster.local:60000/metrics util.go:4386: ✅ Node-tuning-operator metrics endpoint validation completed successfully
TestNodePool/HostedCluster0/EnsureHostedCluster/EnsurePayloadArchSetCorrectly
0s
util.go:3256: Successfully waited for HostedCluster e2e-clusters-r4f9x/node-pool-nrpwq to have valid Status.Payload in 0s
TestNodePool/HostedCluster0/EnsureHostedCluster/EnsurePodsWithEmptyDirPVsHaveSafeToEvictAnnotations
0s
TestNodePool/HostedCluster0/EnsureHostedCluster/EnsureReadOnlyRootFilesystem
0s
TestNodePool/HostedCluster0/EnsureHostedCluster/EnsureReadOnlyRootFilesystemTmpDirMount
0s
TestNodePool/HostedCluster0/EnsureHostedCluster/EnsureSATokenNotMountedUnlessNecessary
0s
TestNodePool/HostedCluster0/EnsureHostedCluster/EnsureValidatingAdmissionPoliciesCheckDeniedRequests
0s
util.go:2581: Checking Denied KAS Requests for ValidatingAdmissionPolicies
TestNodePool/HostedCluster0/EnsureHostedCluster/EnsureValidatingAdmissionPoliciesDontBlockStatusModifications
0s
util.go:2597: Checking ClusterOperator status modifications are allowed
TestNodePool/HostedCluster0/EnsureHostedCluster/EnsureValidatingAdmissionPoliciesExists
0s
util.go:2555: Checking that all ValidatingAdmissionPolicies are present
TestNodePool/HostedCluster0/EnsureHostedCluster/NoticePreemptionOrFailedScheduling
0s
TestNodePool/HostedCluster0/EnsureHostedCluster/ValidateMetricsAreExposed
0s
TestNodePool/HostedCluster0/Main
0s
util.go:293: Successfully waited for kubeconfig to be published for HostedCluster e2e-clusters-r4f9x/node-pool-nrpwq in 0s util.go:310: Successfully waited for kubeconfig secret to have data in 0s util.go:372: Successfully waited for a successful connection to the guest API server in 0s
TestNodePool/HostedCluster0/Main/KubeVirtCacheTest
0s
nodepool_kv_cache_image_test.go:42: test only supported on platform KubeVirt
TestNodePool/HostedCluster0/Main/KubeVirtJsonPatchTest
0s
nodepool_kv_jsonpatch_test.go:42: test only supported on platform KubeVirt
TestNodePool/HostedCluster0/Main/KubeVirtNodeMultinetTest
0s
nodepool_kv_multinet_test.go:36: test only supported on platform KubeVirt
TestNodePool/HostedCluster0/Main/KubeVirtNodeSelectorTest
0s
nodepool_kv_nodeselector_test.go:48: test only supported on platform KubeVirt
TestNodePool/HostedCluster0/Main/KubeVirtQoSClassGuaranteedTest
0s
nodepool_kv_qos_guaranteed_test.go:43: test only supported on platform KubeVirt
TestNodePool/HostedCluster0/Main/OpenStackAdvancedTest
0s
nodepool_osp_advanced_test.go:53: Starting test OpenStackAdvancedTest nodepool_osp_advanced_test.go:56: test only supported on platform OpenStack
TestNodePool/HostedCluster0/Main/TestImageTypes
0s
nodepool_imagetype_test.go:51: Starting test NodePoolImageTypeTest util.go:567: Successfully waited for 1 nodes to become ready for NodePool e2e-clusters-r4f9x/node-pool-nrpwq-test-imagetype in 24m18s nodepool_test.go:404: Successfully waited for NodePool e2e-clusters-r4f9x/node-pool-nrpwq-test-imagetype to have correct status in 0s nodepool_imagetype_test.go:76: Successfully waited for wait for nodepool e2e-clusters-r4f9x/node-pool-nrpwq-test-imagetype to have ValidPlatformImageType condition with settled status in 0s nodepool_imagetype_test.go:117: ValidPlatformImageType condition confirmed Windows AMI: Bootstrap Windows AMI is "ami-0a997f2085e8e29f0" nodepool_imagetype_test.go:133: Expected Windows LI AMI: ami-0a997f2085e8e29f0 nodepool_imagetype_test.go:141: Checking node ip-10-0-13-82.ec2.internal: OperatingSystem=linux, OSImage=Red Hat Enterprise Linux CoreOS 9.8.20260305-0 (Plow) nodepool_imagetype_test.go:170: Verifying EC2 instance i-05fa202eedcf950c0 for node ip-10-0-13-82.ec2.internal nodepool_imagetype_test.go:181: Node ip-10-0-13-82.ec2.internal is running on EC2 instance i-05fa202eedcf950c0 with AMI ami-0a997f2085e8e29f0 nodepool_imagetype_test.go:191: NodePoolImageTypeTest passed - Windows LI AMI validated at all layers (NodePool condition, Node OS, EC2 AMI) nodepool_test.go:404: Successfully waited for NodePool e2e-clusters-r4f9x/node-pool-nrpwq-test-imagetype to have correct status in 0s
TestNodePool/HostedCluster0/Main/TestKMSRootVolumeEncryption
0s
nodepool_kms_root_volume_test.go:42: Starting test KMSRootVolumeTest nodepool_kms_root_volume_test.go:54: retrieved KMS ARN: arn:aws:kms:us-east-1:820196288204:key/d3cdd9e0-3fd1-47a4-a559-72ae3672c5a6 util.go:567: Successfully waited for 1 nodes to become ready for NodePool e2e-clusters-r4f9x/node-pool-nrpwq-test-kms-root-volume in 10m0.075s nodepool_test.go:404: Successfully waited for NodePool e2e-clusters-r4f9x/node-pool-nrpwq-test-kms-root-volume to have correct status in 0s nodepool_kms_root_volume_test.go:85: instanceID: i-0c66e41a1bca0c1b5 nodepool_test.go:404: Successfully waited for NodePool e2e-clusters-r4f9x/node-pool-nrpwq-test-kms-root-volume to have correct status in 0s
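The KMS test records the instance ID so it can confirm the root EBS volume was encrypted with the supplied key. A rough sketch of that verification with the AWS SDK for Go v2; the filter name is standard EC2 API usage, while the helper and its error wording are illustrative:

package example

import (
	"context"
	"fmt"
	"strings"

	"github.com/aws/aws-sdk-go-v2/aws"
	"github.com/aws/aws-sdk-go-v2/config"
	"github.com/aws/aws-sdk-go-v2/service/ec2"
	ec2types "github.com/aws/aws-sdk-go-v2/service/ec2/types"
)

// volumesUseKMSKey checks that every volume attached to the instance is
// encrypted and references the expected KMS key ARN.
func volumesUseKMSKey(ctx context.Context, instanceID, kmsKeyARN string) error {
	cfg, err := config.LoadDefaultConfig(ctx)
	if err != nil {
		return err
	}
	client := ec2.NewFromConfig(cfg)

	out, err := client.DescribeVolumes(ctx, &ec2.DescribeVolumesInput{
		Filters: []ec2types.Filter{{
			Name:   aws.String("attachment.instance-id"),
			Values: []string{instanceID},
		}},
	})
	if err != nil {
		return err
	}
	for _, vol := range out.Volumes {
		if !aws.ToBool(vol.Encrypted) || !strings.EqualFold(aws.ToString(vol.KmsKeyId), kmsKeyARN) {
			return fmt.Errorf("volume %s is not encrypted with %s", aws.ToString(vol.VolumeId), kmsKeyARN)
		}
	}
	return nil
}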
TestNodePool/HostedCluster0/Main/TestMirrorConfigs
0s
nodepool_mirrorconfigs_test.go:60: Starting test MirrorConfigsTest util.go:567: Successfully waited for 1 nodes to become ready for NodePool e2e-clusters-r4f9x/node-pool-nrpwq-test-mirrorconfigs in 13m27s nodepool_test.go:404: Successfully waited for NodePool e2e-clusters-r4f9x/node-pool-nrpwq-test-mirrorconfigs to have correct status in 0s nodepool_mirrorconfigs_test.go:81: Entering MirrorConfigs test nodepool_mirrorconfigs_test.go:111: Hosted control plane namespace is e2e-clusters-r4f9x-node-pool-nrpwq nodepool_mirrorconfigs_test.go:113: Successfully waited for kubeletConfig should be mirrored and present in the hosted cluster in 3s nodepool_mirrorconfigs_test.go:157: Deleting KubeletConfig configmap reference from nodepool ... nodepool_mirrorconfigs_test.go:163: Successfully waited for KubeletConfig configmap to be deleted in 3s nodepool_mirrorconfigs_test.go:101: Exiting MirrorConfigs test: OK nodepool_test.go:404: Successfully waited for NodePool e2e-clusters-r4f9x/node-pool-nrpwq-test-mirrorconfigs to have correct status in 4m24s
TestNodePool/HostedCluster0/Main/TestNTOMachineConfigAppliedInPlace
0s
nodepool_nto_machineconfig_test.go:67: Starting test NTOMachineConfigRolloutTest util.go:567: Successfully waited for 2 nodes to become ready for NodePool e2e-clusters-r4f9x/node-pool-nrpwq-test-ntomachineconfig-inplace in 7m30.1s nodepool_test.go:404: Successfully waited for NodePool e2e-clusters-r4f9x/node-pool-nrpwq-test-ntomachineconfig-inplace to have correct status in 0s util.go:483: Successfully waited for NodePool e2e-clusters-r4f9x/node-pool-nrpwq-test-ntomachineconfig-inplace to start config update in 15s util.go:499: Successfully waited for NodePool e2e-clusters-r4f9x/node-pool-nrpwq-test-ntomachineconfig-inplace to finish config update in 4m20s nodepool_machineconfig_test.go:165: Successfully waited for all pods in the DaemonSet kube-system/node-pool-nrpwq-test-ntomachineconfig-inplace to be ready in 10s util.go:567: Successfully waited for 2 nodes to become ready for NodePool e2e-clusters-r4f9x/node-pool-nrpwq-test-ntomachineconfig-inplace in 0s nodepool_test.go:404: Successfully waited for NodePool e2e-clusters-r4f9x/node-pool-nrpwq-test-ntomachineconfig-inplace to have correct status in 0s
TestNodePool/HostedCluster0/Main/TestNTOMachineConfigAppliedInPlace/EnsureAllContainersHavePullPolicyIfNotPresent
0s
TestNodePool/HostedCluster0/Main/TestNTOMachineConfigAppliedInPlace/EnsureHCPContainersHaveResourceRequests
0s
TestNodePool/HostedCluster0/Main/TestNTOMachineConfigAppliedInPlace/EnsureNoCrashingPods
0s
TestNodePool/HostedCluster0/Main/TestNTOMachineConfigAppliedInPlace/EnsureNoPodsWithTooHighPriority
0s
TestNodePool/HostedCluster0/Main/TestNTOMachineConfigGetsRolledOut
0s
nodepool_nto_machineconfig_test.go:67: Starting test NTOMachineConfigRolloutTest util.go:567: Successfully waited for 2 nodes to become ready for NodePool e2e-clusters-r4f9x/node-pool-nrpwq-test-ntomachineconfig-replace in 8m48.025s nodepool_test.go:404: Successfully waited for NodePool e2e-clusters-r4f9x/node-pool-nrpwq-test-ntomachineconfig-replace to have correct status in 0s util.go:483: Successfully waited for NodePool e2e-clusters-r4f9x/node-pool-nrpwq-test-ntomachineconfig-replace to start config update in 15s util.go:499: Successfully waited for NodePool e2e-clusters-r4f9x/node-pool-nrpwq-test-ntomachineconfig-replace to finish config update in 8m20s nodepool_machineconfig_test.go:165: Successfully waited for all pods in the DaemonSet kube-system/node-pool-nrpwq-test-ntomachineconfig-replace to be ready in 0s util.go:567: Successfully waited for 2 nodes to become ready for NodePool e2e-clusters-r4f9x/node-pool-nrpwq-test-ntomachineconfig-replace in 0s nodepool_test.go:404: Successfully waited for NodePool e2e-clusters-r4f9x/node-pool-nrpwq-test-ntomachineconfig-replace to have correct status in 0s
TestNodePool/HostedCluster0/Main/TestNTOMachineConfigGetsRolledOut/EnsureAllContainersHavePullPolicyIfNotPresent
0s
TestNodePool/HostedCluster0/Main/TestNTOMachineConfigGetsRolledOut/EnsureHCPContainersHaveResourceRequests
0s
TestNodePool/HostedCluster0/Main/TestNTOMachineConfigGetsRolledOut/EnsureNoCrashingPods
0s
TestNodePool/HostedCluster0/Main/TestNTOMachineConfigGetsRolledOut/EnsureNoPodsWithTooHighPriority
0s
TestNodePool/HostedCluster0/Main/TestNTOPerformanceProfile
0s
nodepool_nto_performanceprofile_test.go:59: Starting test NTOPerformanceProfileTest util.go:567: Successfully waited for 1 nodes to become ready for NodePool e2e-clusters-r4f9x/node-pool-nrpwq-test-ntoperformanceprofile in 6m48s nodepool_test.go:404: Successfully waited for NodePool e2e-clusters-r4f9x/node-pool-nrpwq-test-ntoperformanceprofile to have correct status in 0s nodepool_nto_performanceprofile_test.go:80: Entering NTO PerformanceProfile test nodepool_nto_performanceprofile_test.go:110: Hosted control plane namespace is e2e-clusters-r4f9x-node-pool-nrpwq nodepool_nto_performanceprofile_test.go:112: Successfully waited for performance profile ConfigMap to exist with correct name labels and annotations in 3s nodepool_nto_performanceprofile_test.go:159: Successfully waited for performance profile status ConfigMap to exist in 0s nodepool_nto_performanceprofile_test.go:201: Successfully waited for performance profile status to be reflected under the NodePool status in 0s nodepool_nto_performanceprofile_test.go:254: Deleting configmap reference from nodepool ... nodepool_nto_performanceprofile_test.go:261: Successfully waited for performance profile ConfigMap to be deleted in 3s nodepool_nto_performanceprofile_test.go:280: Ending NTO PerformanceProfile test: OK nodepool_test.go:404: Successfully waited for NodePool e2e-clusters-r4f9x/node-pool-nrpwq-test-ntoperformanceprofile to have correct status in 30s
TestNodePool/HostedCluster0/Main/TestNodePoolAutoRepair
0s
util.go:567: Successfully waited for 1 nodes to become ready for NodePool e2e-clusters-r4f9x/node-pool-nrpwq-test-autorepair in 7m3s nodepool_test.go:404: Successfully waited for NodePool e2e-clusters-r4f9x/node-pool-nrpwq-test-autorepair to have correct status in 0s nodepool_autorepair_test.go:65: Terminating AWS Instance with a autorepair NodePool nodepool_autorepair_test.go:70: Terminating AWS instance: i-08393f9a3abcb659d util.go:567: Successfully waited for 1 nodes to become ready for NodePool e2e-clusters-r4f9x/node-pool-nrpwq-test-autorepair having 1 available nodes without ip-10-0-5-125.ec2.internal in 8m21s nodepool_test.go:404: Successfully waited for NodePool e2e-clusters-r4f9x/node-pool-nrpwq-test-autorepair to have correct status in 0s
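The autorepair flow terminates a backing EC2 instance out from under the NodePool and expects a MachineHealthCheck to bring up a replacement; the log above shows the new node becoming ready after roughly 8 minutes. A one-call sketch of the fault injection with the AWS SDK for Go v2, purely illustrative:

package example

import (
	"context"

	"github.com/aws/aws-sdk-go-v2/config"
	"github.com/aws/aws-sdk-go-v2/service/ec2"
)

// terminateInstance kills a worker's backing instance; autorepair is then
// expected to provision and ready a replacement node.
func terminateInstance(ctx context.Context, instanceID string) error {
	cfg, err := config.LoadDefaultConfig(ctx)
	if err != nil {
		return err
	}
	_, err = ec2.NewFromConfig(cfg).TerminateInstances(ctx, &ec2.TerminateInstancesInput{
		InstanceIds: []string{instanceID},
	})
	return err
}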
TestNodePool/HostedCluster0/Main/TestNodePoolDay2Tags
0s
util.go:567: Successfully waited for 1 nodes to become ready for NodePool e2e-clusters-r4f9x/node-pool-nrpwq-test-day2-tags in 14m42s nodepool_test.go:404: Successfully waited for NodePool e2e-clusters-r4f9x/node-pool-nrpwq-test-day2-tags to have correct status in 0s nodepool_test.go:404: Successfully waited for NodePool e2e-clusters-r4f9x/node-pool-nrpwq-test-day2-tags to have correct status in 0s
TestNodePool/HostedCluster0/Main/TestNodePoolInPlaceUpgrade
0s
nodepool_upgrade_test.go:99: starting test NodePoolUpgradeTest util.go:567: Successfully waited for 1 nodes to become ready for NodePool e2e-clusters-r4f9x/node-pool-nrpwq-test-inplaceupgrade in 12m18s nodepool_test.go:404: Successfully waited for NodePool e2e-clusters-r4f9x/node-pool-nrpwq-test-inplaceupgrade to have correct status in 0s nodepool_upgrade_test.go:177: Validating all Nodes have the synced labels and taints nodepool_upgrade_test.go:180: Successfully waited for NodePool e2e-clusters-r4f9x/node-pool-nrpwq-test-inplaceupgrade to have version 4.22.0-0.ci-2026-03-09-080910 in 0s nodepool_upgrade_test.go:197: Updating NodePool image. Image: registry.build01.ci.openshift.org/ci-op-fv2gl3ik/release@sha256:76d902450b4e400acb2a23947a5cd9ae41a64aa52f7c56ae505a025e22611106 nodepool_upgrade_test.go:204: Successfully waited for NodePool e2e-clusters-r4f9x/node-pool-nrpwq-test-inplaceupgrade to start the upgrade in 3s nodepool_upgrade_test.go:217: Successfully waited for NodePool e2e-clusters-r4f9x/node-pool-nrpwq-test-inplaceupgrade to have version 4.22.0-0.ci-2026-03-10-131255-test-ci-op-fv2gl3ik-latest in 3m21s util.go:567: Successfully waited for 1 nodes to become ready for NodePool e2e-clusters-r4f9x/node-pool-nrpwq-test-inplaceupgrade in 18s nodepool_test.go:404: Successfully waited for NodePool e2e-clusters-r4f9x/node-pool-nrpwq-test-inplaceupgrade to have correct status in 0s
TestNodePool/HostedCluster0/Main/TestNodePoolPrevReleaseN1
0s
nodepool_prev_release_test.go:33: Starting NodePoolPrevReleaseCreateTest. util.go:567: Successfully waited for 1 nodes to become ready for NodePool e2e-clusters-r4f9x/node-pool-nrpwq-zkkpq in 8m0.025s nodepool_test.go:404: Successfully waited for NodePool e2e-clusters-r4f9x/node-pool-nrpwq-zkkpq to have correct status in 0s nodepool_prev_release_test.go:57: NodePoolPrevReleaseCreateTest tests the creation of a NodePool with previous OCP release. nodepool_prev_release_test.go:59: Validating all Nodes have the synced labels and taints nodepool_test.go:404: Successfully waited for NodePool e2e-clusters-r4f9x/node-pool-nrpwq-zkkpq to have correct status in 0s
TestNodePool/HostedCluster0/Main/TestNodePoolPrevReleaseN2
0s
nodepool_prev_release_test.go:33: Starting NodePoolPrevReleaseCreateTest. util.go:567: Successfully waited for 1 nodes to become ready for NodePool e2e-clusters-r4f9x/node-pool-nrpwq-j56b2 in 9m9s nodepool_test.go:404: Successfully waited for NodePool e2e-clusters-r4f9x/node-pool-nrpwq-j56b2 to have correct status in 0s nodepool_prev_release_test.go:57: NodePoolPrevReleaseCreateTest tests the creation of a NodePool with previous OCP release. nodepool_prev_release_test.go:59: Validating all Nodes have the synced labels and taints nodepool_test.go:404: Successfully waited for NodePool e2e-clusters-r4f9x/node-pool-nrpwq-j56b2 to have correct status in 0s
TestNodePool/HostedCluster0/Main/TestNodePoolPrevReleaseN3
0s
nodepool_prev_release_test.go:33: Starting NodePoolPrevReleaseCreateTest. util.go:567: Successfully waited for 1 nodes to become ready for NodePool e2e-clusters-r4f9x/node-pool-nrpwq-d9rft in 12m42s nodepool_test.go:404: Successfully waited for NodePool e2e-clusters-r4f9x/node-pool-nrpwq-d9rft to have correct status in 0s nodepool_prev_release_test.go:57: NodePoolPrevReleaseCreateTest tests the creation of a NodePool with previous OCP release. nodepool_prev_release_test.go:59: Validating all Nodes have the synced labels and taints nodepool_test.go:404: Successfully waited for NodePool e2e-clusters-r4f9x/node-pool-nrpwq-d9rft to have correct status in 0s
TestNodePool/HostedCluster0/Main/TestNodePoolPrevReleaseN4
0s
nodepool_prev_release_test.go:33: Starting NodePoolPrevReleaseCreateTest. nodepool_test.go:357: NodePool version is outside supported skew, validating condition only (skipping node readiness check) nodepool_test.go:404: Successfully waited for NodePool e2e-clusters-r4f9x/node-pool-nrpwq-85lx6 to have correct status in 6s
TestNodePool/HostedCluster0/Main/TestNodePoolReplaceUpgrade
0s
nodepool_upgrade_test.go:99: starting test NodePoolUpgradeTest util.go:567: Successfully waited for 1 nodes to become ready for NodePool e2e-clusters-r4f9x/node-pool-nrpwq-test-replaceupgrade in 8m33.075s nodepool_test.go:404: Successfully waited for NodePool e2e-clusters-r4f9x/node-pool-nrpwq-test-replaceupgrade to have correct status in 0s nodepool_upgrade_test.go:177: Validating all Nodes have the synced labels and taints nodepool_upgrade_test.go:180: Successfully waited for NodePool e2e-clusters-r4f9x/node-pool-nrpwq-test-replaceupgrade to have version 4.22.0-0.ci-2026-03-09-080910 in 0s nodepool_upgrade_test.go:197: Updating NodePool image. Image: registry.build01.ci.openshift.org/ci-op-fv2gl3ik/release@sha256:76d902450b4e400acb2a23947a5cd9ae41a64aa52f7c56ae505a025e22611106 nodepool_upgrade_test.go:204: Successfully waited for NodePool e2e-clusters-r4f9x/node-pool-nrpwq-test-replaceupgrade to start the upgrade in 3s nodepool_upgrade_test.go:217: Successfully waited for NodePool e2e-clusters-r4f9x/node-pool-nrpwq-test-replaceupgrade to have version 4.22.0-0.ci-2026-03-10-131255-test-ci-op-fv2gl3ik-latest in 7m54s util.go:567: Successfully waited for 1 nodes to become ready for NodePool e2e-clusters-r4f9x/node-pool-nrpwq-test-replaceupgrade in 0s nodepool_test.go:404: Successfully waited for NodePool e2e-clusters-r4f9x/node-pool-nrpwq-test-replaceupgrade to have correct status in 0s
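Both upgrade variants (replace and in-place) start the same way: the NodePool's release image is pointed at the new payload and the test waits for the reported version to move. A sketch of that update, again assuming the HyperShift v1beta1 types and a controller-runtime client; import path and field names are taken from the public API and should be treated as assumptions here:

package example

import (
	"context"

	hyperv1 "github.com/openshift/hypershift/api/hypershift/v1beta1"
	"k8s.io/apimachinery/pkg/types"
	crclient "sigs.k8s.io/controller-runtime/pkg/client"
)

// setNodePoolReleaseImage points the NodePool at a new release payload, which
// triggers either a replace or an in-place rollout depending on the pool's
// upgrade strategy.
func setNodePoolReleaseImage(ctx context.Context, c crclient.Client, namespace, name, image string) error {
	nodePool := &hyperv1.NodePool{}
	if err := c.Get(ctx, types.NamespacedName{Namespace: namespace, Name: name}, nodePool); err != nil {
		return err
	}
	original := nodePool.DeepCopy()
	nodePool.Spec.Release.Image = image
	return c.Patch(ctx, nodePool, crclient.MergeFrom(original))
}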
TestNodePool/HostedCluster0/Main/TestNodepoolMachineconfigGetsRolledout
0s
nodepool_machineconfig_test.go:54: Starting test NodePoolMachineconfigRolloutTest util.go:567: Successfully waited for 1 nodes to become ready for NodePool e2e-clusters-r4f9x/node-pool-nrpwq-test-machineconfig in 6m18s nodepool_test.go:404: Successfully waited for NodePool e2e-clusters-r4f9x/node-pool-nrpwq-test-machineconfig to have correct status in 0s util.go:483: Successfully waited for NodePool e2e-clusters-r4f9x/node-pool-nrpwq-test-machineconfig to start config update in 15s util.go:499: Successfully waited for NodePool e2e-clusters-r4f9x/node-pool-nrpwq-test-machineconfig to finish config update in 8m0s nodepool_machineconfig_test.go:165: Successfully waited for all pods in the DaemonSet kube-system/machineconfig-update-checker-replace to be ready in 5s util.go:567: Successfully waited for 1 nodes to become ready for NodePool e2e-clusters-r4f9x/node-pool-nrpwq-test-machineconfig in 0s nodepool_test.go:404: Successfully waited for NodePool e2e-clusters-r4f9x/node-pool-nrpwq-test-machineconfig to have correct status in 0s
TestNodePool/HostedCluster0/Main/TestNodepoolMachineconfigGetsRolledout/EnsureAllContainersHavePullPolicyIfNotPresent
0s
TestNodePool/HostedCluster0/Main/TestNodepoolMachineconfigGetsRolledout/EnsureHCPContainersHaveResourceRequests
0s
TestNodePool/HostedCluster0/Main/TestNodepoolMachineconfigGetsRolledout/EnsureNoCrashingPods
0s
TestNodePool/HostedCluster0/Main/TestNodepoolMachineconfigGetsRolledout/EnsureNoPodsWithTooHighPriority
0s
TestNodePool/HostedCluster0/Main/TestRollingUpgrade
0s
util.go:567: Successfully waited for 2 nodes to become ready for NodePool e2e-clusters-r4f9x/node-pool-nrpwq-test-rolling-upgrade in 8m48s nodepool_test.go:404: Successfully waited for NodePool e2e-clusters-r4f9x/node-pool-nrpwq-test-rolling-upgrade to have correct status in 0s nodepool_rolling_upgrade_test.go:106: Successfully waited for NodePool e2e-clusters-r4f9x/node-pool-nrpwq-test-rolling-upgrade to start the rolling upgrade in 3s nodepool_rolling_upgrade_test.go:120: Successfully waited for NodePool e2e-clusters-r4f9x/node-pool-nrpwq-test-rolling-upgrade to finish the rolling upgrade in 12m45s nodepool_test.go:404: Successfully waited for NodePool e2e-clusters-r4f9x/node-pool-nrpwq-test-rolling-upgrade to have correct status in 0s
TestNodePool/HostedCluster0/Main/TestSpotTerminationHandler
0s
util.go:567: Successfully waited for 1 nodes to become ready for NodePool e2e-clusters-r4f9x/node-pool-nrpwq-test-spot-termination in 5m57.025s nodepool_test.go:404: Successfully waited for NodePool e2e-clusters-r4f9x/node-pool-nrpwq-test-spot-termination to have correct status in 3s nodepool_test.go:404: Successfully waited for NodePool e2e-clusters-r4f9x/node-pool-nrpwq-test-spot-termination to have correct status in 0s
TestNodePool/HostedCluster0/Main/TestSpotTerminationHandler/SpotTerminationHandlerTest
0s
nodepool_spot_termination_handler_test.go:131: Adding SQS policy to NodePool role arn:aws:iam::820196288204:role/node-pool-nrpwq-node-pool
nodepool_spot_termination_handler_test.go:158: Discovered SQS queue URL: https://sqs.us-east-1.amazonaws.com/820196288204/agarcial-nth-queue
nodepool_spot_termination_handler_test.go:160: Adding SQS queue URL annotation to HostedCluster e2e-clusters-r4f9x/node-pool-nrpwq
nodepool_spot_termination_handler_test.go:172: Waiting for aws-node-termination-handler deployment to be ready in namespace e2e-clusters-r4f9x-node-pool-nrpwq
eventually.go:105: Failed to get *v1.Deployment: deployments.apps "aws-node-termination-handler" not found
nodepool_spot_termination_handler_test.go:179: Successfully waited for Waiting for deployment e2e-clusters-r4f9x-node-pool-nrpwq/aws-node-termination-handler to be ready in 10s
nodepool_spot_termination_handler_test.go:197: aws-node-termination-handler deployment is ready
nodepool_spot_termination_handler_test.go:201: Waiting for spot MachineHealthCheck e2e-clusters-r4f9x-node-pool-nrpwq/node-pool-nrpwq-test-spot-termination-spot to be created
nodepool_spot_termination_handler_test.go:208: Successfully waited for Waiting for MachineHealthCheck e2e-clusters-r4f9x-node-pool-nrpwq/node-pool-nrpwq-test-spot-termination-spot to be created with correct selector in 0s
nodepool_spot_termination_handler_test.go:227: Spot MachineHealthCheck is created with correct label selector
nodepool_spot_termination_handler_test.go:234: Found ready spot node: ip-10-0-4-121.ec2.internal with providerID: aws:///us-east-1b/i-0868f9804e9d80cc2
nodepool_spot_termination_handler_test.go:238: Sending EC2 Rebalance Recommendation event to SQS queue for instance i-0868f9804e9d80cc2
nodepool_spot_termination_handler_test.go:266: Successfully sent EC2 Rebalance Recommendation event to SQS queue
nodepool_spot_termination_handler_test.go:269: Waiting for node ip-10-0-4-121.ec2.internal to have taint prefix aws-node-termination-handler/rebalance-recommendation
nodepool_spot_termination_handler_test.go:270: Successfully waited for Waiting for node ip-10-0-4-121.ec2.internal to have rebalance recommendation taint in 5s
nodepool_spot_termination_handler_test.go:288: Node ip-10-0-4-121.ec2.internal has the rebalance recommendation taint
nodepool_spot_termination_handler_test.go:291: Cleaning up: removing SQS queue URL annotation from HostedCluster
nodepool_spot_termination_handler_test.go:143: Cleaning up: removing SQS policy from NodePool role
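The "Sending EC2 Rebalance Recommendation event" step above can be approximated by dropping an EventBridge-style event onto the discovered SQS queue. The sketch below uses aws-sdk-go-v2; the queue URL and instance ID come from the log, but the exact payload shape consumed by aws-node-termination-handler's queue processor is an assumption.

// Sketch only: publish a rebalance-recommendation event to the NTH SQS queue.
package main

import (
	"context"
	"encoding/json"
	"fmt"

	"github.com/aws/aws-sdk-go-v2/aws"
	"github.com/aws/aws-sdk-go-v2/config"
	"github.com/aws/aws-sdk-go-v2/service/sqs"
)

func main() {
	ctx := context.Background()
	cfg, err := config.LoadDefaultConfig(ctx, config.WithRegion("us-east-1"))
	if err != nil {
		panic(err)
	}
	client := sqs.NewFromConfig(cfg)

	// EventBridge-style event body; NTH in queue-processor mode reads events like this from SQS (shape assumed).
	event := map[string]any{
		"version":     "0",
		"source":      "aws.ec2",
		"detail-type": "EC2 Instance Rebalance Recommendation",
		"region":      "us-east-1",
		"detail":      map[string]string{"instance-id": "i-0868f9804e9d80cc2"},
	}
	body, _ := json.Marshal(event)

	out, err := client.SendMessage(ctx, &sqs.SendMessageInput{
		QueueUrl:    aws.String("https://sqs.us-east-1.amazonaws.com/820196288204/agarcial-nth-queue"),
		MessageBody: aws.String(string(body)),
	})
	if err != nil {
		panic(err)
	}
	fmt.Println("sent message:", aws.ToString(out.MessageId))
}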
TestNodePool/HostedCluster0/Teardown
0s
journals.go:245: Successfully copied machine journals to /logs/artifacts/TestNodePool_HostedCluster0/machine-journals
TestNodePool/HostedCluster0/ValidateHostedCluster
0s
util.go:293: Successfully waited for kubeconfig to be published for HostedCluster e2e-clusters-r4f9x/node-pool-nrpwq in 1m12s
util.go:310: Successfully waited for kubeconfig secret to have data in 0s
eventually.go:105: Failed to get *v1.SelfSubjectReview: Post "https://api-node-pool-nrpwq.service.ci.hypershift.devcluster.openshift.com:443/apis/authentication.k8s.io/v1/selfsubjectreviews": dial tcp: lookup api-node-pool-nrpwq.service.ci.hypershift.devcluster.openshift.com on 172.30.0.10:53: no such host
util.go:372: Successfully waited for a successful connection to the guest API server in 1m21.675s
util.go:567: Successfully waited for 0 nodes to become ready in 25ms
util.go:2981: Successfully waited for HostedCluster e2e-clusters-r4f9x/node-pool-nrpwq to have valid conditions in 3m3s
TestNodePool/HostedCluster0/ValidateHostedCluster/EnsureGuestWebhooksValidated
0s
TestNodePool/HostedCluster0/ValidateHostedCluster/EnsureNoCrashingPods
0s
TestNodePool/HostedCluster0/ValidateHostedCluster/EnsureNodeCountMatchesNodePoolReplicas
0s
util.go:567: Successfully waited for 0 nodes to become ready for NodePool e2e-clusters-r4f9x/node-pool-nrpwq-us-east-1b in 25ms
TestNodePool/HostedCluster0/ValidateHostedCluster/EnsureOAPIMountsTrustBundle
0s
TestNodePool/HostedCluster0/ValidateHostedCluster/ValidateConfigurationStatus
0s
util.go:4127: Successfully validated configuration authentication status consistency across HCP, HC, and guest cluster
TestNodePool/HostedCluster1
0s
nodepool_test.go:154: tests only supported on platform KubeVirt
TestNodePool/HostedCluster2
0s
hypershift_framework.go:475: Successfully created hostedcluster e2e-clusters-nv2ck/node-pool-4dksv in 20s hypershift_framework.go:272: skipping teardown, already called
TestNodePool/HostedCluster2/EnsureHostedCluster
0s
util.go:293: Successfully waited for kubeconfig to be published for HostedCluster e2e-clusters-nv2ck/node-pool-4dksv in 0s
util.go:310: Successfully waited for kubeconfig secret to have data in 0s
util.go:372: Successfully waited for a successful connection to the guest API server in 0s
util.go:2981: Successfully waited for HostedCluster e2e-clusters-nv2ck/node-pool-4dksv to have valid conditions in 0s
TestNodePool/HostedCluster2/EnsureHostedCluster/EnsureAllContainersHavePullPolicyIfNotPresent
0s
TestNodePool/HostedCluster2/EnsureHostedCluster/EnsureAllContainersHaveTerminationMessagePolicyFallbackToLogsOnError
0s
TestNodePool/HostedCluster2/EnsureHostedCluster/EnsureAllRoutesUseHCPRouter
0s
TestNodePool/HostedCluster2/EnsureHostedCluster/EnsureHCPContainersHaveResourceRequests
0s
TestNodePool/HostedCluster2/EnsureHostedCluster/EnsureHCPPodsAffinitiesAndTolerations
0s
TestNodePool/HostedCluster2/EnsureHostedCluster/EnsureNetworkPolicies
0s
TestNodePool/HostedCluster2/EnsureHostedCluster/EnsureNetworkPolicies/EnsureComponentsHaveNeedManagementKASAccessLabel
0s
TestNodePool/HostedCluster2/EnsureHostedCluster/EnsureNetworkPolicies/EnsureLimitedEgressTrafficToManagementKAS
0s
util.go:1207: Connecting to kubernetes endpoint on: https://172.20.0.1:6443
TestNodePool/HostedCluster2/EnsureHostedCluster/EnsureNoPodsWithTooHighPriority
0s
TestNodePool/HostedCluster2/EnsureHostedCluster/EnsureNoRapidDeploymentRollouts
0s
TestNodePool/HostedCluster2/EnsureHostedCluster/EnsureNodeTuningOperatorMetricsEndpoint
0s
util.go:4306: Validating node-tuning-operator metrics endpoint functionality in namespace e2e-clusters-nv2ck-node-pool-4dksv (cluster has 1 worker replicas)
util.go:4327: Service has metrics port configured on port 60000
util.go:4346: ServiceMonitor has metrics endpoint configured
util.go:4359: Testing node-tuning-operator metrics endpoint accessibility...
util.go:4360: - ServiceMonitor scheme: https
util.go:4361: - ServiceMonitor targetPort: 60000
util.go:4382: ✓ Successfully retrieved metrics via ServiceMonitor HTTPS at https://node-tuning-operator.e2e-clusters-nv2ck-node-pool-4dksv.svc.cluster.local:60000/metrics
util.go:4386: ✅ Node-tuning-operator metrics endpoint validation completed successfully
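The endpoint this validation exercises can be checked by hand with a plain HTTPS GET against the service DNS name and port shown in the log. A minimal sketch follows; it skips TLS verification for brevity, whereas the e2e helper presumably trusts the serving CA instead.

// Sketch only: fetch the node-tuning-operator metrics endpoint from inside the management cluster.
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout: 10 * time.Second,
		Transport: &http.Transport{
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true}, // sketch only; do not do this in real checks
		},
	}
	url := "https://node-tuning-operator.e2e-clusters-nv2ck-node-pool-4dksv.svc.cluster.local:60000/metrics"
	resp, err := client.Get(url)
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()

	body, _ := io.ReadAll(resp.Body)
	fmt.Printf("status=%d bytes=%d\n", resp.StatusCode, len(body))
}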
TestNodePool/HostedCluster2/EnsureHostedCluster/EnsurePayloadArchSetCorrectly
0s
util.go:3256: Successfully waited for HostedCluster e2e-clusters-nv2ck/node-pool-4dksv to have valid Status.Payload in 0s
TestNodePool/HostedCluster2/EnsureHostedCluster/EnsurePodsWithEmptyDirPVsHaveSafeToEvictAnnotations
0s
TestNodePool/HostedCluster2/EnsureHostedCluster/EnsureReadOnlyRootFilesystem
0s
TestNodePool/HostedCluster2/EnsureHostedCluster/EnsureReadOnlyRootFilesystemTmpDirMount
0s
TestNodePool/HostedCluster2/EnsureHostedCluster/EnsureSATokenNotMountedUnlessNecessary
0s
TestNodePool/HostedCluster2/EnsureHostedCluster/EnsureValidatingAdmissionPoliciesCheckDeniedRequests
0s
util.go:2581: Checking Denied KAS Requests for ValidatingAdmissionPolicies
TestNodePool/HostedCluster2/EnsureHostedCluster/EnsureValidatingAdmissionPoliciesDontBlockStatusModifications
0s
util.go:2597: Checking ClusterOperator status modifications are allowed
TestNodePool/HostedCluster2/EnsureHostedCluster/EnsureValidatingAdmissionPoliciesExists
0s
util.go:2555: Checking that all ValidatingAdmissionPolicies are present
TestNodePool/HostedCluster2/EnsureHostedCluster/NoticePreemptionOrFailedScheduling
0s
TestNodePool/HostedCluster2/EnsureHostedCluster/ValidateMetricsAreExposed
0s
TestNodePool/HostedCluster2/Main
0s
util.go:293: Successfully waited for kubeconfig to be published for HostedCluster e2e-clusters-nv2ck/node-pool-4dksv in 0s util.go:310: Successfully waited for kubeconfig secret to have data in 0s util.go:372: Successfully waited for a successful connection to the guest API server in 0s
TestNodePool/HostedCluster2/Main/TestAdditionalTrustBundlePropagation
0s
nodepool_additionalTrustBundlePropagation_test.go:40: Starting AdditionalTrustBundlePropagationTest.
util.go:567: Successfully waited for 1 nodes to become ready for NodePool e2e-clusters-nv2ck/node-pool-4dksv-test-additional-trust-bundle-propagation in 7m12s
nodepool_test.go:404: Successfully waited for NodePool e2e-clusters-nv2ck/node-pool-4dksv-test-additional-trust-bundle-propagation to have correct status in 0s
nodepool_test.go:404: Successfully waited for NodePool e2e-clusters-nv2ck/node-pool-4dksv-test-additional-trust-bundle-propagation to have correct status in 0s
TestNodePool/HostedCluster2/Main/TestAdditionalTrustBundlePropagation/AdditionalTrustBundlePropagationTest
0s
nodepool_additionalTrustBundlePropagation_test.go:74: Updating hosted cluster with additional trust bundle. Bundle: additional-trust-bundle
nodepool_additionalTrustBundlePropagation_test.go:82: Successfully waited for Waiting for NodePool e2e-clusters-nv2ck/node-pool-4dksv-test-additional-trust-bundle-propagation to begin updating in 10s
nodepool_additionalTrustBundlePropagation_test.go:96: Successfully waited for Waiting for NodePool e2e-clusters-nv2ck/node-pool-4dksv-test-additional-trust-bundle-propagation to stop updating in 10m30s
nodepool_additionalTrustBundlePropagation_test.go:122: Successfully waited for user-ca-bundle to exist in guest cluster in 25ms
nodepool_additionalTrustBundlePropagation_test.go:134: Updating hosted cluster by removing additional trust bundle.
nodepool_additionalTrustBundlePropagation_test.go:148: Successfully waited for Waiting for control plane operator deployment to be updated in 0s
nodepool_additionalTrustBundlePropagation_test.go:169: Successfully waited for Waiting for NodePool e2e-clusters-nv2ck/node-pool-4dksv-test-additional-trust-bundle-propagation to begin updating in 10s
nodepool_additionalTrustBundlePropagation_test.go:183: Successfully waited for Waiting for NodePool e2e-clusters-nv2ck/node-pool-4dksv-test-additional-trust-bundle-propagation to stop updating in 10m50s
nodepool_additionalTrustBundlePropagation_test.go:210: Successfully waited for *v1.ConfigMap openshift-config/user-ca-bundle to be deleted in 25ms
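The "user-ca-bundle to exist in guest cluster" wait amounts to polling the openshift-config/user-ca-bundle ConfigMap through the guest kubeconfig after the HostedCluster's additional trust bundle is set. A minimal client-go sketch follows; the kubeconfig path and the ca-bundle.crt data key are assumptions.

// Sketch only: poll the guest cluster for the propagated user-ca-bundle ConfigMap.
package main

import (
	"context"
	"fmt"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	ctx := context.Background()
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/guest.kubeconfig") // placeholder path
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	err = wait.PollUntilContextTimeout(ctx, 5*time.Second, 15*time.Minute, true, func(ctx context.Context) (bool, error) {
		cm, err := cs.CoreV1().ConfigMaps("openshift-config").Get(ctx, "user-ca-bundle", metav1.GetOptions{})
		if err != nil {
			return false, nil // not there yet (or transient error); keep polling
		}
		return len(cm.Data["ca-bundle.crt"]) > 0, nil // data key name is an assumption
	})
	fmt.Println("wait finished:", err)
}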
TestNodePool/HostedCluster2/PostTeardown
0s
TestNodePool/HostedCluster2/PostTeardown/ValidateMetricsAreExposed
0s
TestNodePool/HostedCluster2/Teardown
0s
journals.go:245: Successfully copied machine journals to /logs/artifacts/TestNodePool_HostedCluster2/machine-journals
fixture.go:351: SUCCESS: found no remaining guest resources
hypershift_framework.go:536: Destroyed cluster. Namespace: e2e-clusters-nv2ck, name: node-pool-4dksv
hypershift_framework.go:491: archiving /logs/artifacts/TestNodePool_HostedCluster2/hostedcluster-node-pool-4dksv to /logs/artifacts/TestNodePool_HostedCluster2/hostedcluster.tar.gz
TestNodePool/HostedCluster2/ValidateHostedCluster
0s
util.go:293: Successfully waited for kubeconfig to be published for HostedCluster e2e-clusters-nv2ck/node-pool-4dksv in 54s
util.go:310: Successfully waited for kubeconfig secret to have data in 0s
eventually.go:105: Failed to get *v1.SelfSubjectReview: Post "https://api-node-pool-4dksv.service.ci.hypershift.devcluster.openshift.com:443/apis/authentication.k8s.io/v1/selfsubjectreviews": dial tcp: lookup api-node-pool-4dksv.service.ci.hypershift.devcluster.openshift.com on 172.30.0.10:53: no such host
eventually.go:105: Failed to get *v1.SelfSubjectReview: Post "https://api-node-pool-4dksv.service.ci.hypershift.devcluster.openshift.com:443/apis/authentication.k8s.io/v1/selfsubjectreviews": dial tcp 23.23.185.251:443: i/o timeout
util.go:372: Successfully waited for a successful connection to the guest API server in 2m10.15s
util.go:567: Successfully waited for 0 nodes to become ready in 0s
util.go:2981: Successfully waited for HostedCluster e2e-clusters-nv2ck/node-pool-4dksv to have valid conditions in 2m30s
TestNodePool/HostedCluster2/ValidateHostedCluster/EnsureGuestWebhooksValidated
0s
TestNodePool/HostedCluster2/ValidateHostedCluster/EnsureNoCrashingPods
0s
TestNodePool/HostedCluster2/ValidateHostedCluster/EnsureNodeCountMatchesNodePoolReplicas
0s
util.go:567: Successfully waited for 0 nodes to become ready for NodePool e2e-clusters-nv2ck/node-pool-4dksv-us-east-1a in 25ms
TestNodePool/HostedCluster2/ValidateHostedCluster/EnsureOAPIMountsTrustBundle
0s
TestNodePool/HostedCluster2/ValidateHostedCluster/ValidateConfigurationStatus
0s
util.go:4127: Successfully validated configuration authentication status consistency across HCP, HC, and guest cluster
TestUpgradeControlPlane
0s
control_plane_upgrade_test.go:25: Starting control plane upgrade test. FromImage: registry.build01.ci.openshift.org/ci-op-fv2gl3ik/release@sha256:86bbeda15abe8a095b4cac5927ded34d3fac08bdfc92f6c80a19e667280e6c58, toImage: registry.build01.ci.openshift.org/ci-op-fv2gl3ik/release@sha256:76d902450b4e400acb2a23947a5cd9ae41a64aa52f7c56ae505a025e22611106
hypershift_framework.go:475: Successfully created hostedcluster e2e-clusters-f65gh/control-plane-upgrade-h5g88 in 27s
util.go:293: Successfully waited for kubeconfig to be published for HostedCluster e2e-clusters-f65gh/control-plane-upgrade-h5g88 in 1m21s
util.go:310: Successfully waited for kubeconfig secret to have data in 0s
eventually.go:105: Failed to get *v1.SelfSubjectReview: Post "https://api-control-plane-upgrade-h5g88.service.ci.hypershift.devcluster.openshift.com:443/apis/authentication.k8s.io/v1/selfsubjectreviews": dial tcp: lookup api-control-plane-upgrade-h5g88.service.ci.hypershift.devcluster.openshift.com on 172.30.0.10:53: no such host
eventually.go:105: Failed to get *v1.SelfSubjectReview: Post "https://api-control-plane-upgrade-h5g88.service.ci.hypershift.devcluster.openshift.com:443/apis/authentication.k8s.io/v1/selfsubjectreviews": dial tcp 54.224.148.33:443: i/o timeout
eventually.go:105: Failed to get *v1.SelfSubjectReview: Post "https://api-control-plane-upgrade-h5g88.service.ci.hypershift.devcluster.openshift.com:443/apis/authentication.k8s.io/v1/selfsubjectreviews": dial tcp 100.50.236.61:443: i/o timeout
util.go:372: Successfully waited for a successful connection to the guest API server in 1m51.025s
util.go:567: Successfully waited for 2 nodes to become ready in 9m54s
util.go:600: Successfully waited for HostedCluster e2e-clusters-f65gh/control-plane-upgrade-h5g88 to rollout in 5m18s
util.go:2981: Successfully waited for HostedCluster e2e-clusters-f65gh/control-plane-upgrade-h5g88 to have valid conditions in 3m30s
util.go:567: Successfully waited for 2 nodes to become ready for NodePool e2e-clusters-f65gh/control-plane-upgrade-h5g88-us-east-1c in 25ms
util.go:293: Successfully waited for kubeconfig to be published for HostedCluster e2e-clusters-f65gh/control-plane-upgrade-h5g88 in 0s
util.go:310: Successfully waited for kubeconfig secret to have data in 0s
util.go:4127: Successfully validated configuration authentication status consistency across HCP, HC, and guest cluster
util.go:293: Successfully waited for kubeconfig to be published for HostedCluster e2e-clusters-f65gh/control-plane-upgrade-h5g88 in 0s
util.go:310: Successfully waited for kubeconfig secret to have data in 0s
util.go:372: Successfully waited for a successful connection to the guest API server in 0s
control_plane_upgrade_test.go:52: Updating cluster image. Image: registry.build01.ci.openshift.org/ci-op-fv2gl3ik/release@sha256:76d902450b4e400acb2a23947a5cd9ae41a64aa52f7c56ae505a025e22611106
TestUpgradeControlPlane/Main
0s
util.go:293: Successfully waited for kubeconfig to be published for HostedCluster e2e-clusters-f65gh/control-plane-upgrade-h5g88 in 0s
util.go:310: Successfully waited for kubeconfig secret to have data in 0s
util.go:372: Successfully waited for a successful connection to the guest API server in 0s
control_plane_upgrade_test.go:52: Updating cluster image. Image: registry.build01.ci.openshift.org/ci-op-fv2gl3ik/release@sha256:76d902450b4e400acb2a23947a5cd9ae41a64aa52f7c56ae505a025e22611106
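The "Updating cluster image" step is the HostedCluster-level counterpart of the NodePool image bump sketched earlier: set spec.release.image on the HostedCluster itself and let the operators roll the control plane. A minimal sketch follows, with the same assumed API import path.

// Sketch only: point a HostedCluster at a new release payload.
package upgrade

import (
	"context"

	hyperv1 "github.com/openshift/hypershift/api/hypershift/v1beta1" // assumed import path
	"k8s.io/apimachinery/pkg/types"
	crclient "sigs.k8s.io/controller-runtime/pkg/client"
)

// bumpControlPlaneImage updates spec.release.image on the named HostedCluster.
func bumpControlPlaneImage(ctx context.Context, c crclient.Client, key types.NamespacedName, toImage string) error {
	hc := &hyperv1.HostedCluster{}
	if err := c.Get(ctx, key, hc); err != nil {
		return err
	}
	hc.Spec.Release.Image = toImage // e.g. the toImage digest from the log above
	return c.Update(ctx, hc)
}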
TestUpgradeControlPlane/Main/Wait_for_control_plane_components_to_complete_rollout
0s
TestUpgradeControlPlane/ValidateHostedCluster
0s
util.go:293: Successfully waited for kubeconfig to be published for HostedCluster e2e-clusters-f65gh/control-plane-upgrade-h5g88 in 1m21s
util.go:310: Successfully waited for kubeconfig secret to have data in 0s
eventually.go:105: Failed to get *v1.SelfSubjectReview: Post "https://api-control-plane-upgrade-h5g88.service.ci.hypershift.devcluster.openshift.com:443/apis/authentication.k8s.io/v1/selfsubjectreviews": dial tcp: lookup api-control-plane-upgrade-h5g88.service.ci.hypershift.devcluster.openshift.com on 172.30.0.10:53: no such host
eventually.go:105: Failed to get *v1.SelfSubjectReview: Post "https://api-control-plane-upgrade-h5g88.service.ci.hypershift.devcluster.openshift.com:443/apis/authentication.k8s.io/v1/selfsubjectreviews": dial tcp 54.224.148.33:443: i/o timeout
eventually.go:105: Failed to get *v1.SelfSubjectReview: Post "https://api-control-plane-upgrade-h5g88.service.ci.hypershift.devcluster.openshift.com:443/apis/authentication.k8s.io/v1/selfsubjectreviews": dial tcp 100.50.236.61:443: i/o timeout
util.go:372: Successfully waited for a successful connection to the guest API server in 1m51.025s
util.go:567: Successfully waited for 2 nodes to become ready in 9m54s
util.go:600: Successfully waited for HostedCluster e2e-clusters-f65gh/control-plane-upgrade-h5g88 to rollout in 5m18s
util.go:2981: Successfully waited for HostedCluster e2e-clusters-f65gh/control-plane-upgrade-h5g88 to have valid conditions in 3m30s
TestUpgradeControlPlane/ValidateHostedCluster/EnsureGuestWebhooksValidated
0s
TestUpgradeControlPlane/ValidateHostedCluster/EnsureNoCrashingPods
0s
TestUpgradeControlPlane/ValidateHostedCluster/EnsureNodeCommunication
0s
util.go:293: Successfully waited for kubeconfig to be published for HostedCluster e2e-clusters-f65gh/control-plane-upgrade-h5g88 in 0s util.go:310: Successfully waited for kubeconfig secret to have data in 0s
TestUpgradeControlPlane/ValidateHostedCluster/EnsureNodeCountMatchesNodePoolReplicas
0s
util.go:567: Successfully waited for 2 nodes to become ready for NodePool e2e-clusters-f65gh/control-plane-upgrade-h5g88-us-east-1c in 25ms
TestUpgradeControlPlane/ValidateHostedCluster/EnsureOAPIMountsTrustBundle
0s
TestUpgradeControlPlane/ValidateHostedCluster/ValidateConfigurationStatus
0s
util.go:4127: Successfully validated configuration authentication status consistency across HCP, HC, and guest cluster