PR #5792 - 04-14 10:27

Job: hypershift
FAILURE

Test Summary

48 Total Tests
14 Passed
19 Failed
15 Skipped

Failed Tests

TestAzureScheduler
43m13.05s
hypershift_framework.go:316: Successfully created hostedcluster e2e-clusters-7tglk/azure-scheduler-wtn6h in 1m43s
hypershift_framework.go:115: Summarizing unexpected conditions for HostedCluster azure-scheduler-wtn6h
eventually.go:104: Failed to get *v1beta1.HostedCluster: client rate limiter Wait returned an error: context deadline exceeded
util.go:2123: Failed to wait for HostedCluster e2e-clusters-7tglk/azure-scheduler-wtn6h to have valid conditions in 2s: context deadline exceeded
eventually.go:224: observed *v1beta1.HostedCluster e2e-clusters-7tglk/azure-scheduler-wtn6h invalid at RV 19157 after 2s:
eventually.go:227: - incorrect condition: wanted ClusterVersionSucceeding=True, got ClusterVersionSucceeding=False: ClusterOperatorsNotAvailable(Cluster operators console, dns, image-registry, ingress, insights, kube-storage-version-migrator, monitoring, node-tuning, openshift-samples, service-ca, storage are not available)
eventually.go:227: - incorrect condition: wanted ClusterVersionProgressing=False, got ClusterVersionProgressing=True: ClusterOperatorsNotAvailable(Unable to apply 4.19.0-0.ci-2025-04-14-104353-test-ci-op-8wcy8f5p-latest: some cluster operators are not available)
eventually.go:227: - incorrect condition: wanted ClusterVersionAvailable=True, got ClusterVersionAvailable=False: FromClusterVersion
hypershift_framework.go:194: skipping postTeardown()
hypershift_framework.go:175: skipping teardown, already called
TestAzureScheduler/ValidateHostedCluster
32m35.51s
util.go:218: Successfully waited for kubeconfig to be published for HostedCluster e2e-clusters-7tglk/azure-scheduler-wtn6h in 1m36.025s
util.go:235: Successfully waited for kubeconfig secret to have data in 0s
eventually.go:104: Failed to get *v1.SelfSubjectReview: Post "https://api-azure-scheduler-wtn6h.aks-e2e.hypershift.azure.devcluster.openshift.com:443/apis/authentication.k8s.io/v1/selfsubjectreviews": EOF
eventually.go:104: Failed to get *v1.SelfSubjectReview: Post "https://api-azure-scheduler-wtn6h.aks-e2e.hypershift.azure.devcluster.openshift.com:443/apis/authentication.k8s.io/v1/selfsubjectreviews": net/http: TLS handshake timeout
eventually.go:104: Failed to get *v1.SelfSubjectReview: Post "https://api-azure-scheduler-wtn6h.aks-e2e.hypershift.azure.devcluster.openshift.com:443/apis/authentication.k8s.io/v1/selfsubjectreviews": EOF
eventually.go:104: Failed to get *v1.SelfSubjectReview: Post "https://api-azure-scheduler-wtn6h.aks-e2e.hypershift.azure.devcluster.openshift.com:443/apis/authentication.k8s.io/v1/selfsubjectreviews": net/http: TLS handshake timeout
util.go:281: Successfully waited for a successful connection to the guest API server in 59.475s
util.go:462: Failed to wait for 2 nodes to become ready in 30m0s: context deadline exceeded
eventually.go:383: observed invalid **v1.Node state after 30m0s
eventually.go:400: - observed **v1.Node collection invalid: expected 2 nodes, got 0
TestCreateCluster
44m11.22s
hypershift_framework.go:316: Successfully created hostedcluster e2e-clusters-jk7cl/create-cluster-qtklm in 1m34s
hypershift_framework.go:115: Summarizing unexpected conditions for HostedCluster create-cluster-qtklm
eventually.go:104: Failed to get *v1beta1.HostedCluster: client rate limiter Wait returned an error: context deadline exceeded
util.go:2123: Failed to wait for HostedCluster e2e-clusters-jk7cl/create-cluster-qtklm to have valid conditions in 2s: context deadline exceeded
eventually.go:224: observed *v1beta1.HostedCluster e2e-clusters-jk7cl/create-cluster-qtklm invalid at RV 21912 after 2s:
eventually.go:227: - incorrect condition: wanted ClusterVersionSucceeding=True, got ClusterVersionSucceeding=False: ClusterOperatorsNotAvailable(Cluster operators console, dns, image-registry, ingress, insights, kube-storage-version-migrator, monitoring, node-tuning, openshift-samples, service-ca, storage are not available)
eventually.go:227: - incorrect condition: wanted ClusterVersionAvailable=True, got ClusterVersionAvailable=False: FromClusterVersion
eventually.go:227: - incorrect condition: wanted ClusterVersionProgressing=False, got ClusterVersionProgressing=True: ClusterOperatorsNotAvailable(Unable to apply 4.19.0-0.ci-2025-04-14-104353-test-ci-op-8wcy8f5p-latest: some cluster operators are not available)
hypershift_framework.go:194: skipping postTeardown()
hypershift_framework.go:175: skipping teardown, already called
TestCreateCluster/ValidateHostedCluster
32m33.34s
util.go:218: Successfully waited for kubeconfig to be published for HostedCluster e2e-clusters-jk7cl/create-cluster-qtklm in 1m56.025s
util.go:235: Successfully waited for kubeconfig secret to have data in 25ms
eventually.go:104: Failed to get *v1.SelfSubjectReview: Post "https://api-create-cluster-qtklm.aks-e2e.hypershift.azure.devcluster.openshift.com:443/apis/authentication.k8s.io/v1/selfsubjectreviews": net/http: TLS handshake timeout
util.go:281: Successfully waited for a successful connection to the guest API server in 37.3s
eventually.go:258: Failed to get **v1.Node: context deadline exceeded
util.go:462: Failed to wait for 2 nodes to become ready in 30m0s: context deadline exceeded
eventually.go:383: observed invalid **v1.Node state after 30m0s
eventually.go:400: - observed **v1.Node collection invalid: expected 2 nodes, got 0
TestNodePool
0s
TestNodePool/HostedCluster0
47m20.06s
hypershift_framework.go:316: Successfully created hostedcluster e2e-clusters-gcn77/node-pool-rjqxl in 1m32s
hypershift_framework.go:115: Summarizing unexpected conditions for HostedCluster node-pool-rjqxl
util.go:2123: Successfully waited for HostedCluster e2e-clusters-gcn77/node-pool-rjqxl to have valid conditions in 25ms
hypershift_framework.go:194: skipping postTeardown()
hypershift_framework.go:175: skipping teardown, already called
TestNodePool/HostedCluster0/Main
160ms
util.go:218: Successfully waited for kubeconfig to be published for HostedCluster e2e-clusters-gcn77/node-pool-rjqxl in 25ms
util.go:235: Successfully waited for kubeconfig secret to have data in 25ms
util.go:281: Successfully waited for a successful connection to the guest API server in 100ms
TestNodePool/HostedCluster0/Main/TestMirrorConfigs
30m0.03s
nodepool_mirrorconfigs_test.go:61: Starting test MirrorConfigsTest
eventually.go:258: Failed to get **v1.Node: context deadline exceeded
util.go:462: Failed to wait for 1 nodes to become ready for NodePool e2e-clusters-gcn77/node-pool-rjqxl-test-mirrorconfigs in 30m0s: context deadline exceeded
eventually.go:383: observed invalid **v1.Node state after 30m0s
eventually.go:400: - observed **v1.Node collection invalid: expected 1 nodes, got 0
TestNodePool/HostedCluster0/Main/TestNTOMachineConfigAppliedInPlace
30m0.03s
nodepool_nto_machineconfig_test.go:68: Starting test NTOMachineConfigRolloutTest
util.go:462: Failed to wait for 2 nodes to become ready for NodePool e2e-clusters-gcn77/node-pool-rjqxl-test-ntomachineconfig-inplace in 30m0s: context deadline exceeded
eventually.go:383: observed invalid **v1.Node state after 30m0s
eventually.go:400: - observed **v1.Node collection invalid: expected 2 nodes, got 0
TestNodePool/HostedCluster0/Main/TestNTOMachineConfigGetsRolledOut
30m0.03s
nodepool_nto_machineconfig_test.go:68: Starting test NTOMachineConfigRolloutTest
eventually.go:258: Failed to get **v1.Node: context deadline exceeded
util.go:462: Failed to wait for 2 nodes to become ready for NodePool e2e-clusters-gcn77/node-pool-rjqxl-test-ntomachineconfig-replace in 30m0s: context deadline exceeded
eventually.go:383: observed invalid **v1.Node state after 30m0s
eventually.go:400: - observed **v1.Node collection invalid: expected 2 nodes, got 0
TestNodePool/HostedCluster0/Main/TestNTOPerformanceProfile
30m0.03s
nodepool_nto_performanceprofile_test.go:60: Starting test NTOPerformanceProfileTest
eventually.go:258: Failed to get **v1.Node: Get "https://api-node-pool-rjqxl.aks-e2e.hypershift.azure.devcluster.openshift.com:443/api/v1/nodes?labelSelector=hypershift.openshift.io%2FnodePool%3Dnode-pool-rjqxl-test-ntoperformanceprofile": context deadline exceeded
util.go:462: Failed to wait for 1 nodes to become ready for NodePool e2e-clusters-gcn77/node-pool-rjqxl-test-ntoperformanceprofile in 30m0s: context deadline exceeded
eventually.go:383: observed invalid **v1.Node state after 30m0s
eventually.go:400: - observed **v1.Node collection invalid: expected 1 nodes, got 0
TestNodePool/HostedCluster0/Main/TestNodePoolInPlaceUpgrade
30m0.03s
nodepool_upgrade_test.go:100: starting test NodePoolUpgradeTest
util.go:462: Failed to wait for 1 nodes to become ready for NodePool e2e-clusters-gcn77/node-pool-rjqxl-test-inplaceupgrade in 30m0s: context deadline exceeded
eventually.go:383: observed invalid **v1.Node state after 30m0s
eventually.go:400: - observed **v1.Node collection invalid: expected 1 nodes, got 0
TestNodePool/HostedCluster0/Main/TestNodePoolPrevReleaseN1
30m0.03s
nodepool_prev_release_test.go:31: Starting NodePoolPrevReleaseCreateTest.
eventually.go:258: Failed to get **v1.Node: context deadline exceeded
util.go:462: Failed to wait for 1 nodes to become ready for NodePool e2e-clusters-gcn77/node-pool-rjqxl-8grv7 in 30m0s: context deadline exceeded
eventually.go:383: observed invalid **v1.Node state after 30m0s
eventually.go:400: - observed **v1.Node collection invalid: expected 1 nodes, got 0
TestNodePool/HostedCluster0/Main/TestNodePoolPrevReleaseN2
30m0.03s
nodepool_prev_release_test.go:31: Starting NodePoolPrevReleaseCreateTest.
eventually.go:258: Failed to get **v1.Node: context deadline exceeded
util.go:462: Failed to wait for 1 nodes to become ready for NodePool e2e-clusters-gcn77/node-pool-rjqxl-bs4d7 in 30m0s: context deadline exceeded
eventually.go:383: observed invalid **v1.Node state after 30m0s
eventually.go:400: - observed **v1.Node collection invalid: expected 1 nodes, got 0
TestNodePool/HostedCluster0/Main/TestNodePoolReplaceUpgrade
30m0.03s
nodepool_upgrade_test.go:100: starting test NodePoolUpgradeTest
util.go:462: Failed to wait for 1 nodes to become ready for NodePool e2e-clusters-gcn77/node-pool-rjqxl-test-replaceupgrade in 30m0s: context deadline exceeded
eventually.go:383: observed invalid **v1.Node state after 30m0s
eventually.go:400: - observed **v1.Node collection invalid: expected 1 nodes, got 0
TestNodePool/HostedCluster0/Main/TestNodepoolMachineconfigGetsRolledout
30m0.03s
nodepool_machineconfig_test.go:55: Starting test NodePoolMachineconfigRolloutTest
util.go:462: Failed to wait for 1 nodes to become ready for NodePool e2e-clusters-gcn77/node-pool-rjqxl-test-machineconfig in 30m0s: context deadline exceeded
eventually.go:383: observed invalid **v1.Node state after 30m0s
eventually.go:400: - observed **v1.Node collection invalid: expected 1 nodes, got 0
TestNodePool/HostedCluster2
49m25.99s
hypershift_framework.go:316: Successfully created hostedcluster e2e-clusters-g65cv/node-pool-l4m2w in 1m35s
hypershift_framework.go:115: Summarizing unexpected conditions for HostedCluster node-pool-l4m2w
util.go:2123: Successfully waited for HostedCluster e2e-clusters-g65cv/node-pool-l4m2w to have valid conditions in 25ms
hypershift_framework.go:194: skipping postTeardown()
hypershift_framework.go:175: skipping teardown, already called
TestNodePool/HostedCluster2/Main
70ms
util.go:218: Successfully waited for kubeconfig to be published for HostedCluster e2e-clusters-g65cv/node-pool-l4m2w in 25ms
util.go:235: Successfully waited for kubeconfig secret to have data in 25ms
util.go:281: Successfully waited for a successful connection to the guest API server in 0s
TestNodePool/HostedCluster2/Main/TestAdditionalTrustBundlePropagation
30m0.02s
nodepool_additionalTrustBundlePropagation_test.go:36: Starting AdditionalTrustBundlePropagationTest.
util.go:462: Failed to wait for 1 nodes to become ready for NodePool e2e-clusters-g65cv/node-pool-l4m2w-test-additional-trust-bundle-propagation in 30m0s: context deadline exceeded
eventually.go:383: observed invalid **v1.Node state after 30m0s
eventually.go:400: - observed **v1.Node collection invalid: expected 1 nodes, got 0
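Every failed NodePool subtest above ends with the same signature from `util.go:462`: a fixed number of nodes expected, zero observed after a 30m wait. When triaging many reports with this shape, a small helper that tallies expected node counts per NodePool from the raw log text can make the common failure obvious at a glance. This is a hypothetical triage script, not part of the hypershift e2e framework:

```python
import re

# Matches the framework's node-readiness failure line, e.g.
# "util.go:462: Failed to wait for 1 nodes to become ready for NodePool
#  e2e-clusters-gcn77/node-pool-rjqxl-test-mirrorconfigs in 30m0s: ..."
# The "for NodePool <name>" part is optional (cluster-wide waits omit it).
PATTERN = re.compile(
    r"Failed to wait for (\d+) nodes to become ready"
    r"(?: for NodePool (\S+))? in ([^:\s]+)"
)

def tally_node_failures(log_text: str):
    """Return (nodepool, expected_nodes, timeout) tuples found in log_text."""
    results = []
    for m in PATTERN.finditer(log_text):
        expected, nodepool, timeout = m.groups()
        results.append((nodepool or "<cluster-wide>", int(expected), timeout))
    return results

sample = (
    "util.go:462: Failed to wait for 1 nodes to become ready for NodePool "
    "e2e-clusters-gcn77/node-pool-rjqxl-test-mirrorconfigs in 30m0s: "
    "context deadline exceeded"
)
print(tally_node_failures(sample))
# → [('e2e-clusters-gcn77/node-pool-rjqxl-test-mirrorconfigs', 1, '30m0s')]
```

A uniform "expected N, got 0" across every NodePool, combined with the HostedCluster control planes coming up fine, usually points at node provisioning in the cloud provider rather than at any individual test.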