PR #7469 - 01-13 16:33

Job: hypershift
FAILURE

Test Summary

Total Tests: 427
Passed: 389
Failed: 7
Skipped: 31

Failed Tests

TestCreateCluster
26m44.29s
create_cluster_test.go:2492: Sufficient zones available for InfrastructureAvailabilityPolicy HighlyAvailable
hypershift_framework.go:430: Successfully created hostedcluster e2e-clusters-4mrbn/create-cluster-69tdn in 2m10s
hypershift_framework.go:119: Summarizing unexpected conditions for HostedCluster create-cluster-69tdn
util.go:2951: Successfully waited for HostedCluster e2e-clusters-4mrbn/create-cluster-69tdn to have valid conditions in 0s
hypershift_framework.go:249: skipping postTeardown()
hypershift_framework.go:230: skipping teardown, already called
TestCreateCluster/ValidateHostedCluster
16m56.45s
util.go:286: Successfully waited for kubeconfig to be published for HostedCluster e2e-clusters-4mrbn/create-cluster-69tdn in 3m45s
util.go:303: Successfully waited for kubeconfig secret to have data in 0s
eventually.go:104: Failed to get *v1.SelfSubjectReview: Post "https://api-create-cluster-69tdn.service.ci.hypershift.devcluster.openshift.com:443/apis/authentication.k8s.io/v1/selfsubjectreviews": dial tcp 3.218.242.135:443: connect: connection refused
eventually.go:104: Failed to get *v1.SelfSubjectReview: Post "https://api-create-cluster-69tdn.service.ci.hypershift.devcluster.openshift.com:443/apis/authentication.k8s.io/v1/selfsubjectreviews": dial tcp 34.226.255.21:443: connect: connection refused
eventually.go:104: Failed to get *v1.SelfSubjectReview: Post "https://api-create-cluster-69tdn.service.ci.hypershift.devcluster.openshift.com:443/apis/authentication.k8s.io/v1/selfsubjectreviews": dial tcp 3.218.242.135:443: connect: connection refused
eventually.go:104: Failed to get *v1.SelfSubjectReview: Post "https://api-create-cluster-69tdn.service.ci.hypershift.devcluster.openshift.com:443/apis/authentication.k8s.io/v1/selfsubjectreviews": dial tcp 34.226.255.21:443: connect: connection refused
eventually.go:104: Failed to get *v1.SelfSubjectReview: Post "https://api-create-cluster-69tdn.service.ci.hypershift.devcluster.openshift.com:443/apis/authentication.k8s.io/v1/selfsubjectreviews": dial tcp 52.22.54.65:443: connect: connection refused
eventually.go:104: Failed to get *v1.SelfSubjectReview: Post "https://api-create-cluster-69tdn.service.ci.hypershift.devcluster.openshift.com:443/apis/authentication.k8s.io/v1/selfsubjectreviews": dial tcp 3.218.242.135:443: connect: connection refused
util.go:365: Successfully waited for a successful connection to the guest API server in 36.025s
util.go:567: Successfully waited for 3 nodes to become ready in 9m3s
util.go:600: Successfully waited for HostedCluster e2e-clusters-4mrbn/create-cluster-69tdn to rollout in 3m27s
util.go:2951: Successfully waited for HostedCluster e2e-clusters-4mrbn/create-cluster-69tdn to have valid conditions in 0s
TestCreateCluster/ValidateHostedCluster/EnsureNoCrashingPods
30ms
util.go:286: Successfully waited for kubeconfig to be published for HostedCluster e2e-clusters-4mrbn/create-cluster-69tdn in 0s
util.go:303: Successfully waited for kubeconfig secret to have data in 0s
util.go:784: Container manager in pod capi-provider-55964f5994-qhf6m has a restartCount > 0 (1)
TestNodePool
0s
TestNodePool/HostedCluster0
1h15m53.66s
hypershift_framework.go:430: Successfully created hostedcluster e2e-clusters-mkzqw/node-pool-lf7cq in 1m24s
hypershift_framework.go:119: Summarizing unexpected conditions for HostedCluster node-pool-lf7cq
util.go:2951: Failed to wait for HostedCluster e2e-clusters-mkzqw/node-pool-lf7cq to have valid conditions in 2s: context deadline exceeded
eventually.go:224: observed *v1beta1.HostedCluster e2e-clusters-mkzqw/node-pool-lf7cq invalid at RV 108454 after 2s:
eventually.go:227: - incorrect condition: wanted ClusterVersionSucceeding=False, got ClusterVersionSucceeding=True: FromClusterVersion
eventually.go:227: - incorrect condition: wanted DataPlaneConnectionAvailable=Unknown, got DataPlaneConnectionAvailable=True: AsExpected(All is well)
eventually.go:227: - incorrect condition: wanted ClusterVersionProgressing=True, got ClusterVersionProgressing=False: FromClusterVersion(Cluster version is 4.22.0-0.ci-2026-01-13-164308-test-ci-op-nc8pb738-latest)
eventually.go:227: - incorrect condition: wanted ClusterVersionAvailable=False, got ClusterVersionAvailable=True: FromClusterVersion(Done applying 4.22.0-0.ci-2026-01-13-164308-test-ci-op-nc8pb738-latest)
hypershift_framework.go:249: skipping postTeardown()
hypershift_framework.go:230: skipping teardown, already called
TestNodePool/HostedCluster0/Main
10ms
util.go:286: Successfully waited for kubeconfig to be published for HostedCluster e2e-clusters-mkzqw/node-pool-lf7cq in 0s
util.go:303: Successfully waited for kubeconfig secret to have data in 0s
util.go:365: Successfully waited for a successful connection to the guest API server in 0s
TestNodePool/HostedCluster0/Main/TestNodePoolAutoRepair
30m0.01s
eventually.go:258: Failed to get **v1.Node: context deadline exceeded
util.go:567: Failed to wait for 1 nodes to become ready for NodePool e2e-clusters-mkzqw/node-pool-lf7cq-test-autorepair in 30m0s: context deadline exceeded
eventually.go:383: observed invalid **v1.Node state after 30m0s
eventually.go:400: - observed **v1.Node /ip-10-0-14-231.ec2.internal invalid: incorrect condition: wanted Ready=True, got Ready=False: KubeletNotReady(container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?)
util.go:567: *v1.Node /ip-10-0-14-231.ec2.internal conditions:
util.go:567: MemoryPressure=False: KubeletHasSufficientMemory(kubelet has sufficient memory available)
util.go:567: DiskPressure=False: KubeletHasNoDiskPressure(kubelet has no disk pressure)
util.go:567: PIDPressure=False: KubeletHasSufficientPID(kubelet has sufficient PID available)
util.go:567: Ready=False: KubeletNotReady(container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?)