PR #7460 - 02-26 03:08

Job: hypershift
FAILURE

Test Summary

Total Tests: 457
Passed:      427
Failed:        8
Skipped:      22

Failed Tests

TestCreateCluster
43m1.88s
create_cluster_test.go:2422: Sufficient zones available for InfrastructureAvailabilityPolicy HighlyAvailable
hypershift_framework.go:459: Successfully created hostedcluster e2e-clusters-kl9pd/create-cluster-6mtxm in 33s
hypershift_framework.go:128: Summarizing unexpected conditions for HostedCluster create-cluster-6mtxm
util.go:2974: Failed to wait for HostedCluster e2e-clusters-kl9pd/create-cluster-6mtxm to have valid conditions in 2s: context deadline exceeded
eventually.go:224: observed *v1beta1.HostedCluster e2e-clusters-kl9pd/create-cluster-6mtxm invalid at RV 110571 after 2s: incorrect condition: wanted ClusterVersionSucceeding=True, got ClusterVersionSucceeding=False: ClusterOperatorNotAvailable(Cluster operator storage is not available)
hypershift_framework.go:278: skipping postTeardown()
hypershift_framework.go:256: skipping teardown, already called
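For reference, a minimal sketch of the kind of condition poll the framework performs here, written against the HostedCluster CRD with the dynamic client. The kubeconfig path, interval, and timeout are illustrative assumptions, not the framework's actual values:

```go
package main

import (
	"context"
	"fmt"
	"log"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/apis/meta/v1/unstructured"
	"k8s.io/apimachinery/pkg/runtime/schema"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/dynamic"
	"k8s.io/client-go/tools/clientcmd"
)

// hostedClusterGVR targets the HyperShift HostedCluster CRD.
var hostedClusterGVR = schema.GroupVersionResource{
	Group:    "hypershift.openshift.io",
	Version:  "v1beta1",
	Resource: "hostedclusters",
}

// conditionStatus returns the status of a named condition from status.conditions.
func conditionStatus(obj *unstructured.Unstructured, condType string) string {
	conds, _, _ := unstructured.NestedSlice(obj.Object, "status", "conditions")
	for _, c := range conds {
		m, ok := c.(map[string]interface{})
		if !ok {
			continue
		}
		if m["type"] == condType {
			s, _ := m["status"].(string)
			return s
		}
	}
	return ""
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/mgmt-kubeconfig") // hypothetical path
	if err != nil {
		log.Fatal(err)
	}
	client, err := dynamic.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}
	// Poll until ClusterVersionSucceeding=True, the condition the test wanted above.
	err = wait.PollUntilContextTimeout(context.Background(), 10*time.Second, 30*time.Minute, true,
		func(ctx context.Context) (bool, error) {
			hc, err := client.Resource(hostedClusterGVR).Namespace("e2e-clusters-kl9pd").
				Get(ctx, "create-cluster-6mtxm", metav1.GetOptions{})
			if err != nil {
				return false, nil // tolerate transient errors while polling
			}
			return conditionStatus(hc, "ClusterVersionSucceeding") == "True", nil
		})
	if err != nil {
		log.Fatal("HostedCluster never reported ClusterVersionSucceeding=True: ", err)
	}
	fmt.Println("cluster version succeeded")
}
```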
TestCreateCluster/Main
18m30.08s
util.go:291: Successfully waited for kubeconfig to be published for HostedCluster e2e-clusters-kl9pd/create-cluster-6mtxm in 0s
util.go:308: Successfully waited for kubeconfig secret to have data in 0s
util.go:370: Successfully waited for a successful connection to the guest API server in 0s
create_cluster_test.go:2464: fetching mgmt kubeconfig
util.go:291: Successfully waited for kubeconfig to be published for HostedCluster e2e-clusters-kl9pd/create-cluster-6mtxm in 0s
util.go:308: Successfully waited for kubeconfig secret to have data in 0s
aws_ccm.go:50: Testing AWS CCM with customizations on platform AWS
aws_ccm.go:51: Feature gate enabled: true
aws_ccm.go:156: Successfully waited for LoadBalancer service to have ingress hostname in 6s
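The ingress-hostname wait at aws_ccm.go:156 is the standard poll on a Service's LoadBalancer status. A minimal sketch of that wait, assuming a guest kubeconfig path and the service name from the sub-test below:

```go
package main

import (
	"context"
	"fmt"
	"log"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/guest-kubeconfig") // hypothetical path
	if err != nil {
		log.Fatal(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}
	var hostname string
	// Poll the Service until the cloud provider publishes an ingress hostname.
	err = wait.PollUntilContextTimeout(context.Background(), 5*time.Second, 5*time.Minute, true,
		func(ctx context.Context) (bool, error) {
			svc, err := client.CoreV1().Services("test-ccm-nlb-sg").
				Get(ctx, "test-ccm-nlb-sg-svc", metav1.GetOptions{})
			if err != nil {
				return false, nil // tolerate transient errors while polling
			}
			for _, ing := range svc.Status.LoadBalancer.Ingress {
				if ing.Hostname != "" {
					hostname = ing.Hostname
					return true, nil
				}
			}
			return false, nil
		})
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("LoadBalancer hostname:", hostname)
}
```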
TestCreateCluster/Main/When_AWSServiceLBNetworkSecurityGroup_is_enabled_it_must_create_a_LoadBalancer_NLB_with_managed_security_group_attached
2m44.41s
aws_ccm.go:118: Creating test namespace test-ccm-nlb-sg in guest cluster
aws_ccm.go:150: Creating LoadBalancer service test-ccm-nlb-sg/test-ccm-nlb-sg-svc
aws_ccm.go:179: LoadBalancer provisioned with hostname: a0a8a53977d034c239ee834e7eae2562-abc5ecdb266805af.elb.us-east-1.amazonaws.com
aws_ccm.go:183: Extracted load balancer name: a0a8a53977d034c239ee834e7eae2562
aws_ccm.go:185: Verifying load balancer has security groups using AWS SDK
aws_ccm.go:201: Waiting for load balancer "a0a8a53977d034c239ee834e7eae2562" to become available (up to ~3 minutes)
aws_ccm.go:226: Describing load balancer to check for security groups
aws_ccm.go:233: Load balancer ARN: arn:aws:elasticloadbalancing:us-east-1:820196288204:loadbalancer/net/a0a8a53977d034c239ee834e7eae2562/abc5ecdb266805af
aws_ccm.go:234: Load balancer Type: network
aws_ccm.go:235: Load balancer Security Groups: []
aws_ccm.go:238: load balancer should have security groups attached when NLBSecurityGroupMode = Managed
    Expected
        <int>: 0
    to be >
        <int>: 0
aws_ccm.go:122: Cleaning up test namespace test-ccm-nlb-sg
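The failing assertion inspects the NLB's attached security groups via the AWS SDK. A standalone sketch of that lookup with aws-sdk-go-v2, using the region and load balancer name from the log above; the helper name is ours, not the test's:

```go
package main

import (
	"context"
	"fmt"
	"log"

	"github.com/aws/aws-sdk-go-v2/config"
	elbv2 "github.com/aws/aws-sdk-go-v2/service/elasticloadbalancingv2"
)

// lbSecurityGroups (hypothetical helper) looks up a load balancer by name and
// returns its attached security groups. With NLBSecurityGroupMode=Managed the
// CCM should have attached a managed security group, so an empty result is
// exactly the failure seen above.
func lbSecurityGroups(ctx context.Context, client *elbv2.Client, name string) ([]string, error) {
	out, err := client.DescribeLoadBalancers(ctx, &elbv2.DescribeLoadBalancersInput{
		Names: []string{name},
	})
	if err != nil {
		return nil, err
	}
	if len(out.LoadBalancers) == 0 {
		return nil, fmt.Errorf("load balancer %q not found", name)
	}
	return out.LoadBalancers[0].SecurityGroups, nil
}

func main() {
	ctx := context.Background()
	cfg, err := config.LoadDefaultConfig(ctx, config.WithRegion("us-east-1"))
	if err != nil {
		log.Fatal(err)
	}
	sgs, err := lbSecurityGroups(ctx, elbv2.NewFromConfig(cfg), "a0a8a53977d034c239ee834e7eae2562")
	if err != nil {
		log.Fatal(err)
	}
	if len(sgs) == 0 {
		log.Fatal("load balancer should have security groups attached when NLBSecurityGroupMode = Managed")
	}
	fmt.Println("security groups:", sgs)
}
```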
TestCreateCluster/Main/When_NLBSecurityGroupMode_is_enabled_it_must_have_config_NLBSecurityGroupMode=Managed_entry_in_cloud-config_configmap
0s
aws_ccm.go:67: Validating aws-cloud-config ConfigMap contains NLBSecurityGroupMode = Managed
aws_ccm.go:85: verifying NLBSecurityGroupMode is present in cloud config
aws_ccm.go:86: NLBSecurityGroupMode must be present in cloud-config when feature gate is enabled
    Expected
        <string>: [Global]
        Zone = us-east-1a
        VPC = vpc-0e7e4241dfa9beb93
        KubernetesClusterID = create-cluster-6mtxm
        SubnetID = subnet-0088acf1da5180dda
        ClusterServiceLoadBalancerHealthProbeMode = Shared
    to contain substring
        <string>: NLBSecurityGroupMode
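The assertion simply greps the rendered cloud config for the NLBSecurityGroupMode key. A rough equivalent against the aws-cloud-config ConfigMap, assuming a management kubeconfig path and HyperShift's usual <namespace>-<name> control-plane namespace convention:

```go
package main

import (
	"context"
	"fmt"
	"log"
	"strings"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/mgmt-kubeconfig") // hypothetical path
	if err != nil {
		log.Fatal(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}
	// Control-plane namespace assumed from the <namespace>-<name> convention.
	cm, err := client.CoreV1().ConfigMaps("e2e-clusters-kl9pd-create-cluster-6mtxm").
		Get(context.Background(), "aws-cloud-config", metav1.GetOptions{})
	if err != nil {
		log.Fatal(err)
	}
	// The INI-style cloud config lives in one of the ConfigMap's data keys;
	// the failing check is that it contains the NLBSecurityGroupMode entry.
	for key, body := range cm.Data {
		if strings.Contains(body, "NLBSecurityGroupMode") {
			fmt.Printf("found NLBSecurityGroupMode in data key %q\n", key)
			return
		}
	}
	log.Fatal("NLBSecurityGroupMode must be present in cloud-config when feature gate is enabled")
}
```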
TestNodePool
10ms
TestNodePool/HostedCluster0
1h9m55.87s
hypershift_framework.go:459: Successfully created hostedcluster e2e-clusters-s2ngn/node-pool-jlbqn in 33s
hypershift_framework.go:128: Summarizing unexpected conditions for HostedCluster node-pool-jlbqn
util.go:2974: Failed to wait for HostedCluster e2e-clusters-s2ngn/node-pool-jlbqn to have valid conditions in 2s: context deadline exceeded
eventually.go:224: observed *v1beta1.HostedCluster e2e-clusters-s2ngn/node-pool-jlbqn invalid at RV 117313 after 2s:
eventually.go:227: - incorrect condition: wanted ClusterVersionAvailable=False, got ClusterVersionAvailable=True: FromClusterVersion(Done applying 4.22.0-0.ci-2026-02-26-032316-test-ci-op-1lptx8vl-latest)
eventually.go:227: - incorrect condition: wanted ClusterVersionSucceeding=False, got ClusterVersionSucceeding=True: FromClusterVersion
eventually.go:227: - incorrect condition: wanted DataPlaneConnectionAvailable=Unknown, got DataPlaneConnectionAvailable=True: AsExpected(All is well)
eventually.go:227: - incorrect condition: wanted ClusterVersionProgressing=True, got ClusterVersionProgressing=False: FromClusterVersion(Cluster version is 4.22.0-0.ci-2026-02-26-032316-test-ci-op-1lptx8vl-latest)
hypershift_framework.go:278: skipping postTeardown()
hypershift_framework.go:256: skipping teardown, already called
TestNodePool/HostedCluster0/Main
80ms
util.go:291: Successfully waited for kubeconfig to be published for HostedCluster e2e-clusters-s2ngn/node-pool-jlbqn in 0s
util.go:308: Successfully waited for kubeconfig secret to have data in 0s
util.go:370: Successfully waited for a successful connection to the guest API server in 0s
TestNodePool/HostedCluster0/Main/TestSpotTerminationHandler
26m54.02s
util.go:565: Successfully waited for 1 nodes to become ready for NodePool e2e-clusters-s2ngn/node-pool-jlbqn-test-spot-termination in 16m54s
nodepool_test.go:404: Failed to wait for NodePool e2e-clusters-s2ngn/node-pool-jlbqn-test-spot-termination to have correct status in 10m0s: context deadline exceeded
eventually.go:224: observed *v1beta1.NodePool e2e-clusters-s2ngn/node-pool-jlbqn-test-spot-termination invalid at RV 98988 after 10m0s:
eventually.go:227: - incorrect condition: wanted AllNodesHealthy=True, got AllNodesHealthy=False: NodeConditionsFailed(Machine node-pool-jlbqn-test-spot-termination-6d6wq-lrqmd: NodeConditionsFailed )
eventually.go:227: - incorrect condition: wanted Ready=True, got Ready=False: WaitingForAvailableMachines(Minimum availability requires 1 replicas, current 0 available)
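AllNodesHealthy=False with NodeConditionsFailed points at a failing node condition on the spot instance's node. A small triage sketch (not part of the test) that lists guest-cluster nodes and prints any condition deviating from the healthy baseline, i.e. Ready=True and all other conditions False; the kubeconfig path is an assumption:

```go
package main

import (
	"context"
	"fmt"
	"log"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/guest-kubeconfig") // hypothetical path
	if err != nil {
		log.Fatal(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}
	nodes, err := client.CoreV1().Nodes().List(context.Background(), metav1.ListOptions{})
	if err != nil {
		log.Fatal(err)
	}
	for _, node := range nodes.Items {
		for _, cond := range node.Status.Conditions {
			// Healthy baseline: Ready is True; pressure/unavailable conditions are False.
			ready := cond.Type == corev1.NodeReady
			if (ready && cond.Status != corev1.ConditionTrue) ||
				(!ready && cond.Status != corev1.ConditionFalse) {
				fmt.Printf("%s: %s=%s (%s: %s)\n",
					node.Name, cond.Type, cond.Status, cond.Reason, cond.Message)
			}
		}
	}
}
```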