PR #6745 - 09-15 07:46

Job: hypershift
FAILURE

Test Summary

Total Tests: 402
Passed: 369
Failed: 9
Skipped: 24

Failed Tests

TestCreateCluster
44m58.7s
create_cluster_test.go:1832: Sufficient zones available for InfrastructureAvailabilityPolicy HighlyAvailable
hypershift_framework.go:396: Successfully created hostedcluster e2e-clusters-wrzmz/create-cluster-cbx65 in 2m30s
hypershift_framework.go:119: Summarizing unexpected conditions for HostedCluster create-cluster-cbx65
util.go:2687: Successfully waited for HostedCluster e2e-clusters-wrzmz/create-cluster-cbx65 to have valid conditions in 0s
hypershift_framework.go:239: skipping postTeardown()
hypershift_framework.go:220: skipping teardown, already called
TestCreateCluster/Main
5m56.39s
util.go:259: Successfully waited for kubeconfig to be published for HostedCluster e2e-clusters-wrzmz/create-cluster-cbx65 in 0s
util.go:276: Successfully waited for kubeconfig secret to have data in 0s
util.go:330: Successfully waited for a successful connection to the guest API server in 0s
create_cluster_test.go:1850: fetching mgmt kubeconfig
util.go:259: Successfully waited for kubeconfig to be published for HostedCluster e2e-clusters-wrzmz/create-cluster-cbx65 in 0s
util.go:276: Successfully waited for kubeconfig secret to have data in 0s
TestCreateCluster/Main/EnsureGlobalPullSecret
35.47s
util.go:259: Successfully waited for kubeconfig to be published for HostedCluster e2e-clusters-wrzmz/create-cluster-cbx65 in 0s
util.go:276: Successfully waited for kubeconfig secret to have data in 0s
util.go:330: Successfully waited for a successful connection to the guest API server in 25ms
util.go:1853: Timed out after 30.001s. global-pull-secret secret is not updated
    Expected success, but got an error:
        <*errors.errorString | 0xc001289ab0>:
        global-pull-secret secret is equal to the old global-pull-secret secret, should be different
        {
            s: "global-pull-secret secret is equal to the old global-pull-secret secret, should be different",
        }
TestCreateCluster/Main/EnsureGlobalPullSecret/Check_if_GlobalPullSecret_secret_is_updated_in_the_DataPlane
30s
testing.go:1679: test executed panic(nil) or runtime.Goexit: subtest may have called FailNow on a parent test
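For context, the EnsureGlobalPullSecret failure above is a 30-second poll that expects the global-pull-secret data in the data plane to diverge from a snapshot taken before the update. A minimal Gomega-style sketch of that kind of assertion follows; the helper name and client wiring are illustrative assumptions, not the actual hypershift e2e helper.

package e2esketch

import (
	"context"
	"fmt"
	"reflect"
	"testing"
	"time"

	"github.com/onsi/gomega"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// expectGlobalPullSecretUpdated polls until the secret's data differs from the
// pre-update snapshot, timing out after 30s much like util.go:1853 above.
// This is a sketch under assumed wiring, not the hypershift test code.
func expectGlobalPullSecretUpdated(ctx context.Context, t *testing.T, guest kubernetes.Interface, oldSecret *corev1.Secret) {
	g := gomega.NewWithT(t)
	g.Eventually(func() error {
		current, err := guest.CoreV1().Secrets(oldSecret.Namespace).Get(ctx, oldSecret.Name, metav1.GetOptions{})
		if err != nil {
			return err
		}
		if reflect.DeepEqual(current.Data, oldSecret.Data) {
			return fmt.Errorf("global-pull-secret secret is equal to the old global-pull-secret secret, should be different")
		}
		return nil
	}).WithTimeout(30 * time.Second).WithPolling(time.Second).Should(gomega.Succeed())
}

If the data-plane secret is never reconciled, the poll exhausts its timeout and reports exactly the "equal to the old global-pull-secret secret" error seen in the log, which then fails the parent subtest.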
TestNodePool
0s
TestNodePool/HostedCluster0
47m39.75s
hypershift_framework.go:396: Successfully created hostedcluster e2e-clusters-2lq59/node-pool-8qkgt in 24s
hypershift_framework.go:119: Summarizing unexpected conditions for HostedCluster node-pool-8qkgt
util.go:2687: Failed to wait for HostedCluster e2e-clusters-2lq59/node-pool-8qkgt to have valid conditions in 2s: context deadline exceeded
eventually.go:224: observed *v1beta1.HostedCluster e2e-clusters-2lq59/node-pool-8qkgt invalid at RV 93191 after 2s:
eventually.go:227: - incorrect condition: wanted ClusterVersionSucceeding=False, got ClusterVersionSucceeding=True: FromClusterVersion
eventually.go:227: - incorrect condition: wanted ClusterVersionProgressing=True, got ClusterVersionProgressing=False: FromClusterVersion(Cluster version is 4.21.0-0.ci-2025-09-15-080955-test-ci-op-wndd2i9m-latest)
eventually.go:227: - incorrect condition: wanted ClusterVersionAvailable=False, got ClusterVersionAvailable=True: FromClusterVersion(Done applying 4.21.0-0.ci-2025-09-15-080955-test-ci-op-wndd2i9m-latest)
hypershift_framework.go:239: skipping postTeardown()
hypershift_framework.go:220: skipping teardown, already called
TestNodePool/HostedCluster0/Main
20ms
util.go:259: Successfully waited for kubeconfig to be published for HostedCluster e2e-clusters-2lq59/node-pool-8qkgt in 0s
util.go:276: Successfully waited for kubeconfig secret to have data in 0s
util.go:330: Successfully waited for a successful connection to the guest API server in 0s
TestNodePool/HostedCluster0/Main/TestNTOMachineConfigAppliedInPlace
33m50.05s
nodepool_nto_machineconfig_test.go:68: Starting test NTOMachineConfigRolloutTest
util.go:513: Successfully waited for 2 nodes to become ready for NodePool e2e-clusters-2lq59/node-pool-8qkgt-test-ntomachineconfig-inplace in 10m45s
nodepool_test.go:354: Successfully waited for NodePool e2e-clusters-2lq59/node-pool-8qkgt-test-ntomachineconfig-inplace to have correct status in 0s
util.go:427: Successfully waited for NodePool e2e-clusters-2lq59/node-pool-8qkgt-test-ntomachineconfig-inplace to start config update in 45s
util.go:443: Successfully waited for NodePool e2e-clusters-2lq59/node-pool-8qkgt-test-ntomachineconfig-inplace to finish config update in 7m20s
eventually.go:258: Failed to get **v1.Pod: context deadline exceeded
nodepool_machineconfig_test.go:166: Failed to wait for all pods in the DaemonSet kube-system/node-pool-8qkgt-test-ntomachineconfig-inplace to be ready in 15m0s: context deadline exceeded
eventually.go:383: observed invalid **v1.Pod state after 15m0s
eventually.go:400: - observed **v1.Pod collection invalid: expected 2 Pods, got 1
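The config rollout itself completed; the failure is the final 15-minute wait for the verification DaemonSet to have a ready pod on every NodePool node (2 expected, only 1 ever became ready). A simplified client-go sketch of that kind of wait, using DaemonSet status counts rather than the test's per-pod inspection; the helper name and polling intervals are assumptions.

package e2esketch

import (
	"context"
	"fmt"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

// waitForDaemonSetReady is a simplified stand-in for the e2e wait that timed out above:
// it polls until the DaemonSet reports as many ready pods as it wants scheduled.
// In the failing run the DaemonSet wanted 2 pods (one per NodePool node) but only 1 was ready.
func waitForDaemonSetReady(ctx context.Context, guest kubernetes.Interface, namespace, name string) error {
	return wait.PollUntilContextTimeout(ctx, 10*time.Second, 15*time.Minute, true, func(ctx context.Context) (bool, error) {
		ds, err := guest.AppsV1().DaemonSets(namespace).Get(ctx, name, metav1.GetOptions{})
		if err != nil {
			return false, nil // treat lookup errors as transient and keep polling
		}
		fmt.Printf("daemonset %s/%s: %d/%d pods ready\n", namespace, name, ds.Status.NumberReady, ds.Status.DesiredNumberScheduled)
		return ds.Status.DesiredNumberScheduled > 0 && ds.Status.NumberReady == ds.Status.DesiredNumberScheduled, nil
	})
}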
TestNodePool/HostedCluster0/Main/TestNodePoolInPlaceUpgrade
31m8.88s
nodepool_upgrade_test.go:100: starting test NodePoolUpgradeTest
util.go:513: Successfully waited for 1 nodes to become ready for NodePool e2e-clusters-2lq59/node-pool-8qkgt-test-inplaceupgrade in 10m48s
nodepool_test.go:354: Successfully waited for NodePool e2e-clusters-2lq59/node-pool-8qkgt-test-inplaceupgrade to have correct status in 0s
nodepool_upgrade_test.go:160: Validating all Nodes have the synced labels and taints
nodepool_upgrade_test.go:163: Successfully waited for NodePool e2e-clusters-2lq59/node-pool-8qkgt-test-inplaceupgrade to have version 4.21.0-0.ci-2025-09-13-184910 in 0s
nodepool_upgrade_test.go:180: Updating NodePool image. Image: registry.build01.ci.openshift.org/ci-op-wndd2i9m/release@sha256:40b60c3ada76948728c7cb64dadd95d9ca4dd380d4804e73468e6e98abdb19d7
nodepool_upgrade_test.go:187: Successfully waited for NodePool e2e-clusters-2lq59/node-pool-8qkgt-test-inplaceupgrade to start the upgrade in 3s
nodepool_upgrade_test.go:200: Failed to wait for NodePool e2e-clusters-2lq59/node-pool-8qkgt-test-inplaceupgrade to have version 4.21.0-0.ci-2025-09-15-080955-test-ci-op-wndd2i9m-latest in 20m0s: context deadline exceeded
eventually.go:224: observed *v1beta1.NodePool e2e-clusters-2lq59/node-pool-8qkgt-test-inplaceupgrade invalid at RV 69965 after 20m0s: wanted version 4.21.0-0.ci-2025-09-15-080955-test-ci-op-wndd2i9m-latest, got 4.21.0-0.ci-2025-09-13-184910
nodepool_upgrade_test.go:200: *v1beta1.NodePool e2e-clusters-2lq59/node-pool-8qkgt-test-inplaceupgrade conditions:
nodepool_upgrade_test.go:200: ValidPlatformConfig=True: AsExpected(All is well)
nodepool_upgrade_test.go:200: AutoscalingEnabled=False: AsExpected
nodepool_upgrade_test.go:200: UpdateManagementEnabled=True: AsExpected
nodepool_upgrade_test.go:200: ValidArchPlatform=True: AsExpected
nodepool_upgrade_test.go:200: ReconciliationActive=True: ReconciliationActive(Reconciliation active on resource)
nodepool_upgrade_test.go:200: UpdatingVersion=False: InplaceUpgradeFailed(Node ip-10-0-3-249.ec2.internal in nodepool degraded: disk validation failed: content mismatch for file "/var/lib/kubelet/config.json")
nodepool_upgrade_test.go:200: ValidGeneratedPayload=True: AsExpected(Payload generated successfully)
nodepool_upgrade_test.go:200: ReachedIgnitionEndpoint=True: AsExpected
nodepool_upgrade_test.go:200: ValidReleaseImage=True: AsExpected(Using release image: registry.build01.ci.openshift.org/ci-op-wndd2i9m/release@sha256:40b60c3ada76948728c7cb64dadd95d9ca4dd380d4804e73468e6e98abdb19d7)
nodepool_upgrade_test.go:200: ValidMachineConfig=True: AsExpected
nodepool_upgrade_test.go:200: UpdatingConfig=False: InplaceUpgradeFailed(Node ip-10-0-3-249.ec2.internal in nodepool degraded: disk validation failed: content mismatch for file "/var/lib/kubelet/config.json")
nodepool_upgrade_test.go:200: AllMachinesReady=True: AsExpected(All is well)
nodepool_upgrade_test.go:200: AllNodesHealthy=True: AsExpected(All is well)
nodepool_upgrade_test.go:200: ValidPlatformImage=True: AsExpected(Bootstrap AMI is "ami-09d23adad19cdb25c")
nodepool_upgrade_test.go:200: AWSSecurityGroupAvailable=True: AsExpected(NodePool has a default security group)
nodepool_upgrade_test.go:200: ValidTuningConfig=True: AsExpected
nodepool_upgrade_test.go:200: UpdatingPlatformMachineTemplate=False: AsExpected
nodepool_upgrade_test.go:200: AutorepairEnabled=False: AsExpected
nodepool_upgrade_test.go:200: Ready=True: AsExpected
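The version wait here times out because the in-place upgrade itself degraded: UpdatingVersion and UpdatingConfig both report InplaceUpgradeFailed with a disk validation content mismatch on "/var/lib/kubelet/config.json" for node ip-10-0-3-249.ec2.internal. When triaging, the condition dump above is the key signal; a hedged sketch of reproducing that dump with the dynamic client follows (the GroupVersionResource for the NodePool CRD is assumed here, not taken from the log).

package e2esketch

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/apis/meta/v1/unstructured"
	"k8s.io/apimachinery/pkg/runtime/schema"
	"k8s.io/client-go/dynamic"
)

// dumpNodePoolConditions prints a NodePool's status conditions in the same
// "Type=Status: Reason(Message)" shape the test log uses above.
// The GVR hypershift.openshift.io/v1beta1 nodepools is an assumption for this sketch.
func dumpNodePoolConditions(ctx context.Context, client dynamic.Interface, namespace, name string) error {
	gvr := schema.GroupVersionResource{Group: "hypershift.openshift.io", Version: "v1beta1", Resource: "nodepools"}
	np, err := client.Resource(gvr).Namespace(namespace).Get(ctx, name, metav1.GetOptions{})
	if err != nil {
		return err
	}
	conditions, _, err := unstructured.NestedSlice(np.Object, "status", "conditions")
	if err != nil {
		return err
	}
	for _, c := range conditions {
		cond, ok := c.(map[string]interface{})
		if !ok {
			continue
		}
		fmt.Printf("%v=%v: %v(%v)\n", cond["type"], cond["status"], cond["reason"], cond["message"])
	}
	return nil
}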