PR #6745 - 09-15 16:11

Job: hypershift
FAILURE

Test Summary

Total Tests: 267
Passed: 224
Failed: 13
Skipped: 30

Failed Tests

TestAzureScheduler
29m30.31s
hypershift_framework.go:396: Successfully created hostedcluster e2e-clusters-mq6px/azure-scheduler-vbxn6 in 2m48s
hypershift_framework.go:119: Summarizing unexpected conditions for HostedCluster azure-scheduler-vbxn6
util.go:2687: Successfully waited for HostedCluster e2e-clusters-mq6px/azure-scheduler-vbxn6 to have valid conditions in 50ms
hypershift_framework.go:239: skipping postTeardown()
hypershift_framework.go:220: skipping teardown, already called
TestAzureScheduler/ValidateHostedCluster
15m44.91s
util.go:259: Successfully waited for kubeconfig to be published for HostedCluster e2e-clusters-mq6px/azure-scheduler-vbxn6 in 2m45.05s
util.go:276: Successfully waited for kubeconfig secret to have data in 50ms
eventually.go:104: Failed to get *v1.SelfSubjectReview: Post "https://api-azure-scheduler-vbxn6.aks-e2e.hypershift.azure.devcluster.openshift.com:443/apis/authentication.k8s.io/v1/selfsubjectreviews": EOF
util.go:330: Successfully waited for a successful connection to the guest API server in 14.35s
util.go:513: Successfully waited for 2 nodes to become ready in 9m18.05s
util.go:546: Successfully waited for HostedCluster e2e-clusters-mq6px/azure-scheduler-vbxn6 to rollout in 3m21.075s
util.go:2687: Successfully waited for HostedCluster e2e-clusters-mq6px/azure-scheduler-vbxn6 to have valid conditions in 50ms
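
Note: the EOF on the SelfSubjectReview POST was transient; the connection probe succeeded about 14s later. A minimal sketch of that style of connectivity check, assuming client-go and a guest-cluster kubeconfig; waitForGuestAPI is an illustrative name, not the test suite's helper:

    // Sketch: retry a SelfSubjectReview against the guest API server until it answers,
    // tolerating transient errors such as the EOF seen in the log above.
    package e2eprobe

    import (
    	"context"
    	"fmt"
    	"time"

    	authv1 "k8s.io/api/authentication/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/apimachinery/pkg/util/wait"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    func waitForGuestAPI(ctx context.Context, kubeconfig []byte) error {
    	cfg, err := clientcmd.RESTConfigFromKubeConfig(kubeconfig)
    	if err != nil {
    		return err
    	}
    	client, err := kubernetes.NewForConfig(cfg)
    	if err != nil {
    		return err
    	}
    	return wait.PollUntilContextTimeout(ctx, 5*time.Second, 5*time.Minute, true,
    		func(ctx context.Context) (bool, error) {
    			_, err := client.AuthenticationV1().SelfSubjectReviews().
    				Create(ctx, &authv1.SelfSubjectReview{}, metav1.CreateOptions{})
    			if err != nil {
    				fmt.Printf("guest API not reachable yet: %v\n", err)
    				return false, nil // retry on transient failures
    			}
    			return true, nil
    		})
    }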
TestAzureScheduler/ValidateHostedCluster/EnsureNoCrashingPods
190ms
util.go:259: Successfully waited for kubeconfig to be published for HostedCluster e2e-clusters-mq6px/azure-scheduler-vbxn6 in 50ms
util.go:276: Successfully waited for kubeconfig secret to have data in 25ms
util.go:755: Container webhook in pod network-node-identity-7b5b6b47fd-pvhcw has a restartCount > 0 (1)
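
Note: the failure here is the restart-count check itself: the webhook container in network-node-identity-7b5b6b47fd-pvhcw restarted once. A minimal sketch of such a check with a typed client-go clientset; findCrashingContainers is an illustrative name, not the HyperShift helper:

    // Sketch: list pods in a namespace and flag any container that has restarted.
    package podcheck

    import (
    	"context"
    	"fmt"

    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    )

    func findCrashingContainers(ctx context.Context, c kubernetes.Interface, namespace string) ([]string, error) {
    	pods, err := c.CoreV1().Pods(namespace).List(ctx, metav1.ListOptions{})
    	if err != nil {
    		return nil, err
    	}
    	var crashing []string
    	for _, pod := range pods.Items {
    		for _, cs := range pod.Status.ContainerStatuses {
    			if cs.RestartCount > 0 {
    				crashing = append(crashing, fmt.Sprintf(
    					"container %s in pod %s restarted %d time(s)",
    					cs.Name, pod.Name, cs.RestartCount))
    			}
    		}
    	}
    	return crashing, nil
    }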
TestCreateCluster
39m35.5s
hypershift_framework.go:396: Successfully created hostedcluster e2e-clusters-bntsh/create-cluster-8d27n in 2m43s
hypershift_framework.go:119: Summarizing unexpected conditions for HostedCluster create-cluster-8d27n
util.go:2687: Successfully waited for HostedCluster e2e-clusters-bntsh/create-cluster-8d27n to have valid conditions in 75ms
hypershift_framework.go:239: skipping postTeardown()
hypershift_framework.go:220: skipping teardown, already called
TestCreateCluster/Main
6m43.1s
util.go:259: Successfully waited for kubeconfig to be published for HostedCluster e2e-clusters-bntsh/create-cluster-8d27n in 50ms
util.go:276: Successfully waited for kubeconfig secret to have data in 25ms
util.go:330: Successfully waited for a successful connection to the guest API server in 25ms
create_cluster_test.go:1850: fetching mgmt kubeconfig
util.go:259: Successfully waited for kubeconfig to be published for HostedCluster e2e-clusters-bntsh/create-cluster-8d27n in 50ms
util.go:276: Successfully waited for kubeconfig secret to have data in 50ms
TestCreateCluster/Main/EnsureGlobalPullSecret
1m0.74s
util.go:259: Successfully waited for kubeconfig to be published for HostedCluster e2e-clusters-bntsh/create-cluster-8d27n in 50ms
util.go:276: Successfully waited for kubeconfig secret to have data in 25ms
util.go:330: Successfully waited for a successful connection to the guest API server in 25ms
util.go:1869: Timed out after 60.000s. should be able to pull other restricted images
    Expected success, but got an error:
        <*errors.errorString | 0xc0018ef9d0>:
        failed to get metadata for restricted image: failed to obtain root manifest for quay.io/hypershift/sleep:1.2.0: unauthorized: access to the requested resource is not authorized
        {
            s: "failed to get metadata for restricted image: failed to obtain root manifest for quay.io/hypershift/sleep:1.2.0: unauthorized: access to the requested resource is not authorized",
        }
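
Note: the unauthorized manifest fetch for quay.io/hypershift/sleep:1.2.0 points at missing or stale credentials for quay.io in the merged global pull secret. An illustrative check, assuming a .dockerconfigjson payload; hasAuthFor is a hypothetical helper, not part of the test suite:

    // Sketch: confirm a .dockerconfigjson blob carries an auth entry for a registry host.
    package pullsecret

    import (
    	"encoding/json"
    	"fmt"
    )

    type dockerConfigJSON struct {
    	Auths map[string]struct {
    		Auth string `json:"auth"`
    	} `json:"auths"`
    }

    func hasAuthFor(dockerConfig []byte, registry string) (bool, error) {
    	var cfg dockerConfigJSON
    	if err := json.Unmarshal(dockerConfig, &cfg); err != nil {
    		return false, fmt.Errorf("parsing .dockerconfigjson: %w", err)
    	}
    	entry, ok := cfg.Auths[registry]
    	return ok && entry.Auth != "", nil
    }

    // Example (hypothetical usage): hasAuthFor(secret.Data[".dockerconfigjson"], "quay.io")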
TestCreateCluster/Main/EnsureGlobalPullSecret/Check_if_we_can_pull_other_restricted_images,_should_succeed
1m0s
testing.go:1679: test executed panic(nil) or runtime.Goexit: subtest may have called FailNow on a parent test
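
Note: this testing.go message usually means the subtest's goroutine exited through the parent's *testing.T (Fatal/FailNow called on a captured outer t) rather than failing itself; here the timeout above was reported on the parent EnsureGlobalPullSecret test. A minimal reproduction of that Go testing pitfall, unrelated to the repository's actual code:

    package example

    import "testing"

    func TestParent(t *testing.T) {
    	t.Run("child", func(sub *testing.T) {
    		// BUG: failing through the captured parent `t` instead of `sub`
    		// yields "subtest may have called FailNow on a parent test".
    		t.Fatal("boom")
    	})
    }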
TestNodePool
0s
TestNodePool/HostedCluster0
1h6m23.07s
hypershift_framework.go:396: Successfully created hostedcluster e2e-clusters-dthq2/node-pool-sbg2h in 2m41s
hypershift_framework.go:119: Summarizing unexpected conditions for HostedCluster node-pool-sbg2h
util.go:2687: Failed to wait for HostedCluster e2e-clusters-dthq2/node-pool-sbg2h to have valid conditions in 2s: context deadline exceeded
eventually.go:224: observed *v1beta1.HostedCluster e2e-clusters-dthq2/node-pool-sbg2h invalid at RV 61979 after 2s:
eventually.go:227: - incorrect condition: wanted ClusterVersionSucceeding=False, got ClusterVersionSucceeding=True: FromClusterVersion
eventually.go:227: - incorrect condition: wanted ClusterVersionAvailable=False, got ClusterVersionAvailable=True: FromClusterVersion(Done applying 4.21.0-0.ci-2025-09-15-163201-test-ci-op-9zngsz7t-latest)
eventually.go:227: - incorrect condition: wanted ClusterVersionProgressing=True, got ClusterVersionProgressing=False: FromClusterVersion(Cluster version is 4.21.0-0.ci-2025-09-15-163201-test-ci-op-9zngsz7t-latest)
hypershift_framework.go:239: skipping postTeardown()
hypershift_framework.go:220: skipping teardown, already called
TestNodePool/HostedCluster0/Main
180ms
util.go:259: Successfully waited for kubeconfig to be published for HostedCluster e2e-clusters-dthq2/node-pool-sbg2h in 75ms
util.go:276: Successfully waited for kubeconfig secret to have data in 25ms
util.go:330: Successfully waited for a successful connection to the guest API server in 50ms
TestNodePool/HostedCluster0/Main/TestNTOMachineConfigAppliedInPlace
35m17.45s
nodepool_nto_machineconfig_test.go:68: Starting test NTOMachineConfigRolloutTest
util.go:513: Successfully waited for 2 nodes to become ready for NodePool e2e-clusters-dthq2/node-pool-sbg2h-test-ntomachineconfig-inplace in 14m42.125s
nodepool_test.go:354: Successfully waited for NodePool e2e-clusters-dthq2/node-pool-sbg2h-test-ntomachineconfig-inplace to have correct status in 25ms
util.go:427: Successfully waited for NodePool e2e-clusters-dthq2/node-pool-sbg2h-test-ntomachineconfig-inplace to start config update in 15.05s
util.go:443: Successfully waited for NodePool e2e-clusters-dthq2/node-pool-sbg2h-test-ntomachineconfig-inplace to finish config update in 5m20.025s
nodepool_machineconfig_test.go:166: Failed to wait for all pods in the DaemonSet kube-system/node-pool-sbg2h-test-ntomachineconfig-inplace to be ready in 15m0s: context deadline exceeded
eventually.go:383: observed invalid **v1.Pod state after 15m0s
eventually.go:400: - observed **v1.Pod collection invalid: expected 2 Pods, got 1
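
Note: the DaemonSet never reached full readiness: 2 pods expected, only 1 ready after 15m. A hedged sketch of that kind of readiness wait using client-go DaemonSet status fields; waitForDaemonSetReady is an illustrative name, not the helper in nodepool_machineconfig_test.go:

    // Sketch: poll a DaemonSet until the number of ready pods matches the number scheduled.
    package dswait

    import (
    	"context"
    	"fmt"
    	"time"

    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/apimachinery/pkg/util/wait"
    	"k8s.io/client-go/kubernetes"
    )

    func waitForDaemonSetReady(ctx context.Context, c kubernetes.Interface, namespace, name string) error {
    	return wait.PollUntilContextTimeout(ctx, 15*time.Second, 15*time.Minute, true,
    		func(ctx context.Context) (bool, error) {
    			ds, err := c.AppsV1().DaemonSets(namespace).Get(ctx, name, metav1.GetOptions{})
    			if err != nil {
    				return false, nil // tolerate transient errors and keep polling
    			}
    			// The failure above is exactly this gap: 2 desired, 1 ready.
    			fmt.Printf("%s/%s: %d/%d pods ready\n", namespace, name,
    				ds.Status.NumberReady, ds.Status.DesiredNumberScheduled)
    			return ds.Status.DesiredNumberScheduled > 0 &&
    				ds.Status.NumberReady == ds.Status.DesiredNumberScheduled, nil
    		})
    }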
TestNodePool/HostedCluster0/Main/TestNodePoolInPlaceUpgrade
39m28.96s
nodepool_upgrade_test.go:100: starting test NodePoolUpgradeTest
util.go:513: Successfully waited for 1 nodes to become ready for NodePool e2e-clusters-dthq2/node-pool-sbg2h-test-inplaceupgrade in 19m24.075s
nodepool_test.go:354: Successfully waited for NodePool e2e-clusters-dthq2/node-pool-sbg2h-test-inplaceupgrade to have correct status in 50ms
nodepool_upgrade_test.go:160: Validating all Nodes have the synced labels and taints
nodepool_upgrade_test.go:163: Successfully waited for NodePool e2e-clusters-dthq2/node-pool-sbg2h-test-inplaceupgrade to have version 4.21.0-0.ci-2025-09-13-184910 in 25ms
nodepool_upgrade_test.go:180: Updating NodePool image. Image: registry.build01.ci.openshift.org/ci-op-9zngsz7t/release@sha256:179be2c975efcc334cb9b584428fc4a9df5f00e6b45144359cb207e0e2c6a3b0
nodepool_upgrade_test.go:187: Successfully waited for NodePool e2e-clusters-dthq2/node-pool-sbg2h-test-inplaceupgrade to start the upgrade in 3.05s
eventually.go:104: Failed to get *v1beta1.NodePool: client rate limiter Wait returned an error: context deadline exceeded
nodepool_upgrade_test.go:200: Failed to wait for NodePool e2e-clusters-dthq2/node-pool-sbg2h-test-inplaceupgrade to have version 4.21.0-0.ci-2025-09-15-163201-test-ci-op-9zngsz7t-latest in 20m0s: context deadline exceeded
eventually.go:224: observed *v1beta1.NodePool e2e-clusters-dthq2/node-pool-sbg2h-test-inplaceupgrade invalid at RV 71365 after 20m0s: wanted version 4.21.0-0.ci-2025-09-15-163201-test-ci-op-9zngsz7t-latest, got 4.21.0-0.ci-2025-09-13-184910
nodepool_upgrade_test.go:200: *v1beta1.NodePool e2e-clusters-dthq2/node-pool-sbg2h-test-inplaceupgrade conditions:
nodepool_upgrade_test.go:200: AutoscalingEnabled=False: AsExpected
nodepool_upgrade_test.go:200: UpdateManagementEnabled=True: AsExpected
nodepool_upgrade_test.go:200: ReconciliationActive=True: ReconciliationActive(Reconciliation active on resource)
nodepool_upgrade_test.go:200: ValidMachineConfig=True: AsExpected
nodepool_upgrade_test.go:200: UpdatingConfig=False: AsExpected
nodepool_upgrade_test.go:200: UpdatingVersion=False: InplaceUpgradeFailed(Node node-pool-sbg2h-test-inplaceupgrade-g7tdm in nodepool degraded: disk validation failed: content mismatch for file "/var/lib/kubelet/config.json")
nodepool_upgrade_test.go:200: ValidGeneratedPayload=True: AsExpected(Payload generated successfully)
nodepool_upgrade_test.go:200: ValidReleaseImage=True: AsExpected(Using release image: registry.build01.ci.openshift.org/ci-op-9zngsz7t/release@sha256:179be2c975efcc334cb9b584428fc4a9df5f00e6b45144359cb207e0e2c6a3b0)
nodepool_upgrade_test.go:200: ValidArchPlatform=True: AsExpected
nodepool_upgrade_test.go:200: ReachedIgnitionEndpoint=True: AsExpected
nodepool_upgrade_test.go:200: AllMachinesReady=True: AsExpected(All is well)
nodepool_upgrade_test.go:200: AllNodesHealthy=True: AsExpected(All is well)
nodepool_upgrade_test.go:200: ValidPlatformConfig=True: AsExpected(All is well)
nodepool_upgrade_test.go:200: ValidTuningConfig=True: AsExpected
nodepool_upgrade_test.go:200: UpdatingPlatformMachineTemplate=False: AsExpected
nodepool_upgrade_test.go:200: AutorepairEnabled=False: AsExpected
nodepool_upgrade_test.go:200: Ready=True: AsExpected
TestNodePool/HostedCluster0/Main/TestNodepoolMachineconfigGetsRolledout
40m3.88s
nodepool_machineconfig_test.go:55: Starting test NodePoolMachineconfigRolloutTest
util.go:513: Successfully waited for 1 nodes to become ready for NodePool e2e-clusters-dthq2/node-pool-sbg2h-test-machineconfig in 14m45.15s
nodepool_test.go:354: Successfully waited for NodePool e2e-clusters-dthq2/node-pool-sbg2h-test-machineconfig to have correct status in 3.05s
util.go:427: Successfully waited for NodePool e2e-clusters-dthq2/node-pool-sbg2h-test-machineconfig to start config update in 15.05s
util.go:443: Successfully waited for NodePool e2e-clusters-dthq2/node-pool-sbg2h-test-machineconfig to finish config update in 15m0.05s
nodepool_machineconfig_test.go:166: Successfully waited for all pods in the DaemonSet kube-system/machineconfig-update-checker-replace to be ready in 25ms
util.go:513: Successfully waited for 1 nodes to become ready for NodePool e2e-clusters-dthq2/node-pool-sbg2h-test-machineconfig in 25ms
eventually.go:104: Failed to get *v1beta1.NodePool: client rate limiter Wait returned an error: rate: Wait(n=1) would exceed context deadline
nodepool_test.go:354: Failed to wait for NodePool e2e-clusters-dthq2/node-pool-sbg2h-test-machineconfig to have correct status in 10m0s: context deadline exceeded
eventually.go:224: observed *v1beta1.NodePool e2e-clusters-dthq2/node-pool-sbg2h-test-machineconfig invalid at RV 73254 after 10m0s: incorrect condition: wanted ReachedIgnitionEndpoint=True, got ReachedIgnitionEndpoint=False: ignitionNotReached
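
Note: the final assertion here is a condition check: ReachedIgnitionEndpoint stayed False (ignitionNotReached) for the NodePool. In generic form such a check reads .status.conditions off the NodePool object; the sketch below uses the dynamic client and assumes the hypershift.openshift.io/v1beta1 nodepools resource, whereas the e2e suite uses its own typed clients and eventually helpers:

    // Sketch: read the status of a named condition from a NodePool via the dynamic client.
    package condcheck

    import (
    	"context"
    	"fmt"

    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/apimachinery/pkg/apis/meta/v1/unstructured"
    	"k8s.io/apimachinery/pkg/runtime/schema"
    	"k8s.io/client-go/dynamic"
    )

    var nodePoolGVR = schema.GroupVersionResource{
    	Group:    "hypershift.openshift.io",
    	Version:  "v1beta1",
    	Resource: "nodepools",
    }

    // conditionStatus returns the status string of the named condition, or an error if absent.
    func conditionStatus(ctx context.Context, dc dynamic.Interface, namespace, name, condType string) (string, error) {
    	np, err := dc.Resource(nodePoolGVR).Namespace(namespace).Get(ctx, name, metav1.GetOptions{})
    	if err != nil {
    		return "", err
    	}
    	conds, _, err := unstructured.NestedSlice(np.Object, "status", "conditions")
    	if err != nil {
    		return "", err
    	}
    	for _, c := range conds {
    		cond, ok := c.(map[string]interface{})
    		if !ok {
    			continue
    		}
    		if cond["type"] == condType {
    			status, _ := cond["status"].(string)
    			return status, nil
    		}
    	}
    	return "", fmt.Errorf("condition %s not found", condType)
    }

    // Example (hypothetical usage):
    //   conditionStatus(ctx, dc, "e2e-clusters-dthq2", "node-pool-sbg2h-test-machineconfig", "ReachedIgnitionEndpoint")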