Failed Tests
hypershift_framework.go:396: Successfully created hostedcluster e2e-clusters-s5mnq/create-cluster-lh7dx in 2m25s
hypershift_framework.go:119: Summarizing unexpected conditions for HostedCluster create-cluster-lh7dx
util.go:2687: Successfully waited for HostedCluster e2e-clusters-s5mnq/create-cluster-lh7dx to have valid conditions in 50ms
hypershift_framework.go:239: skipping postTeardown()
hypershift_framework.go:220: skipping teardown, already called
util.go:259: Successfully waited for kubeconfig to be published for HostedCluster e2e-clusters-s5mnq/create-cluster-lh7dx in 75ms
util.go:276: Successfully waited for kubeconfig secret to have data in 25ms
util.go:330: Successfully waited for a successful connection to the guest API server in 25ms
create_cluster_test.go:1850: fetching mgmt kubeconfig
util.go:259: Successfully waited for kubeconfig to be published for HostedCluster e2e-clusters-s5mnq/create-cluster-lh7dx in 50ms
util.go:276: Successfully waited for kubeconfig secret to have data in 25ms
util.go:259: Successfully waited for kubeconfig to be published for HostedCluster e2e-clusters-s5mnq/create-cluster-lh7dx in 50ms
util.go:276: Successfully waited for kubeconfig secret to have data in 25ms
util.go:330: Successfully waited for a successful connection to the guest API server in 25ms
util.go:1853:
    Timed out after 30.001s.
    global-pull-secret secret is not updated
    Expected success, but got an error:
        <*errors.errorString | 0xc00330c4c0>:
        global-pull-secret secret is equal to the old global-pull-secret secret, should be different
        {
            s: "global-pull-secret secret is equal to the old global-pull-secret secret, should be different",
        }
testing.go:1679: test executed panic(nil) or runtime.Goexit: subtest may have called FailNow on a parent test
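The timeout above comes from a 30s poll that expects the global-pull-secret secret in the guest cluster to diverge from a snapshot taken before the update; the assertion fails because the secret never changed. The `testing.go:1679` panic line typically just means the failing assertion called FailNow against a parent test's *testing.T, so the subtest aborts without its own summary. Below is a minimal sketch of that style of check; the helper name, secret key, and use of the `.dockerconfigjson` field are assumptions, not the test's actual code:

```go
// Hypothetical sketch of a "secret must change" poll like the one that timed
// out above. Names (waitForSecretToChange, key) are illustrative only.
package sketch

import (
	"bytes"
	"context"
	"time"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/types"
	"k8s.io/apimachinery/pkg/util/wait"
	crclient "sigs.k8s.io/controller-runtime/pkg/client"
)

// waitForSecretToChange polls until the named secret's pull-secret payload
// differs from oldData, mirroring the "should be different" assertion above.
func waitForSecretToChange(ctx context.Context, c crclient.Client, key types.NamespacedName, oldData []byte) error {
	return wait.PollUntilContextTimeout(ctx, time.Second, 30*time.Second, true, func(ctx context.Context) (bool, error) {
		var s corev1.Secret
		if err := c.Get(ctx, key, &s); err != nil {
			return false, nil // treat read errors as transient and keep polling
		}
		if bytes.Equal(s.Data[corev1.DockerConfigJsonKey], oldData) {
			// Still equal to the old secret: the condition in the log's error string.
			return false, nil
		}
		return true, nil
	})
}
```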
hypershift_framework.go:396: Successfully created hostedcluster e2e-clusters-j8wbn/node-pool-npvsp in 2m36s
hypershift_framework.go:119: Summarizing unexpected conditions for HostedCluster node-pool-npvsp
util.go:2687: Failed to wait for HostedCluster e2e-clusters-j8wbn/node-pool-npvsp to have valid conditions in 2s: context deadline exceeded
eventually.go:224: observed *v1beta1.HostedCluster e2e-clusters-j8wbn/node-pool-npvsp invalid at RV 65412 after 2s:
eventually.go:227: - incorrect condition: wanted ClusterVersionAvailable=False, got ClusterVersionAvailable=True: FromClusterVersion(Done applying 4.21.0-0.ci-2025-09-15-080955-test-ci-op-wndd2i9m-latest)
eventually.go:227: - incorrect condition: wanted ClusterVersionSucceeding=False, got ClusterVersionSucceeding=True: FromClusterVersion
eventually.go:227: - incorrect condition: wanted ClusterVersionProgressing=True, got ClusterVersionProgressing=False: FromClusterVersion(Cluster version is 4.21.0-0.ci-2025-09-15-080955-test-ci-op-wndd2i9m-latest)
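Each "incorrect condition" line above is a mismatch between a wanted (type, status) pair and what the HostedCluster actually reports: the summary wanted an upgrade still in flight (Progressing=True, Available=False, Succeeding=False), but the cluster had already finished applying the target version. A minimal sketch of that comparison over standard metav1.Condition values, with a hypothetical checkConditions helper standing in for the suite's eventually.go logic:

```go
// Hypothetical sketch of the condition check behind the "incorrect condition"
// lines above; the error format mirrors the log output.
package sketch

import (
	"fmt"

	"k8s.io/apimachinery/pkg/api/meta"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// checkConditions returns one error per mismatch, in the style of
// "incorrect condition: wanted ClusterVersionAvailable=False, got ClusterVersionAvailable=True".
func checkConditions(conds []metav1.Condition, wanted map[string]metav1.ConditionStatus) []error {
	var errs []error
	for condType, want := range wanted {
		got := meta.FindStatusCondition(conds, condType)
		if got == nil {
			errs = append(errs, fmt.Errorf("missing condition: wanted %s=%s", condType, want))
			continue
		}
		if got.Status != want {
			errs = append(errs, fmt.Errorf("incorrect condition: wanted %s=%s, got %s=%s: %s",
				condType, want, condType, got.Status, got.Reason))
		}
	}
	return errs
}
```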
hypershift_framework.go:239: skipping postTeardown()
hypershift_framework.go:220: skipping teardown, already called
util.go:259: Successfully waited for kubeconfig to be published for HostedCluster e2e-clusters-j8wbn/node-pool-npvsp in 50ms
util.go:276: Successfully waited for kubeconfig secret to have data in 25ms
util.go:330: Successfully waited for a successful connection to the guest API server in 125ms
nodepool_nto_machineconfig_test.go:68: Starting test NTOMachineConfigRolloutTest
util.go:513: Successfully waited for 2 nodes to become ready for NodePool e2e-clusters-j8wbn/node-pool-npvsp-test-ntomachineconfig-inplace in 11m21.075s
nodepool_test.go:354: Successfully waited for NodePool e2e-clusters-j8wbn/node-pool-npvsp-test-ntomachineconfig-inplace to have correct status in 25ms
util.go:427: Successfully waited for NodePool e2e-clusters-j8wbn/node-pool-npvsp-test-ntomachineconfig-inplace to start config update in 15.025s
util.go:443: Successfully waited for NodePool e2e-clusters-j8wbn/node-pool-npvsp-test-ntomachineconfig-inplace to finish config update in 12m40.025s
nodepool_machineconfig_test.go:166: Failed to wait for all pods in the DaemonSet kube-system/node-pool-npvsp-test-ntomachineconfig-inplace to be ready in 15m0s: context deadline exceeded
eventually.go:383: observed invalid **v1.Pod state after 15m0s
eventually.go:400: - observed **v1.Pod collection invalid: expected 2 Pods, got 1
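The 15m timeout above is a wait for every pod of the test DaemonSet to be ready on the in-place NodePool; with 2 nodes, only 1 ready pod was ever observed, consistent with the second node being stuck mid-update. A minimal sketch of that kind of readiness wait, assuming a hypothetical waitForDaemonSetReady helper driven by the DaemonSet's own status counters:

```go
// Hypothetical sketch of a DaemonSet readiness wait like the one that timed
// out above: compare desired vs. ready pod counts until they match.
package sketch

import (
	"context"
	"time"

	appsv1 "k8s.io/api/apps/v1"
	"k8s.io/apimachinery/pkg/types"
	"k8s.io/apimachinery/pkg/util/wait"
	crclient "sigs.k8s.io/controller-runtime/pkg/client"
)

// waitForDaemonSetReady polls until every desired pod of the DaemonSet is
// ready, or the deadline elapses (the log shows a 15m0s deadline).
func waitForDaemonSetReady(ctx context.Context, c crclient.Client, key types.NamespacedName) error {
	return wait.PollUntilContextTimeout(ctx, 5*time.Second, 15*time.Minute, true, func(ctx context.Context) (bool, error) {
		var ds appsv1.DaemonSet
		if err := c.Get(ctx, key, &ds); err != nil {
			return false, nil // transient read error: keep polling
		}
		desired, ready := ds.Status.DesiredNumberScheduled, ds.Status.NumberReady
		// "expected 2 Pods, got 1" corresponds to ready lagging desired here.
		return desired > 0 && ready == desired, nil
	})
}
```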
nodepool_upgrade_test.go:100: starting test NodePoolUpgradeTest
util.go:513: Successfully waited for 1 nodes to become ready for NodePool e2e-clusters-j8wbn/node-pool-npvsp-test-inplaceupgrade in 11m12.075s
nodepool_test.go:354: Successfully waited for NodePool e2e-clusters-j8wbn/node-pool-npvsp-test-inplaceupgrade to have correct status in 25ms
nodepool_upgrade_test.go:160: Validating all Nodes have the synced labels and taints
nodepool_upgrade_test.go:163: Successfully waited for NodePool e2e-clusters-j8wbn/node-pool-npvsp-test-inplaceupgrade to have version 4.21.0-0.ci-2025-09-13-184910 in 25ms
nodepool_upgrade_test.go:180: Updating NodePool image. Image: registry.build01.ci.openshift.org/ci-op-wndd2i9m/release@sha256:40b60c3ada76948728c7cb64dadd95d9ca4dd380d4804e73468e6e98abdb19d7
nodepool_upgrade_test.go:187: Successfully waited for NodePool e2e-clusters-j8wbn/node-pool-npvsp-test-inplaceupgrade to start the upgrade in 3.025s
nodepool_upgrade_test.go:200: Failed to wait for NodePool e2e-clusters-j8wbn/node-pool-npvsp-test-inplaceupgrade to have version 4.21.0-0.ci-2025-09-15-080955-test-ci-op-wndd2i9m-latest in 20m0s: context deadline exceeded
eventually.go:224: observed *v1beta1.NodePool e2e-clusters-j8wbn/node-pool-npvsp-test-inplaceupgrade invalid at RV 62096 after 20m0s: wanted version 4.21.0-0.ci-2025-09-15-080955-test-ci-op-wndd2i9m-latest, got 4.21.0-0.ci-2025-09-13-184910
nodepool_upgrade_test.go:200: *v1beta1.NodePool e2e-clusters-j8wbn/node-pool-npvsp-test-inplaceupgrade conditions:
nodepool_upgrade_test.go:200: ReachedIgnitionEndpoint=True: AsExpected
nodepool_upgrade_test.go:200: UpdateManagementEnabled=True: AsExpected
nodepool_upgrade_test.go:200: ValidMachineConfig=True: AsExpected
nodepool_upgrade_test.go:200: ValidGeneratedPayload=True: AsExpected(Payload generated successfully)
nodepool_upgrade_test.go:200: AllMachinesReady=True: AsExpected(All is well)
nodepool_upgrade_test.go:200: AllNodesHealthy=True: AsExpected(All is well)
nodepool_upgrade_test.go:200: ValidPlatformConfig=True: AsExpected(All is well)
nodepool_upgrade_test.go:200: AutoscalingEnabled=False: AsExpected
nodepool_upgrade_test.go:200: ValidReleaseImage=True: AsExpected(Using release image: registry.build01.ci.openshift.org/ci-op-wndd2i9m/release@sha256:40b60c3ada76948728c7cb64dadd95d9ca4dd380d4804e73468e6e98abdb19d7)
nodepool_upgrade_test.go:200: ValidArchPlatform=True: AsExpected
nodepool_upgrade_test.go:200: ReconciliationActive=True: ReconciliationActive(Reconciliation active on resource)
nodepool_upgrade_test.go:200: UpdatingConfig=False: AsExpected
nodepool_upgrade_test.go:200: UpdatingVersion=False: InplaceUpgradeFailed(Node node-pool-npvsp-test-inplaceupgrade-lmk4b in nodepool degraded: disk validation failed: content mismatch for file "/var/lib/kubelet/config.json")
nodepool_upgrade_test.go:200: ValidTuningConfig=True: AsExpected
nodepool_upgrade_test.go:200: UpdatingPlatformMachineTemplate=False: AsExpected
nodepool_upgrade_test.go:200: AutorepairEnabled=False: AsExpected
nodepool_upgrade_test.go:200: Ready=True: AsExpected
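The decisive line in the condition dump is UpdatingVersion=False with reason InplaceUpgradeFailed: the node degraded because the on-disk content of /var/lib/kubelet/config.json no longer matched what the rendered config expected, so the in-place upgrade halted and the NodePool stayed at 4.21.0-0.ci-2025-09-13-184910. A minimal sketch of the kind of pre-reboot disk validation the message implies; the helper name, file map, and error wording are modeled on the log, not taken from the HyperShift source:

```go
// Hypothetical sketch of on-disk config validation during an in-place upgrade:
// each file in the rendered config is compared byte-for-byte against the node's
// disk, and any mismatch degrades the upgrade with an error like the one above.
package sketch

import (
	"bytes"
	"fmt"
	"os"
)

// validateFilesOnDisk returns an error for the first file whose on-disk
// content differs from the expected rendered content.
func validateFilesOnDisk(expected map[string][]byte) error {
	for path, want := range expected {
		got, err := os.ReadFile(path)
		if err != nil {
			return fmt.Errorf("disk validation failed: cannot read %q: %w", path, err)
		}
		if !bytes.Equal(got, want) {
			// Mirrors: disk validation failed: content mismatch for file "/var/lib/kubelet/config.json"
			return fmt.Errorf("disk validation failed: content mismatch for file %q", path)
		}
	}
	return nil
}
```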
control_plane_upgrade_test.go:27: Starting control plane upgrade test. FromImage: registry.build01.ci.openshift.org/ci-op-wndd2i9m/release@sha256:d465e7f7f58cc95ade3ccf9e146c636eb0d7e0ebb3fab1ff67f43f6e346c8c97, toImage: registry.build01.ci.openshift.org/ci-op-wndd2i9m/release@sha256:40b60c3ada76948728c7cb64dadd95d9ca4dd380d4804e73468e6e98abdb19d7
hypershift_framework.go:396: Successfully created hostedcluster e2e-clusters-fdmw2/control-plane-upgrade-v5hcg in 2m25s
hypershift_framework.go:119: Summarizing unexpected conditions for HostedCluster control-plane-upgrade-v5hcg
util.go:2687: Successfully waited for HostedCluster e2e-clusters-fdmw2/control-plane-upgrade-v5hcg to have valid conditions in 50ms
hypershift_framework.go:239: skipping postTeardown()
hypershift_framework.go:220: skipping teardown, already called
util.go:259: Successfully waited for kubeconfig to be published for HostedCluster e2e-clusters-fdmw2/control-plane-upgrade-v5hcg in 50ms
util.go:276: Successfully waited for kubeconfig secret to have data in 25ms
util.go:330: Successfully waited for a successful connection to the guest API server in 25ms
control_plane_upgrade_test.go:49: Updating cluster image. Image: registry.build01.ci.openshift.org/ci-op-wndd2i9m/release@sha256:40b60c3ada76948728c7cb64dadd95d9ca4dd380d4804e73468e6e98abdb19d7
util.go:546: Successfully waited for HostedCluster e2e-clusters-fdmw2/control-plane-upgrade-v5hcg to rollout in 45.075s
util.go:259: Successfully waited for kubeconfig to be published for HostedCluster e2e-clusters-fdmw2/control-plane-upgrade-v5hcg in 50ms
util.go:276: Successfully waited for kubeconfig secret to have data in 25ms
util.go:755: Container cluster-network-operator in pod cluster-network-operator-db5b57485-q6gcz has a restartCount > 0 (1)
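The final line is a post-upgrade audit that flags any container with a nonzero restart count; a single restart of cluster-network-operator during a control-plane rollout may be benign, but the test surfaces it for triage. A minimal sketch of such an audit, assuming a hypothetical reportContainerRestarts helper over pod container statuses:

```go
// Hypothetical sketch of the restart-count audit behind the final log line:
// scan every container status in a namespace and flag any container that has
// restarted. The namespace argument and output wording are assumptions.
package sketch

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	crclient "sigs.k8s.io/controller-runtime/pkg/client"
)

// reportContainerRestarts prints one line per restarted container, matching
// the shape of the cluster-network-operator message above.
func reportContainerRestarts(ctx context.Context, c crclient.Client, namespace string) error {
	var pods corev1.PodList
	if err := c.List(ctx, &pods, crclient.InNamespace(namespace)); err != nil {
		return err
	}
	for _, pod := range pods.Items {
		for _, cs := range pod.Status.ContainerStatuses {
			if cs.RestartCount > 0 {
				fmt.Printf("Container %s in pod %s has a restartCount > 0 (%d)\n",
					cs.Name, pod.Name, cs.RestartCount)
			}
		}
	}
	return nil
}
```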