PR #7590 - 02-27 15:12

Job: hypershift
FAILURE

Test Summary

Total Tests: 437
Passed: 405
Failed: 10
Skipped: 22
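A quick arithmetic sanity check of the summary counts above (a sketch; the executed-test count and pass rate are derived here, not reported by the job):

```python
# Counts taken from the Test Summary of this job run.
total, passed, failed, skipped = 437, 405, 10, 22

# The three outcome buckets should account for every test in the run.
assert passed + failed + skipped == total

executed = passed + failed     # skipped tests never ran
pass_rate = passed / executed  # pass rate among executed tests only

print(f"executed={executed} pass_rate={pass_rate:.1%}")
# -> executed=415 pass_rate=97.6%
```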

Failed Tests

TestCreateClusterPrivateWithRouteKAS
50m31.14s
hypershift_framework.go:459: Successfully created hostedcluster e2e-clusters-qkkdp/private-f4fph in 13s
hypershift_framework.go:128: Summarizing unexpected conditions for HostedCluster private-f4fph
util.go:2974: Failed to wait for HostedCluster e2e-clusters-qkkdp/private-f4fph to have valid conditions in 2s: context deadline exceeded
eventually.go:224: observed *v1beta1.HostedCluster e2e-clusters-qkkdp/private-f4fph invalid at RV 521126 after 2s:
eventually.go:227: - incorrect condition: wanted ClusterVersionSucceeding=True, got ClusterVersionSucceeding=False: ClusterOperatorNotAvailable(Cluster operator console is not available)
eventually.go:227: - incorrect condition: wanted ClusterVersionAvailable=True, got ClusterVersionAvailable=False: FromClusterVersion
eventually.go:227: - incorrect condition: wanted Degraded=False, got Degraded=True: UnavailableReplicas(hosted-cluster-config-operator deployment has 1 unavailable replicas)
eventually.go:227: - incorrect condition: wanted ClusterVersionProgressing=False, got ClusterVersionProgressing=True: ClusterOperatorNotAvailable(Unable to apply 4.22.0-0.ci-2026-02-27-153137-test-ci-op-6jp370vd-latest: the cluster operator console is not available)
hypershift_framework.go:278: skipping postTeardown()
hypershift_framework.go:256: skipping teardown, already called
TestCreateClusterPrivateWithRouteKAS/ValidateHostedCluster
43m42.02s
util.go:291: Successfully waited for kubeconfig to be published for HostedCluster e2e-clusters-qkkdp/private-f4fph in 2m3s
util.go:308: Successfully waited for kubeconfig secret to have data in 0s
util.go:695: Successfully waited for NodePools for HostedCluster e2e-clusters-qkkdp/private-f4fph to have all of their desired nodes in 11m39s
eventually.go:104: Failed to get *v1beta1.HostedCluster: client rate limiter Wait returned an error: rate: Wait(n=1) would exceed context deadline
util.go:598: Failed to wait for HostedCluster e2e-clusters-qkkdp/private-f4fph to rollout in 30m0s: context deadline exceeded
eventually.go:224: observed *v1beta1.HostedCluster e2e-clusters-qkkdp/private-f4fph invalid at RV 518666 after 30m0s: wanted most recent version history to have state Completed, has state Partial
util.go:598: *v1beta1.HostedCluster e2e-clusters-qkkdp/private-f4fph conditions:
util.go:598: ValidAWSIdentityProvider=True: AsExpected(All is well)
util.go:598: ClusterVersionSucceeding=False: ClusterOperatorNotAvailable(Cluster operator console is not available)
util.go:598: ClusterVersionProgressing=True: ClusterOperatorNotAvailable(Unable to apply 4.22.0-0.ci-2026-02-27-153137-test-ci-op-6jp370vd-latest: the cluster operator console is not available)
util.go:598: ClusterVersionReleaseAccepted=True: PayloadLoaded(Payload loaded version="4.22.0-0.ci-2026-02-27-153137-test-ci-op-6jp370vd-latest" image="registry.build01.ci.openshift.org/ci-op-6jp370vd/release@sha256:86033270e5e605f7eb9e193bb41f81ca8e1572286490d67d70892f04033c0193" architecture="amd64")
util.go:598: ClusterVersionUpgradeable=False: UpdateInProgress(An update is already in progress and the details are in the Progressing condition)
util.go:598: ClusterVersionAvailable=False: FromClusterVersion
util.go:598: Degraded=True: UnavailableReplicas([hosted-cluster-config-operator deployment has 1 unavailable replicas, kube-controller-manager deployment has 1 unavailable replicas])
util.go:598: EtcdAvailable=True: QuorumAvailable
util.go:598: KubeAPIServerAvailable=True: AsExpected(Kube APIServer deployment is available)
util.go:598: InfrastructureReady=True: AsExpected(All is well)
util.go:598: ExternalDNSReachable=Unknown: StatusUnknown(External DNS is not configured)
util.go:598: ValidHostedControlPlaneConfiguration=True: AsExpected(Configuration passes validation)
util.go:598: ValidReleaseInfo=True: AsExpected(All is well)
util.go:598: ValidIDPConfiguration=True: IDPConfigurationValid(Identity provider configuration is valid)
util.go:598: HostedClusterRestoredFromBackup=Unknown: StatusUnknown(Condition not found in the HCP)
util.go:598: DataPlaneConnectionAvailable=True: AsExpected(All is well)
util.go:598: Available=True: AsExpected(The hosted control plane is available)
util.go:598: AWSEndpointAvailable=True: AWSSuccess(All is well)
util.go:598: AWSEndpointServiceAvailable=True: AWSSuccess(All is well)
util.go:598: ValidConfiguration=True: AsExpected(Configuration passes validation)
util.go:598: SupportedHostedCluster=True: AsExpected(HostedCluster is supported by operator configuration)
util.go:598: ValidProxyConfiguration=True: AsExpected(No proxy CA bundle configured)
util.go:598: IgnitionEndpointAvailable=True: AsExpected(Ignition server deployment is available)
util.go:598: ReconciliationActive=True: AsExpected(Reconciliation active on resource)
util.go:598: ValidReleaseImage=True: AsExpected(Release image is valid)
util.go:598: Progressing=False: AsExpected(HostedCluster is at expected version)
util.go:598: PlatformCredentialsFound=True: AsExpected(Required platform credentials are found)
util.go:598: ValidOIDCConfiguration=True: AsExpected(OIDC configuration is valid)
util.go:598: ReconciliationSucceeded=True: ReconciliatonSucceeded(Reconciliation completed successfully)
util.go:598: ValidAWSKMSConfig=Unknown: StatusUnknown(AWS KMS is not configured)
util.go:598: ClusterVersionRetrievedUpdates=False: NoChannel(The update channel has not been configured.)
util.go:598: AWSDefaultSecurityGroupCreated=True: AsExpected(All is well)
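The condition entries in these logs follow a `Name=Status: Reason(optional message)` pattern. A minimal sketch for pulling them out of a log blob when triaging (the regex and `parse_conditions` helper are illustrative, not part of the hypershift test suite, and the simple `[^)]*` message capture will truncate at the first `)` of nested parentheses):

```python
import re

# Matches entries like:
#   ClusterVersionSucceeding=False: ClusterOperatorNotAvailable(Cluster operator console is not available)
#   EtcdAvailable=True: QuorumAvailable
CONDITION_RE = re.compile(
    r"(?P<name>[A-Za-z]+)=(?P<status>True|False|Unknown):\s*"
    r"(?P<reason>[A-Za-z]+)(?:\((?P<message>[^)]*)\))?"
)

def parse_conditions(log_text: str):
    """Return (name, status, reason, message) tuples found in log_text."""
    return [
        (m.group("name"), m.group("status"), m.group("reason"), m.group("message"))
        for m in CONDITION_RE.finditer(log_text)
    ]

# Sample taken verbatim from the failure output above.
sample = (
    "util.go:598: ClusterVersionSucceeding=False: "
    "ClusterOperatorNotAvailable(Cluster operator console is not available) "
    "util.go:598: EtcdAvailable=True: QuorumAvailable"
)
for name, status, reason, message in parse_conditions(sample):
    print(name, status, reason, message)
```

This makes it easy to filter a long dump down to, say, only `False`/`Unknown` conditions when deciding which operator to look at first.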
TestNodePool
0s
TestNodePool/HostedCluster0
1h6m38.69s
hypershift_framework.go:459: Successfully created hostedcluster e2e-clusters-5ncmd/node-pool-chjcq in 9s
hypershift_framework.go:128: Summarizing unexpected conditions for HostedCluster node-pool-chjcq
util.go:2974: Failed to wait for HostedCluster e2e-clusters-5ncmd/node-pool-chjcq to have valid conditions in 2s: context deadline exceeded
eventually.go:224: observed *v1beta1.HostedCluster e2e-clusters-5ncmd/node-pool-chjcq invalid at RV 374581 after 2s:
eventually.go:227: - incorrect condition: wanted ClusterVersionSucceeding=False, got ClusterVersionSucceeding=True: FromClusterVersion
eventually.go:227: - incorrect condition: wanted ClusterVersionAvailable=False, got ClusterVersionAvailable=True: FromClusterVersion(Done applying 4.22.0-0.ci-2026-02-27-153137-test-ci-op-6jp370vd-latest)
eventually.go:227: - incorrect condition: wanted DataPlaneConnectionAvailable=Unknown, got DataPlaneConnectionAvailable=True: AsExpected(All is well)
eventually.go:227: - incorrect condition: wanted ClusterVersionProgressing=True, got ClusterVersionProgressing=False: FromClusterVersion(Cluster version is 4.22.0-0.ci-2026-02-27-153137-test-ci-op-6jp370vd-latest)
hypershift_framework.go:278: skipping postTeardown()
hypershift_framework.go:256: skipping teardown, already called
TestNodePool/HostedCluster0/Main
30ms
util.go:291: Successfully waited for kubeconfig to be published for HostedCluster e2e-clusters-5ncmd/node-pool-chjcq in 0s
util.go:308: Successfully waited for kubeconfig secret to have data in 0s
util.go:370: Successfully waited for a successful connection to the guest API server in 0s
TestNodePool/HostedCluster0/Main/TestNTOMachineConfigGetsRolledOut
15m54.08s
nodepool_nto_machineconfig_test.go:67: Starting test NTOMachineConfigRolloutTest
util.go:565: Successfully waited for 2 nodes to become ready for NodePool e2e-clusters-5ncmd/node-pool-chjcq-test-ntomachineconfig-replace in 10m48s
nodepool_test.go:404: Successfully waited for NodePool e2e-clusters-5ncmd/node-pool-chjcq-test-ntomachineconfig-replace to have correct status in 6s
util.go:481: Failed to wait for NodePool e2e-clusters-5ncmd/node-pool-chjcq-test-ntomachineconfig-replace to start config update in 5m0s: context deadline exceeded
eventually.go:224: observed *v1beta1.NodePool e2e-clusters-5ncmd/node-pool-chjcq-test-ntomachineconfig-replace invalid at RV 377720 after 5m0s: incorrect condition: wanted UpdatingConfig=True, got UpdatingConfig=False: AsExpected
util.go:481: *v1beta1.NodePool e2e-clusters-5ncmd/node-pool-chjcq-test-ntomachineconfig-replace conditions:
util.go:481: AutoscalingEnabled=False: AsExpected
util.go:481: UpdateManagementEnabled=True: AsExpected
util.go:481: ValidReleaseImage=True: AsExpected(Using release image: registry.build01.ci.openshift.org/ci-op-6jp370vd/release@sha256:86033270e5e605f7eb9e193bb41f81ca8e1572286490d67d70892f04033c0193)
util.go:481: ValidArchPlatform=True: AsExpected
util.go:481: ReconciliationActive=True: ReconciliationActive(Reconciliation active on resource)
util.go:481: SupportedVersionSkew=True: AsExpected(Release image version is valid)
util.go:481: ValidMachineConfig=True: AsExpected
util.go:481: UpdatingConfig=False: AsExpected
util.go:481: UpdatingVersion=False: AsExpected
util.go:481: ValidGeneratedPayload=True: AsExpected(Payload generated successfully)
util.go:481: ReachedIgnitionEndpoint=True: AsExpected
util.go:481: AllMachinesReady=True: AsExpected(All is well)
util.go:481: AllNodesHealthy=False: NodeConditionsFailed(Machine node-pool-chjcq-test-ntomachineconfig-replace-hbrpw-2g66g: NodeConditionsFailed Machine node-pool-chjcq-test-ntomachineconfig-replace-hbrpw-2vbg9: NodeConditionsFailed )
util.go:481: ValidPlatformConfig=True: AsExpected(All is well)
util.go:481: ValidPlatformImage=True: AsExpected(Bootstrap AMI is "ami-01095d1967818437c")
util.go:481: AWSSecurityGroupAvailable=True: AsExpected(NodePool has a default security group)
util.go:481: ValidTuningConfig=True: AsExpected
util.go:481: UpdatingPlatformMachineTemplate=False: AsExpected
util.go:481: AutorepairEnabled=False: AsExpected
util.go:481: Ready=True: AsExpected
TestNodePool/HostedCluster0/Main/TestNodepoolMachineconfigGetsRolledout
12m57.05s
nodepool_machineconfig_test.go:54: Starting test NodePoolMachineconfigRolloutTest
util.go:565: Successfully waited for 1 nodes to become ready for NodePool e2e-clusters-5ncmd/node-pool-chjcq-test-machineconfig in 7m51s
nodepool_test.go:404: Successfully waited for NodePool e2e-clusters-5ncmd/node-pool-chjcq-test-machineconfig to have correct status in 6s
util.go:481: Failed to wait for NodePool e2e-clusters-5ncmd/node-pool-chjcq-test-machineconfig to start config update in 5m0s: context deadline exceeded
eventually.go:224: observed *v1beta1.NodePool e2e-clusters-5ncmd/node-pool-chjcq-test-machineconfig invalid at RV 412733 after 5m0s: incorrect condition: wanted UpdatingConfig=True, got UpdatingConfig=False: AsExpected
util.go:481: *v1beta1.NodePool e2e-clusters-5ncmd/node-pool-chjcq-test-machineconfig conditions:
util.go:481: AutoscalingEnabled=False: AsExpected
util.go:481: UpdateManagementEnabled=True: AsExpected
util.go:481: ValidReleaseImage=True: AsExpected(Using release image: registry.build01.ci.openshift.org/ci-op-6jp370vd/release@sha256:86033270e5e605f7eb9e193bb41f81ca8e1572286490d67d70892f04033c0193)
util.go:481: ValidArchPlatform=True: AsExpected
util.go:481: ReconciliationActive=True: ReconciliationActive(Reconciliation active on resource)
util.go:481: SupportedVersionSkew=True: AsExpected(Release image version is valid)
util.go:481: ValidMachineConfig=True: AsExpected
util.go:481: UpdatingConfig=False: AsExpected
util.go:481: UpdatingVersion=False: AsExpected
util.go:481: ValidGeneratedPayload=True: AsExpected(Payload generated successfully)
util.go:481: ReachedIgnitionEndpoint=True: AsExpected
util.go:481: AllMachinesReady=True: AsExpected(All is well)
util.go:481: AllNodesHealthy=False: NodeConditionsFailed(Machine node-pool-chjcq-test-machineconfig-zq4lk-lnrb6: NodeConditionsFailed )
util.go:481: ValidPlatformConfig=True: AsExpected(All is well)
util.go:481: ValidPlatformImage=True: AsExpected(Bootstrap AMI is "ami-01095d1967818437c")
util.go:481: AWSSecurityGroupAvailable=True: AsExpected(NodePool has a default security group)
util.go:481: ValidTuningConfig=True: AsExpected
util.go:481: UpdatingPlatformMachineTemplate=False: AsExpected
util.go:481: AutorepairEnabled=False: AsExpected
util.go:481: Ready=True: AsExpected
TestNodePool/HostedCluster0/Main/TestRollingUpgrade
18m12.07s
util.go:565: Successfully waited for 2 nodes to become ready for NodePool e2e-clusters-5ncmd/node-pool-chjcq-test-rolling-upgrade in 11m15s
nodepool_test.go:404: Successfully waited for NodePool e2e-clusters-5ncmd/node-pool-chjcq-test-rolling-upgrade to have correct status in 9s
nodepool_rolling_upgrade_test.go:106: Successfully waited for NodePool e2e-clusters-5ncmd/node-pool-chjcq-test-rolling-upgrade to start the rolling upgrade in 3s
nodepool_rolling_upgrade_test.go:120: Successfully waited for NodePool e2e-clusters-5ncmd/node-pool-chjcq-test-rolling-upgrade to finish the rolling upgrade in 6m45s
nodepool_rolling_upgrade_test.go:143: Expected <string>: m5.large to equal <string>: m5.xlarge
TestUpgradeControlPlane
1h26m55.07s
control_plane_upgrade_test.go:25: Starting control plane upgrade test. FromImage: registry.build01.ci.openshift.org/ci-op-6jp370vd/release@sha256:df4246451a50606d1a7206d305a947575cc7e29f029be395a8eef38257afdab3, toImage: registry.build01.ci.openshift.org/ci-op-6jp370vd/release@sha256:86033270e5e605f7eb9e193bb41f81ca8e1572286490d67d70892f04033c0193
hypershift_framework.go:459: Successfully created hostedcluster e2e-clusters-jk2rq/control-plane-upgrade-hgt6t in 29s
hypershift_framework.go:256: skipping teardown, already called
util.go:291: Successfully waited for kubeconfig to be published for HostedCluster e2e-clusters-jk2rq/control-plane-upgrade-hgt6t in 2m39s
util.go:308: Successfully waited for kubeconfig secret to have data in 0s
eventually.go:104: Failed to get *v1.SelfSubjectReview: Post "https://api-control-plane-upgrade-hgt6t.service.ci.hypershift.devcluster.openshift.com:443/apis/authentication.k8s.io/v1/selfsubjectreviews": dial tcp: lookup api-control-plane-upgrade-hgt6t.service.ci.hypershift.devcluster.openshift.com on 172.30.0.10:53: no such host
eventually.go:104: Failed to get *v1.SelfSubjectReview: Post "https://api-control-plane-upgrade-hgt6t.service.ci.hypershift.devcluster.openshift.com:443/apis/authentication.k8s.io/v1/selfsubjectreviews": dial tcp 3.219.156.236:443: i/o timeout
eventually.go:104: Failed to get *v1.SelfSubjectReview: Post "https://api-control-plane-upgrade-hgt6t.service.ci.hypershift.devcluster.openshift.com:443/apis/authentication.k8s.io/v1/selfsubjectreviews": dial tcp 3.219.156.236:443: connect: connection refused
eventually.go:104: Failed to get *v1.SelfSubjectReview: Post "https://api-control-plane-upgrade-hgt6t.service.ci.hypershift.devcluster.openshift.com:443/apis/authentication.k8s.io/v1/selfsubjectreviews": dial tcp 52.72.137.105:443: connect: connection refused
eventually.go:104: Failed to get *v1.SelfSubjectReview: Post "https://api-control-plane-upgrade-hgt6t.service.ci.hypershift.devcluster.openshift.com:443/apis/authentication.k8s.io/v1/selfsubjectreviews": dial tcp 3.219.156.236:443: connect: connection refused
util.go:370: Successfully waited for a successful connection to the guest API server in 2m45.025s
util.go:565: Successfully waited for 2 nodes to become ready in 30m21s
util.go:598: Successfully waited for HostedCluster e2e-clusters-jk2rq/control-plane-upgrade-hgt6t to rollout in 3m33s
util.go:2974: Successfully waited for HostedCluster e2e-clusters-jk2rq/control-plane-upgrade-hgt6t to have valid conditions in 3m6s
TestUpgradeControlPlane/Teardown
21m46.06s
journals.go:245: Successfully copied machine journals to /logs/artifacts/TestUpgradeControlPlane/machine-journals
fixture.go:324: Failed to wait for infra resources in guest cluster to be deleted: operation error Resource Groups Tagging API: GetResources, context deadline exceeded
hypershift_framework.go:475: archiving /logs/artifacts/TestUpgradeControlPlane/hostedcluster-control-plane-upgrade-hgt6t to /logs/artifacts/TestUpgradeControlPlane/hostedcluster.tar.gz