PR #6745 - 11-06 09:06

Job: hypershift
FAILURE

Test Summary

Total Tests: 420
Passed: 381
Failed: 11
Skipped: 28

Failed Tests

TestAutoscaling
1h10m43.6s
hypershift_framework.go:430: Successfully created hostedcluster e2e-clusters-n5xc2/autoscaling-m996q in 53s
hypershift_framework.go:119: Summarizing unexpected conditions for HostedCluster autoscaling-m996q
util.go:2896: Successfully waited for HostedCluster e2e-clusters-n5xc2/autoscaling-m996q to have valid conditions in 0s
hypershift_framework.go:249: skipping postTeardown()
hypershift_framework.go:230: skipping teardown, already called
TestAutoscaling/Main
47m40.13s
TestAutoscaling/Main/TestAutoscalingBalancing
22m55.07s
autoscaling_test.go:161: Starting balancing scale-up test
autoscaling_test.go:180: Created additional nodepool: autoscaling-m996q-us-east-1a-additional
util.go:281: Successfully waited for kubeconfig to be published for HostedCluster e2e-clusters-n5xc2/autoscaling-m996q in 0s
util.go:298: Successfully waited for kubeconfig secret to have data in 0s
util.go:360: Successfully waited for a successful connection to the guest API server in 0s
util.go:542: Successfully waited for 2 nodes to become ready in 5m3s
autoscaling_test.go:218: Successfully waited for default nodepool autoscaling to be enabled in 0s
autoscaling_test.go:232: Successfully waited for additional nodepool autoscaling to be enabled in 0s
autoscaling_test.go:249: Successfully waited for autoscaler deployment to have autoscaling settings and be ready in 10s
autoscaling_test.go:282: Created workload. Node: ip-10-0-3-145.ec2.internal, memcapacity: 6674148Ki, workload memory request: 3417163800
autoscaling_test.go:286: Waiting for 4 nodes to become ready...
util.go:542: Successfully waited for 4 nodes to become ready in 7m42s
autoscaling_test.go:288: Successfully reached 4 nodes
eventually.go:258: Failed to get **v1beta1.NodePool: client rate limiter Wait returned an error: context deadline exceeded
autoscaling_test.go:290: Failed to wait for both nodepools (autoscaling-m996q-us-east-1a and autoscaling-m996q-us-east-1a-additional) to have 2 replicas each in 10m0s: context deadline exceeded
eventually.go:383: observed invalid **v1beta1.NodePool state after 10m0s
eventually.go:400: - observed **v1beta1.NodePool collection invalid: nodepools replicas are 1 and 3, want both 2
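The failure here is a polling timeout: the scale-up itself completed (4 nodes came ready), but the autoscaler never rebalanced the two nodepools to 2 replicas each before the 10m budget ran out. A minimal sketch of that kind of wait, assuming the hypershift v1beta1 API and a controller-runtime client; the function name and intervals are illustrative, not the framework's exact code:

```go
// Hypothetical sketch, not the framework's code: poll until every named
// NodePool reports the wanted replica count, or the context deadline hits.
package sketch

import (
	"context"
	"time"

	hyperv1 "github.com/openshift/hypershift/api/hypershift/v1beta1"
	"k8s.io/apimachinery/pkg/util/wait"
	crclient "sigs.k8s.io/controller-runtime/pkg/client"
)

func waitForNodePoolReplicas(ctx context.Context, c crclient.Client, namespace string, names []string, want int32) error {
	// The test above used a 10m budget; once it expires, every in-flight
	// client call (including the client-side rate limiter's Wait) returns
	// "context deadline exceeded".
	ctx, cancel := context.WithTimeout(ctx, 10*time.Minute)
	defer cancel()

	return wait.PollUntilContextCancel(ctx, 15*time.Second, true, func(ctx context.Context) (bool, error) {
		for _, name := range names {
			np := &hyperv1.NodePool{}
			if err := c.Get(ctx, crclient.ObjectKey{Namespace: namespace, Name: name}, np); err != nil {
				return false, nil // tolerate transient errors; the deadline bounds us
			}
			if np.Status.Replicas != want {
				return false, nil // e.g. the 1 and 3 observed above, want both 2
			}
		}
		return true, nil
	})
}
```

Because the poll swallows transient errors, a deadline expiry is the only way this wait fails, which matches the two "context deadline exceeded" messages reporting the same event.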
TestCreateClusterCustomConfig
45m59.34s
hypershift_framework.go:430: Successfully created hostedcluster e2e-clusters-rdztc/custom-config-n9hzw in 50s
hypershift_framework.go:119: Summarizing unexpected conditions for HostedCluster custom-config-n9hzw
util.go:2896: Failed to wait for HostedCluster e2e-clusters-rdztc/custom-config-n9hzw to have valid conditions in 2s: context deadline exceeded
eventually.go:224: observed *v1beta1.HostedCluster e2e-clusters-rdztc/custom-config-n9hzw invalid at RV 105817 after 2s:
eventually.go:227: - incorrect condition: wanted ClusterVersionProgressing=False, got ClusterVersionProgressing=True: ClusterOperatorNotAvailable(Unable to apply 4.21.0-0.ci-2025-11-06-091638-test-ci-op-n3bzw5n9-latest: the cluster operator monitoring is not available)
eventually.go:227: - incorrect condition: wanted ClusterVersionSucceeding=True, got ClusterVersionSucceeding=False: ClusterOperatorNotAvailable(Cluster operator monitoring is not available)
eventually.go:227: - incorrect condition: wanted ClusterVersionAvailable=True, got ClusterVersionAvailable=False: FromClusterVersion
hypershift_framework.go:249: skipping postTeardown()
hypershift_framework.go:230: skipping teardown, already called
TestCreateClusterCustomConfig/ValidateHostedCluster
40m33.05s
util.go:281: Successfully waited for kubeconfig to be published for HostedCluster e2e-clusters-rdztc/custom-config-n9hzw in 2m6.025s
util.go:298: Successfully waited for kubeconfig secret to have data in 0s
eventually.go:104: Failed to get *v1.SelfSubjectReview: Post "https://api-custom-config-n9hzw.service.ci.hypershift.devcluster.openshift.com:443/apis/authentication.k8s.io/v1/selfsubjectreviews": dial tcp: lookup api-custom-config-n9hzw.service.ci.hypershift.devcluster.openshift.com on 172.30.0.10:53: no such host
eventually.go:104: Failed to get *v1.SelfSubjectReview: Post "https://api-custom-config-n9hzw.service.ci.hypershift.devcluster.openshift.com:443/apis/authentication.k8s.io/v1/selfsubjectreviews": dial tcp 3.216.100.37:443: i/o timeout
eventually.go:104: Failed to get *v1.SelfSubjectReview: Post "https://api-custom-config-n9hzw.service.ci.hypershift.devcluster.openshift.com:443/apis/authentication.k8s.io/v1/selfsubjectreviews": dial tcp 34.192.70.232:443: connect: connection refused
eventually.go:104: Failed to get *v1.SelfSubjectReview: Post "https://api-custom-config-n9hzw.service.ci.hypershift.devcluster.openshift.com:443/apis/authentication.k8s.io/v1/selfsubjectreviews": dial tcp 3.216.100.37:443: connect: connection refused
eventually.go:104: Failed to get *v1.SelfSubjectReview: Post "https://api-custom-config-n9hzw.service.ci.hypershift.devcluster.openshift.com:443/apis/authentication.k8s.io/v1/selfsubjectreviews": dial tcp 34.192.70.232:443: connect: connection refused
eventually.go:104: Failed to get *v1.SelfSubjectReview: Post "https://api-custom-config-n9hzw.service.ci.hypershift.devcluster.openshift.com:443/apis/authentication.k8s.io/v1/selfsubjectreviews": dial tcp 3.216.100.37:443: connect: connection refused
eventually.go:104: Failed to get *v1.SelfSubjectReview: Post "https://api-custom-config-n9hzw.service.ci.hypershift.devcluster.openshift.com:443/apis/authentication.k8s.io/v1/selfsubjectreviews": dial tcp 34.192.70.232:443: connect: connection refused
eventually.go:104: Failed to get *v1.SelfSubjectReview: Post "https://api-custom-config-n9hzw.service.ci.hypershift.devcluster.openshift.com:443/apis/authentication.k8s.io/v1/selfsubjectreviews": dial tcp 44.193.160.254:443: connect: connection refused
eventually.go:104: Failed to get *v1.SelfSubjectReview: Post "https://api-custom-config-n9hzw.service.ci.hypershift.devcluster.openshift.com:443/apis/authentication.k8s.io/v1/selfsubjectreviews": dial tcp 3.216.100.37:443: connect: connection refused
eventually.go:104: Failed to get *v1.SelfSubjectReview: Post "https://api-custom-config-n9hzw.service.ci.hypershift.devcluster.openshift.com:443/apis/authentication.k8s.io/v1/selfsubjectreviews": dial tcp 34.192.70.232:443: connect: connection refused
util.go:360: Successfully waited for a successful connection to the guest API server in 2m36.025s
util.go:542: Successfully waited for 2 nodes to become ready in 5m51s
util.go:575: Failed to wait for HostedCluster e2e-clusters-rdztc/custom-config-n9hzw to rollout in 30m0s: context deadline exceeded
eventually.go:224: observed *v1beta1.HostedCluster e2e-clusters-rdztc/custom-config-n9hzw invalid at RV 105817 after 30m0s: wanted most recent version history to have state Completed, has state Partial
util.go:575: *v1beta1.HostedCluster e2e-clusters-rdztc/custom-config-n9hzw conditions:
util.go:575: ValidAWSIdentityProvider=True: AsExpected(All is well)
util.go:575: ClusterVersionReleaseAccepted=True: PayloadLoaded(Payload loaded version="4.21.0-0.ci-2025-11-06-091638-test-ci-op-n3bzw5n9-latest" image="registry.build01.ci.openshift.org/ci-op-n3bzw5n9/release@sha256:38bd0f49b33d3b932bb6167c846f889ca25041dc4f3cad9c9835d15985968c05" architecture="amd64")
util.go:575: ClusterVersionUpgradeable=False: UpdateInProgress(An update is already in progress and the details are in the Progressing condition)
util.go:575: ClusterVersionAvailable=False: FromClusterVersion
util.go:575: ClusterVersionSucceeding=False: ClusterOperatorNotAvailable(Cluster operator monitoring is not available)
util.go:575: ClusterVersionProgressing=True: ClusterOperatorNotAvailable(Unable to apply 4.21.0-0.ci-2025-11-06-091638-test-ci-op-n3bzw5n9-latest: the cluster operator monitoring is not available)
util.go:575: Degraded=False: AsExpected(The hosted cluster is not degraded)
util.go:575: EtcdAvailable=True: QuorumAvailable
util.go:575: KubeAPIServerAvailable=True: AsExpected(Kube APIServer deployment is available)
util.go:575: InfrastructureReady=True: AsExpected(All is well)
util.go:575: ExternalDNSReachable=True: AsExpected(All is well)
util.go:575: ValidHostedControlPlaneConfiguration=True: AsExpected(Configuration passes validation)
util.go:575: ValidReleaseInfo=True: AsExpected(All is well)
util.go:575: ValidIDPConfiguration=True: IDPConfigurationValid(Identity provider configuration is valid)
util.go:575: HostedClusterRestoredFromBackup=Unknown: StatusUnknown(Condition not found in the HCP)
util.go:575: Available=True: AsExpected(The hosted control plane is available)
util.go:575: AWSEndpointAvailable=True: AWSSuccess(All is well)
util.go:575: AWSEndpointServiceAvailable=True: AWSSuccess(All is well)
util.go:575: ValidConfiguration=True: AsExpected(Configuration passes validation)
util.go:575: SupportedHostedCluster=True: AsExpected(HostedCluster is supported by operator configuration)
util.go:575: IgnitionEndpointAvailable=True: AsExpected(Ignition server deployment is available)
util.go:575: ReconciliationActive=True: AsExpected(Reconciliation active on resource)
util.go:575: ValidReleaseImage=True: AsExpected(Release image is valid)
util.go:575: Progressing=False: AsExpected(HostedCluster is at expected version)
util.go:575: PlatformCredentialsFound=True: AsExpected(Required platform credentials are found)
util.go:575: ValidOIDCConfiguration=True: AsExpected(OIDC configuration is valid)
util.go:575: ReconciliationSucceeded=True: ReconciliatonSucceeded(Reconciliation completed successfully)
util.go:575: ValidAWSKMSConfig=True: AsExpected(All is well)
util.go:575: AWSDefaultSecurityGroupCreated=True: AsExpected(All is well)
util.go:575: ClusterVersionRetrievedUpdates=False: NoChannel(The update channel has not been configured.)
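The DNS lookup failures and connection refusals at the start are the usual external-DNS propagation window; the real failure is later: the guest's monitoring cluster operator never goes Available, which pins ClusterVersionProgressing=True and leaves the version history at Partial instead of Completed. A hedged sketch of pulling the same ClusterVersion* conditions off a HostedCluster with apimachinery's condition helpers; the function name is illustrative, not the framework's:

```go
// Illustrative helper, not the e2e framework's own: print the ClusterVersion*
// conditions that gated the rollout above. Assumes HostedCluster status uses
// metav1.Condition, which meta.FindStatusCondition can search.
package sketch

import (
	"context"
	"fmt"

	hyperv1 "github.com/openshift/hypershift/api/hypershift/v1beta1"
	"k8s.io/apimachinery/pkg/api/meta"
	crclient "sigs.k8s.io/controller-runtime/pkg/client"
)

func dumpClusterVersionConditions(ctx context.Context, c crclient.Client, namespace, name string) error {
	hc := &hyperv1.HostedCluster{}
	if err := c.Get(ctx, crclient.ObjectKey{Namespace: namespace, Name: name}, hc); err != nil {
		return err
	}
	for _, t := range []string{"ClusterVersionAvailable", "ClusterVersionSucceeding", "ClusterVersionProgressing", "ClusterVersionReleaseAccepted"} {
		if cond := meta.FindStatusCondition(hc.Status.Conditions, t); cond != nil {
			// Same shape as the report above, e.g.
			// ClusterVersionSucceeding=False: ClusterOperatorNotAvailable(...)
			fmt.Printf("%s=%s: %s(%s)\n", cond.Type, cond.Status, cond.Reason, cond.Message)
		}
	}
	return nil
}
```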
TestNodePool
0s
TestNodePool/HostedCluster0
1h2m43.67s
hypershift_framework.go:430: Successfully created hostedcluster e2e-clusters-9rsm9/node-pool-v9h4h in 35s
hypershift_framework.go:119: Summarizing unexpected conditions for HostedCluster node-pool-v9h4h
util.go:2896: Successfully waited for HostedCluster e2e-clusters-9rsm9/node-pool-v9h4h to have valid conditions in 0s
hypershift_framework.go:249: skipping postTeardown()
hypershift_framework.go:230: skipping teardown, already called
TestNodePool/HostedCluster0/Main
20ms
util.go:281: Successfully waited for kubeconfig to be published for HostedCluster e2e-clusters-9rsm9/node-pool-v9h4h in 0s
util.go:298: Successfully waited for kubeconfig secret to have data in 0s
util.go:360: Successfully waited for a successful connection to the guest API server in 0s
TestNodePool/HostedCluster0/Main/TestNodePoolInPlaceUpgrade
21m54.12s
nodepool_upgrade_test.go:100: starting test NodePoolUpgradeTest
util.go:542: Successfully waited for 1 nodes to become ready for NodePool e2e-clusters-9rsm9/node-pool-v9h4h-test-inplaceupgrade in 11m54.1s
nodepool_test.go:358: Failed to wait for NodePool e2e-clusters-9rsm9/node-pool-v9h4h-test-inplaceupgrade to have correct status in 10m0s: context deadline exceeded
eventually.go:224: observed *v1beta1.NodePool e2e-clusters-9rsm9/node-pool-v9h4h-test-inplaceupgrade invalid at RV 71573 after 10m0s: incorrect condition: wanted ReachedIgnitionEndpoint=True, got ReachedIgnitionEndpoint=False: ignitionNotReached
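Here the node came ready, but the NodePool never reported that its machines reached the ignition server, so the status assertion timed out. A minimal sketch of that condition check, with the condition type string taken from the log; the helper itself is hypothetical, and it assumes NodePool carries its own condition struct (so we scan the slice rather than use the metav1 helpers):

```go
// Hypothetical check mirroring the failed assertion above: report whether a
// NodePool's machines reached the ignition endpoint.
package sketch

import (
	"context"
	"fmt"

	hyperv1 "github.com/openshift/hypershift/api/hypershift/v1beta1"
	corev1 "k8s.io/api/core/v1"
	crclient "sigs.k8s.io/controller-runtime/pkg/client"
)

func reachedIgnitionEndpoint(ctx context.Context, c crclient.Client, namespace, name string) (bool, error) {
	np := &hyperv1.NodePool{}
	if err := c.Get(ctx, crclient.ObjectKey{Namespace: namespace, Name: name}, np); err != nil {
		return false, err
	}
	for _, cond := range np.Status.Conditions {
		if cond.Type == "ReachedIgnitionEndpoint" { // type string as seen in the log
			fmt.Printf("%s=%s: %s(%s)\n", cond.Type, cond.Status, cond.Reason, cond.Message)
			return cond.Status == corev1.ConditionTrue, nil
		}
	}
	return false, nil // condition not reported yet
}
```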
TestNodePool/HostedCluster2
40m47.12s
hypershift_framework.go:430: Successfully created hostedcluster e2e-clusters-mp5bf/node-pool-486ds in 45s
hypershift_framework.go:119: Summarizing unexpected conditions for HostedCluster node-pool-486ds
util.go:2896: Successfully waited for HostedCluster e2e-clusters-mp5bf/node-pool-486ds to have valid conditions in 0s
hypershift_framework.go:249: skipping postTeardown()
hypershift_framework.go:230: skipping teardown, already called
TestNodePool/HostedCluster2/EnsureHostedCluster
10m2.64s
util.go:281: Successfully waited for kubeconfig to be published for HostedCluster e2e-clusters-mp5bf/node-pool-486ds in 0s
util.go:298: Successfully waited for kubeconfig secret to have data in 0s
util.go:360: Successfully waited for a successful connection to the guest API server in 25ms
eventually.go:104: Failed to get *v1beta1.HostedCluster: client rate limiter Wait returned an error: context deadline exceeded
util.go:2896: Failed to wait for HostedCluster e2e-clusters-mp5bf/node-pool-486ds to have valid conditions in 10m0s: context deadline exceeded
eventually.go:224: observed *v1beta1.HostedCluster e2e-clusters-mp5bf/node-pool-486ds invalid at RV 103637 after 10m0s:
eventually.go:227: - incorrect condition: wanted ClusterVersionAvailable=True, got ClusterVersionAvailable=False: FromClusterVersion
eventually.go:227: - incorrect condition: wanted ClusterVersionSucceeding=True, got ClusterVersionSucceeding=False: ClusterOperatorNotAvailable(Cluster operator monitoring is not available)
eventually.go:227: - incorrect condition: wanted ClusterVersionProgressing=False, got ClusterVersionProgressing=True: ClusterOperatorNotAvailable(Unable to apply 4.21.0-0.ci-2025-11-06-091638-test-ci-op-n3bzw5n9-latest: the cluster operator monitoring is not available)
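Both this failure and the autoscaling one surface "client rate limiter Wait returned an error: context deadline exceeded". That is client-go reporting that the context expired while a request was queued in the client-side throttle, not an API-server error. If dense polling loops are genuinely being throttled, one mitigation is raising QPS/Burst on the rest.Config; a sketch under assumptions, with illustrative values rather than anything the e2e suite configures:

```go
// Sketch: bump client-side throttling limits so tight polling loops do not
// queue behind client-go's defaults (QPS 5, Burst 10). Values illustrative.
package sketch

import (
	"k8s.io/client-go/rest"
	"sigs.k8s.io/controller-runtime/pkg/client/config"
)

func highThroughputConfig() (*rest.Config, error) {
	cfg, err := config.GetConfig() // kubeconfig/in-cluster discovery
	if err != nil {
		return nil, err
	}
	cfg.QPS = 50
	cfg.Burst = 100
	return cfg, nil
}
```

That said, in both failures here the deadline expired because the watched state (nodepool replicas, ClusterVersion conditions) never converged, so the rate-limiter message is a symptom of the expired context rather than the root cause.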