PR #6745 - 11-06 06:14

Job: hypershift
FAILURE

Test Summary

Total Tests: 203
Passed: 150
Failed: 26
Skipped: 27

Failed Tests

TestAutoscaling
45m45.72s
hypershift_framework.go:430: Successfully created hostedcluster e2e-clusters-wv5qn/autoscaling-ztl89 in 22s
hypershift_framework.go:119: Summarizing unexpected conditions for HostedCluster autoscaling-ztl89
util.go:2896: Failed to wait for HostedCluster e2e-clusters-wv5qn/autoscaling-ztl89 to have valid conditions in 2s: context deadline exceeded
eventually.go:224: observed *v1beta1.HostedCluster e2e-clusters-wv5qn/autoscaling-ztl89 invalid at RV 97601 after 2s:
eventually.go:227: - incorrect condition: wanted ClusterVersionAvailable=True, got ClusterVersionAvailable=False: FromClusterVersion
eventually.go:227: - incorrect condition: wanted ClusterVersionProgressing=False, got ClusterVersionProgressing=True: ClusterOperatorNotAvailable(Unable to apply 4.21.0-0.ci-2025-11-06-063745-test-ci-op-n3bzw5n9-latest: the cluster operator monitoring is not available)
eventually.go:227: - incorrect condition: wanted ClusterVersionSucceeding=True, got ClusterVersionSucceeding=False: ClusterOperatorNotAvailable(Cluster operator monitoring is not available)
hypershift_framework.go:249: skipping postTeardown()
hypershift_framework.go:230: skipping teardown, already called
TestAutoscaling/ValidateHostedCluster
39m50.05s
util.go:281: Successfully waited for kubeconfig to be published for HostedCluster e2e-clusters-wv5qn/autoscaling-ztl89 in 1m36s
util.go:298: Successfully waited for kubeconfig secret to have data in 0s
eventually.go:104: Failed to get *v1.SelfSubjectReview: Post "https://api-autoscaling-ztl89.service.ci.hypershift.devcluster.openshift.com:443/apis/authentication.k8s.io/v1/selfsubjectreviews": dial tcp: lookup api-autoscaling-ztl89.service.ci.hypershift.devcluster.openshift.com on 172.30.0.10:53: no such host
eventually.go:104: Failed to get *v1.SelfSubjectReview: Post "https://api-autoscaling-ztl89.service.ci.hypershift.devcluster.openshift.com:443/apis/authentication.k8s.io/v1/selfsubjectreviews": dial tcp 44.214.212.223:443: i/o timeout
util.go:360: Successfully waited for a successful connection to the guest API server in 1m23.05s
util.go:542: Successfully waited for 1 nodes to become ready in 6m51s
eventually.go:104: Failed to get *v1beta1.HostedCluster: client rate limiter Wait returned an error: context deadline exceeded
util.go:575: Failed to wait for HostedCluster e2e-clusters-wv5qn/autoscaling-ztl89 to rollout in 30m0s: context deadline exceeded
eventually.go:224: observed *v1beta1.HostedCluster e2e-clusters-wv5qn/autoscaling-ztl89 invalid at RV 97601 after 30m0s: wanted most recent version history to have state Completed, has state Partial
util.go:575: *v1beta1.HostedCluster e2e-clusters-wv5qn/autoscaling-ztl89 conditions:
util.go:575: ValidAWSIdentityProvider=True: AsExpected(All is well)
util.go:575: ClusterVersionReleaseAccepted=True: PayloadLoaded(Payload loaded version="4.21.0-0.ci-2025-11-06-063745-test-ci-op-n3bzw5n9-latest" image="registry.build01.ci.openshift.org/ci-op-n3bzw5n9/release@sha256:9fc947402cbf4c62e104bd21c9b9694284fc815610f1a902837934280832012d" architecture="amd64")
util.go:575: ClusterVersionUpgradeable=False: UpdateInProgress(An update is already in progress and the details are in the Progressing condition)
util.go:575: ClusterVersionAvailable=False: FromClusterVersion
util.go:575: ClusterVersionSucceeding=False: ClusterOperatorNotAvailable(Cluster operator monitoring is not available)
util.go:575: ClusterVersionProgressing=True: ClusterOperatorNotAvailable(Unable to apply 4.21.0-0.ci-2025-11-06-063745-test-ci-op-n3bzw5n9-latest: the cluster operator monitoring is not available)
util.go:575: Degraded=False: AsExpected(The hosted cluster is not degraded)
util.go:575: EtcdAvailable=True: QuorumAvailable
util.go:575: KubeAPIServerAvailable=True: AsExpected(Kube APIServer deployment is available)
util.go:575: InfrastructureReady=True: AsExpected(All is well)
util.go:575: ExternalDNSReachable=True: AsExpected(All is well)
util.go:575: ValidHostedControlPlaneConfiguration=True: AsExpected(Configuration passes validation)
util.go:575: ValidReleaseInfo=True: AsExpected(All is well)
util.go:575: ValidIDPConfiguration=True: IDPConfigurationValid(Identity provider configuration is valid)
util.go:575: HostedClusterRestoredFromBackup=Unknown: StatusUnknown(Condition not found in the HCP)
util.go:575: Available=True: AsExpected(The hosted control plane is available)
util.go:575: AWSEndpointAvailable=True: AWSSuccess(All is well)
util.go:575: AWSEndpointServiceAvailable=True: AWSSuccess(All is well)
util.go:575: ValidConfiguration=True: AsExpected(Configuration passes validation)
util.go:575: SupportedHostedCluster=True: AsExpected(HostedCluster is supported by operator configuration)
util.go:575: IgnitionEndpointAvailable=True: AsExpected(Ignition server deployment is available)
util.go:575: ReconciliationActive=True: AsExpected(Reconciliation active on resource)
util.go:575: ValidReleaseImage=True: AsExpected(Release image is valid)
util.go:575: Progressing=False: AsExpected(HostedCluster is at expected version)
util.go:575: PlatformCredentialsFound=True: AsExpected(Required platform credentials are found)
util.go:575: ValidOIDCConfiguration=True: AsExpected(OIDC configuration is valid)
util.go:575: ReconciliationSucceeded=True: ReconciliatonSucceeded(Reconciliation completed successfully)
util.go:575: ValidAWSKMSConfig=Unknown: StatusUnknown(AWS KMS is not configured)
util.go:575: AWSDefaultSecurityGroupCreated=True: AsExpected(All is well)
util.go:575: ClusterVersionRetrievedUpdates=False: NoChannel(The update channel has not been configured.)
TestCreateCluster
59m53.69s
create_cluster_test.go:2185: Sufficient zones available for InfrastructureAvailabilityPolicy HighlyAvailable
hypershift_framework.go:430: Successfully created hostedcluster e2e-clusters-mztgf/create-cluster-fc4tg in 1m8s
hypershift_framework.go:119: Summarizing unexpected conditions for HostedCluster create-cluster-fc4tg
util.go:2896: Failed to wait for HostedCluster e2e-clusters-mztgf/create-cluster-fc4tg to have valid conditions in 2s: context deadline exceeded
eventually.go:224: observed *v1beta1.HostedCluster e2e-clusters-mztgf/create-cluster-fc4tg invalid at RV 120889 after 2s:
eventually.go:227: - incorrect condition: wanted ClusterVersionSucceeding=True, got ClusterVersionSucceeding=False: ClusterOperatorNotAvailable(Cluster operator monitoring is not available)
eventually.go:227: - incorrect condition: wanted ClusterVersionAvailable=True, got ClusterVersionAvailable=False: FromClusterVersion
eventually.go:227: - incorrect condition: wanted ClusterVersionProgressing=False, got ClusterVersionProgressing=True: ClusterOperatorNotAvailable(Unable to apply 4.21.0-0.ci-2025-11-06-063745-test-ci-op-n3bzw5n9-latest: the cluster operator monitoring is not available)
hypershift_framework.go:249: skipping postTeardown()
hypershift_framework.go:230: skipping teardown, already called
TestCreateCluster/ValidateHostedCluster
53m10.04s
util.go:281: Successfully waited for kubeconfig to be published for HostedCluster e2e-clusters-mztgf/create-cluster-fc4tg in 1m33s
util.go:298: Successfully waited for kubeconfig secret to have data in 0s
eventually.go:104: Failed to get *v1.SelfSubjectReview: Post "https://api-create-cluster-fc4tg.service.ci.hypershift.devcluster.openshift.com:443/apis/authentication.k8s.io/v1/selfsubjectreviews": dial tcp: lookup api-create-cluster-fc4tg.service.ci.hypershift.devcluster.openshift.com on 172.30.0.10:53: no such host
eventually.go:104: Failed to get *v1.SelfSubjectReview: Post "https://api-create-cluster-fc4tg.service.ci.hypershift.devcluster.openshift.com:443/apis/authentication.k8s.io/v1/selfsubjectreviews": dial tcp 98.91.134.39:443: i/o timeout
eventually.go:104: Failed to get *v1.SelfSubjectReview: Post "https://api-create-cluster-fc4tg.service.ci.hypershift.devcluster.openshift.com:443/apis/authentication.k8s.io/v1/selfsubjectreviews": dial tcp 54.173.96.254:443: i/o timeout
eventually.go:104: Failed to get *v1.SelfSubjectReview: Post "https://api-create-cluster-fc4tg.service.ci.hypershift.devcluster.openshift.com:443/apis/authentication.k8s.io/v1/selfsubjectreviews": dial tcp 98.91.134.39:443: connect: connection refused
eventually.go:104: Failed to get *v1.SelfSubjectReview: Post "https://api-create-cluster-fc4tg.service.ci.hypershift.devcluster.openshift.com:443/apis/authentication.k8s.io/v1/selfsubjectreviews": dial tcp 13.223.232.125:443: connect: connection refused
util.go:360: Successfully waited for a successful connection to the guest API server in 2m10.025s
util.go:542: Successfully waited for 3 nodes to become ready in 19m27s
util.go:575: Failed to wait for HostedCluster e2e-clusters-mztgf/create-cluster-fc4tg to rollout in 30m0s: context deadline exceeded
eventually.go:224: observed *v1beta1.HostedCluster e2e-clusters-mztgf/create-cluster-fc4tg invalid at RV 120889 after 30m0s: wanted most recent version history to have state Completed, has state Partial
util.go:575: *v1beta1.HostedCluster e2e-clusters-mztgf/create-cluster-fc4tg conditions:
util.go:575: ValidAWSIdentityProvider=True: AsExpected(All is well)
util.go:575: ClusterVersionUpgradeable=False: UpdateInProgress(An update is already in progress and the details are in the Progressing condition)
util.go:575: ClusterVersionAvailable=False: FromClusterVersion
util.go:575: ClusterVersionSucceeding=False: ClusterOperatorNotAvailable(Cluster operator monitoring is not available)
util.go:575: ClusterVersionProgressing=True: ClusterOperatorNotAvailable(Unable to apply 4.21.0-0.ci-2025-11-06-063745-test-ci-op-n3bzw5n9-latest: the cluster operator monitoring is not available)
util.go:575: ClusterVersionReleaseAccepted=True: PayloadLoaded(Payload loaded version="4.21.0-0.ci-2025-11-06-063745-test-ci-op-n3bzw5n9-latest" image="registry.build01.ci.openshift.org/ci-op-n3bzw5n9/release@sha256:9fc947402cbf4c62e104bd21c9b9694284fc815610f1a902837934280832012d" architecture="amd64")
util.go:575: Degraded=False: AsExpected(The hosted cluster is not degraded)
util.go:575: EtcdAvailable=True: QuorumAvailable
util.go:575: KubeAPIServerAvailable=True: AsExpected(Kube APIServer deployment is available)
util.go:575: InfrastructureReady=True: AsExpected(All is well)
util.go:575: ExternalDNSReachable=True: AsExpected(All is well)
util.go:575: ValidHostedControlPlaneConfiguration=True: AsExpected(Configuration passes validation)
util.go:575: ValidReleaseInfo=True: AsExpected(All is well)
util.go:575: ValidIDPConfiguration=True: IDPConfigurationValid(Identity provider configuration is valid)
util.go:575: HostedClusterRestoredFromBackup=Unknown: StatusUnknown(Condition not found in the HCP)
util.go:575: Available=True: AsExpected(The hosted control plane is available)
util.go:575: AWSEndpointAvailable=True: AWSSuccess(All is well)
util.go:575: AWSEndpointServiceAvailable=True: AWSSuccess(All is well)
util.go:575: ValidConfiguration=True: AsExpected(Configuration passes validation)
util.go:575: SupportedHostedCluster=True: AsExpected(HostedCluster is supported by operator configuration)
util.go:575: IgnitionEndpointAvailable=True: AsExpected(Ignition server deployment is available)
util.go:575: ReconciliationActive=True: AsExpected(Reconciliation active on resource)
util.go:575: ValidReleaseImage=True: AsExpected(Release image is valid)
util.go:575: Progressing=False: AsExpected(HostedCluster is at expected version)
util.go:575: PlatformCredentialsFound=True: AsExpected(Required platform credentials are found)
util.go:575: ValidOIDCConfiguration=True: AsExpected(OIDC configuration is valid)
util.go:575: ReconciliationSucceeded=True: ReconciliatonSucceeded(Reconciliation completed successfully)
util.go:575: ValidAWSKMSConfig=Unknown: StatusUnknown(AWS KMS is not configured)
util.go:575: AWSDefaultSecurityGroupCreated=True: AsExpected(All is well)
util.go:575: ClusterVersionRetrievedUpdates=False: NoChannel(The update channel has not been configured.)
TestCreateClusterCustomConfig
54m44.78s
hypershift_framework.go:430: Successfully created hostedcluster e2e-clusters-gkthc/custom-config-w9c2j in 44s
hypershift_framework.go:119: Summarizing unexpected conditions for HostedCluster custom-config-w9c2j
util.go:2896: Failed to wait for HostedCluster e2e-clusters-gkthc/custom-config-w9c2j to have valid conditions in 2s: context deadline exceeded
eventually.go:224: observed *v1beta1.HostedCluster e2e-clusters-gkthc/custom-config-w9c2j invalid at RV 92607 after 2s:
eventually.go:227: - incorrect condition: wanted ClusterVersionAvailable=True, got ClusterVersionAvailable=False: FromClusterVersion
eventually.go:227: - incorrect condition: wanted ClusterVersionProgressing=False, got ClusterVersionProgressing=True: ClusterOperatorNotAvailable(Unable to apply 4.21.0-0.ci-2025-11-06-063745-test-ci-op-n3bzw5n9-latest: the cluster operator monitoring is not available)
eventually.go:227: - incorrect condition: wanted ClusterVersionSucceeding=True, got ClusterVersionSucceeding=False: ClusterOperatorNotAvailable(Cluster operator monitoring is not available)
hypershift_framework.go:249: skipping postTeardown()
hypershift_framework.go:230: skipping teardown, already called
TestCreateClusterCustomConfig/ValidateHostedCluster
49m3.51s
util.go:281: Successfully waited for kubeconfig to be published for HostedCluster e2e-clusters-gkthc/custom-config-w9c2j in 1m6s
util.go:298: Successfully waited for kubeconfig secret to have data in 0s
eventually.go:104: Failed to get *v1.SelfSubjectReview: Post "https://api-custom-config-w9c2j.service.ci.hypershift.devcluster.openshift.com:443/apis/authentication.k8s.io/v1/selfsubjectreviews": dial tcp: lookup api-custom-config-w9c2j.service.ci.hypershift.devcluster.openshift.com on 172.30.0.10:53: no such host
eventually.go:104: Failed to get *v1.SelfSubjectReview: Post "https://api-custom-config-w9c2j.service.ci.hypershift.devcluster.openshift.com:443/apis/authentication.k8s.io/v1/selfsubjectreviews": dial tcp 98.94.120.62:443: i/o timeout
eventually.go:104: Failed to get *v1.SelfSubjectReview: Post "https://api-custom-config-w9c2j.service.ci.hypershift.devcluster.openshift.com:443/apis/authentication.k8s.io/v1/selfsubjectreviews": dial tcp 52.207.187.235:443: i/o timeout
util.go:360: Successfully waited for a successful connection to the guest API server in 2m15.5s
util.go:542: Successfully waited for 2 nodes to become ready in 15m42s
util.go:575: Failed to wait for HostedCluster e2e-clusters-gkthc/custom-config-w9c2j to rollout in 30m0s: context deadline exceeded
eventually.go:224: observed *v1beta1.HostedCluster e2e-clusters-gkthc/custom-config-w9c2j invalid at RV 92607 after 30m0s: wanted most recent version history to have state Completed, has state Partial
util.go:575: *v1beta1.HostedCluster e2e-clusters-gkthc/custom-config-w9c2j conditions:
util.go:575: ValidAWSIdentityProvider=True: AsExpected(All is well)
util.go:575: ClusterVersionReleaseAccepted=True: PayloadLoaded(Payload loaded version="4.21.0-0.ci-2025-11-06-063745-test-ci-op-n3bzw5n9-latest" image="registry.build01.ci.openshift.org/ci-op-n3bzw5n9/release@sha256:9fc947402cbf4c62e104bd21c9b9694284fc815610f1a902837934280832012d" architecture="amd64")
util.go:575: ClusterVersionUpgradeable=False: UpdateInProgress(An update is already in progress and the details are in the Progressing condition)
util.go:575: ClusterVersionAvailable=False: FromClusterVersion
util.go:575: ClusterVersionSucceeding=False: ClusterOperatorNotAvailable(Cluster operator monitoring is not available)
util.go:575: ClusterVersionProgressing=True: ClusterOperatorNotAvailable(Unable to apply 4.21.0-0.ci-2025-11-06-063745-test-ci-op-n3bzw5n9-latest: the cluster operator monitoring is not available)
util.go:575: Degraded=False: AsExpected(The hosted cluster is not degraded)
util.go:575: EtcdAvailable=True: QuorumAvailable
util.go:575: KubeAPIServerAvailable=True: AsExpected(Kube APIServer deployment is available)
util.go:575: InfrastructureReady=True: AsExpected(All is well)
util.go:575: ExternalDNSReachable=True: AsExpected(All is well)
util.go:575: ValidHostedControlPlaneConfiguration=True: AsExpected(Configuration passes validation)
util.go:575: ValidReleaseInfo=True: AsExpected(All is well)
util.go:575: ValidIDPConfiguration=True: IDPConfigurationValid(Identity provider configuration is valid)
util.go:575: HostedClusterRestoredFromBackup=Unknown: StatusUnknown(Condition not found in the HCP)
util.go:575: Available=True: AsExpected(The hosted control plane is available)
util.go:575: AWSEndpointAvailable=True: AWSSuccess(All is well)
util.go:575: AWSEndpointServiceAvailable=True: AWSSuccess(All is well)
util.go:575: ValidConfiguration=True: AsExpected(Configuration passes validation)
util.go:575: SupportedHostedCluster=True: AsExpected(HostedCluster is supported by operator configuration)
util.go:575: IgnitionEndpointAvailable=True: AsExpected(Ignition server deployment is available)
util.go:575: ReconciliationActive=True: AsExpected(Reconciliation active on resource)
util.go:575: ValidReleaseImage=True: AsExpected(Release image is valid)
util.go:575: Progressing=False: AsExpected(HostedCluster is at expected version)
util.go:575: PlatformCredentialsFound=True: AsExpected(Required platform credentials are found)
util.go:575: ValidOIDCConfiguration=True: AsExpected(OIDC configuration is valid)
util.go:575: ReconciliationSucceeded=True: ReconciliatonSucceeded(Reconciliation completed successfully)
util.go:575: ValidAWSKMSConfig=True: AsExpected(All is well)
util.go:575: ClusterVersionRetrievedUpdates=False: NoChannel(The update channel has not been configured.)
util.go:575: AWSDefaultSecurityGroupCreated=True: AsExpected(All is well)
TestCreateClusterPrivate
27m51.21s
hypershift_framework.go:430: Successfully created hostedcluster e2e-clusters-bzltd/private-kbg6d in 36s
hypershift_framework.go:119: Summarizing unexpected conditions for HostedCluster private-kbg6d
util.go:2896: Successfully waited for HostedCluster e2e-clusters-bzltd/private-kbg6d to have valid conditions in 0s
hypershift_framework.go:249: skipping postTeardown()
hypershift_framework.go:230: skipping teardown, already called
TestCreateClusterPrivate/ValidateHostedCluster
21m0.15s
util.go:281: Successfully waited for kubeconfig to be published for HostedCluster e2e-clusters-bzltd/private-kbg6d in 1m21s
util.go:298: Successfully waited for kubeconfig secret to have data in 0s
util.go:672: Successfully waited for NodePools for HostedCluster e2e-clusters-bzltd/private-kbg6d to have all of their desired nodes in 12m3s
util.go:575: Successfully waited for HostedCluster e2e-clusters-bzltd/private-kbg6d to rollout in 7m36s
util.go:2896: Successfully waited for HostedCluster e2e-clusters-bzltd/private-kbg6d to have valid conditions in 0s
TestCreateClusterPrivate/ValidateHostedCluster/EnsureNoCrashingPods
40ms
util.go:281: Successfully waited for kubeconfig to be published for HostedCluster e2e-clusters-bzltd/private-kbg6d in 0s
util.go:298: Successfully waited for kubeconfig secret to have data in 0s
util.go:759: Container manager in pod capi-provider-6b8dbbd4db-d96m2 has a restartCount > 0 (1)
TestCreateClusterProxy
47m52.57s
hypershift_framework.go:430: Successfully created hostedcluster e2e-clusters-g2x5v/proxy-qqbvd in 29s
hypershift_framework.go:119: Summarizing unexpected conditions for HostedCluster proxy-qqbvd
util.go:2896: Failed to wait for HostedCluster e2e-clusters-g2x5v/proxy-qqbvd to have valid conditions in 2s: context deadline exceeded
eventually.go:224: observed *v1beta1.HostedCluster e2e-clusters-g2x5v/proxy-qqbvd invalid at RV 92687 after 2s:
eventually.go:227: - incorrect condition: wanted ClusterVersionAvailable=True, got ClusterVersionAvailable=False: FromClusterVersion
eventually.go:227: - incorrect condition: wanted ClusterVersionProgressing=False, got ClusterVersionProgressing=True: ClusterOperatorNotAvailable(Unable to apply 4.21.0-0.ci-2025-11-06-063745-test-ci-op-n3bzw5n9-latest: the cluster operator monitoring is not available)
eventually.go:227: - incorrect condition: wanted ClusterVersionSucceeding=True, got ClusterVersionSucceeding=False: ClusterOperatorNotAvailable(Cluster operator monitoring is not available)
hypershift_framework.go:249: skipping postTeardown()
hypershift_framework.go:230: skipping teardown, already called
TestCreateClusterProxy/ValidateHostedCluster
41m9.05s
util.go:281: Successfully waited for kubeconfig to be published for HostedCluster e2e-clusters-g2x5v/proxy-qqbvd in 1m21s
util.go:298: Successfully waited for kubeconfig secret to have data in 0s
eventually.go:104: Failed to get *v1.SelfSubjectReview: Post "https://api-proxy-qqbvd.service.ci.hypershift.devcluster.openshift.com:443/apis/authentication.k8s.io/v1/selfsubjectreviews": dial tcp: lookup api-proxy-qqbvd.service.ci.hypershift.devcluster.openshift.com on 172.30.0.10:53: no such host
eventually.go:104: Failed to get *v1.SelfSubjectReview: Post "https://api-proxy-qqbvd.service.ci.hypershift.devcluster.openshift.com:443/apis/authentication.k8s.io/v1/selfsubjectreviews": dial tcp 100.25.146.142:443: i/o timeout
eventually.go:104: Failed to get *v1.SelfSubjectReview: Post "https://api-proxy-qqbvd.service.ci.hypershift.devcluster.openshift.com:443/apis/authentication.k8s.io/v1/selfsubjectreviews": dial tcp 34.201.164.159:443: i/o timeout
util.go:360: Successfully waited for a successful connection to the guest API server in 1m42.025s
util.go:542: Successfully waited for 2 nodes to become ready in 8m6s
eventually.go:104: Failed to get *v1beta1.HostedCluster: client rate limiter Wait returned an error: context deadline exceeded
util.go:575: Failed to wait for HostedCluster e2e-clusters-g2x5v/proxy-qqbvd to rollout in 30m0s: context deadline exceeded
eventually.go:224: observed *v1beta1.HostedCluster e2e-clusters-g2x5v/proxy-qqbvd invalid at RV 92687 after 30m0s: wanted most recent version history to have state Completed, has state Partial
util.go:575: *v1beta1.HostedCluster e2e-clusters-g2x5v/proxy-qqbvd conditions:
util.go:575: ValidAWSIdentityProvider=True: AsExpected(All is well)
util.go:575: ClusterVersionSucceeding=False: ClusterOperatorNotAvailable(Cluster operator monitoring is not available)
util.go:575: ClusterVersionProgressing=True: ClusterOperatorNotAvailable(Unable to apply 4.21.0-0.ci-2025-11-06-063745-test-ci-op-n3bzw5n9-latest: the cluster operator monitoring is not available)
util.go:575: ClusterVersionReleaseAccepted=True: PayloadLoaded(Payload loaded version="4.21.0-0.ci-2025-11-06-063745-test-ci-op-n3bzw5n9-latest" image="registry.build01.ci.openshift.org/ci-op-n3bzw5n9/release@sha256:9fc947402cbf4c62e104bd21c9b9694284fc815610f1a902837934280832012d" architecture="amd64")
util.go:575: ClusterVersionUpgradeable=False: UpdateInProgress(An update is already in progress and the details are in the Progressing condition)
util.go:575: ClusterVersionAvailable=False: FromClusterVersion
util.go:575: Degraded=False: AsExpected(The hosted cluster is not degraded)
util.go:575: EtcdAvailable=True: QuorumAvailable
util.go:575: KubeAPIServerAvailable=True: AsExpected(Kube APIServer deployment is available)
util.go:575: InfrastructureReady=True: AsExpected(All is well)
util.go:575: ExternalDNSReachable=True: AsExpected(All is well)
util.go:575: ValidHostedControlPlaneConfiguration=True: AsExpected(Configuration passes validation)
util.go:575: ValidReleaseInfo=True: AsExpected(All is well)
util.go:575: ValidIDPConfiguration=True: IDPConfigurationValid(Identity provider configuration is valid)
util.go:575: HostedClusterRestoredFromBackup=Unknown: StatusUnknown(Condition not found in the HCP)
util.go:575: Available=True: AsExpected(The hosted control plane is available)
util.go:575: AWSEndpointAvailable=True: AWSSuccess(All is well)
util.go:575: AWSEndpointServiceAvailable=True: AWSSuccess(All is well)
util.go:575: ValidConfiguration=True: AsExpected(Configuration passes validation)
util.go:575: SupportedHostedCluster=True: AsExpected(HostedCluster is supported by operator configuration)
util.go:575: IgnitionEndpointAvailable=True: AsExpected(Ignition server deployment is available)
util.go:575: ReconciliationActive=True: AsExpected(Reconciliation active on resource)
util.go:575: ValidReleaseImage=True: AsExpected(Release image is valid)
util.go:575: Progressing=False: AsExpected(HostedCluster is at expected version)
util.go:575: PlatformCredentialsFound=True: AsExpected(Required platform credentials are found)
util.go:575: ValidOIDCConfiguration=True: AsExpected(OIDC configuration is valid)
util.go:575: ReconciliationSucceeded=True: ReconciliatonSucceeded(Reconciliation completed successfully)
util.go:575: ValidAWSKMSConfig=Unknown: StatusUnknown(AWS KMS is not configured)
util.go:575: AWSDefaultSecurityGroupCreated=True: AsExpected(All is well)
util.go:575: ClusterVersionRetrievedUpdates=False: NoChannel(The update channel has not been configured.)
TestCreateClusterRequestServingIsolation
27m0.84s
requestserving.go:105: Created request serving nodepool clusters/b4377f9a8e2416b4928e-mgmt-reqserving-8zx2l
requestserving.go:105: Created request serving nodepool clusters/b4377f9a8e2416b4928e-mgmt-reqserving-q7fl9
requestserving.go:113: Created non request serving nodepool clusters/b4377f9a8e2416b4928e-mgmt-non-reqserving-m2jb5
requestserving.go:113: Created non request serving nodepool clusters/b4377f9a8e2416b4928e-mgmt-non-reqserving-zxrjq
requestserving.go:113: Created non request serving nodepool clusters/b4377f9a8e2416b4928e-mgmt-non-reqserving-4gtnh
util.go:542: Successfully waited for 1 nodes to become ready for NodePool clusters/b4377f9a8e2416b4928e-mgmt-reqserving-8zx2l in 3m54.1s
util.go:542: Successfully waited for 1 nodes to become ready for NodePool clusters/b4377f9a8e2416b4928e-mgmt-reqserving-q7fl9 in 0s
util.go:542: Successfully waited for 1 nodes to become ready for NodePool clusters/b4377f9a8e2416b4928e-mgmt-non-reqserving-m2jb5 in 45s
util.go:542: Successfully waited for 1 nodes to become ready for NodePool clusters/b4377f9a8e2416b4928e-mgmt-non-reqserving-zxrjq in 100ms
util.go:542: Successfully waited for 1 nodes to become ready for NodePool clusters/b4377f9a8e2416b4928e-mgmt-non-reqserving-4gtnh in 3s
create_cluster_test.go:2328: Sufficient zones available for InfrastructureAvailabilityPolicy HighlyAvailable
hypershift_framework.go:430: Successfully created hostedcluster e2e-clusters-8rcb7/request-serving-isolation-kg6c9 in 24s
hypershift_framework.go:119: Summarizing unexpected conditions for HostedCluster request-serving-isolation-kg6c9
util.go:2896: Successfully waited for HostedCluster e2e-clusters-8rcb7/request-serving-isolation-kg6c9 to have valid conditions in 0s
hypershift_framework.go:249: skipping postTeardown()
requestserving.go:132: Tearing down custom nodepool clusters/b4377f9a8e2416b4928e-mgmt-reqserving-8zx2l
requestserving.go:132: Tearing down custom nodepool clusters/b4377f9a8e2416b4928e-mgmt-reqserving-q7fl9
requestserving.go:132: Tearing down custom nodepool clusters/b4377f9a8e2416b4928e-mgmt-non-reqserving-m2jb5
requestserving.go:132: Tearing down custom nodepool clusters/b4377f9a8e2416b4928e-mgmt-non-reqserving-zxrjq
requestserving.go:132: Tearing down custom nodepool clusters/b4377f9a8e2416b4928e-mgmt-non-reqserving-4gtnh
hypershift_framework.go:230: skipping teardown, already called
TestCreateClusterRequestServingIsolation/ValidateHostedCluster
14m48.4s
util.go:281: Successfully waited for kubeconfig to be published for HostedCluster e2e-clusters-8rcb7/request-serving-isolation-kg6c9 in 1m36s
util.go:298: Successfully waited for kubeconfig secret to have data in 0s
eventually.go:104: Failed to get *v1.SelfSubjectReview: Post "https://api-request-serving-isolation-kg6c9.service.ci.hypershift.devcluster.openshift.com:443/apis/authentication.k8s.io/v1/selfsubjectreviews": dial tcp: lookup api-request-serving-isolation-kg6c9.service.ci.hypershift.devcluster.openshift.com on 172.30.0.10:53: no such host
eventually.go:104: Failed to get *v1.SelfSubjectReview: Post "https://api-request-serving-isolation-kg6c9.service.ci.hypershift.devcluster.openshift.com:443/apis/authentication.k8s.io/v1/selfsubjectreviews": dial tcp 44.198.208.199:443: i/o timeout
eventually.go:104: Failed to get *v1.SelfSubjectReview: Post "https://api-request-serving-isolation-kg6c9.service.ci.hypershift.devcluster.openshift.com:443/apis/authentication.k8s.io/v1/selfsubjectreviews": dial tcp 98.94.139.196:443: i/o timeout
util.go:360: Successfully waited for a successful connection to the guest API server in 1m25.05s
util.go:542: Successfully waited for 3 nodes to become ready in 7m39s
util.go:575: Successfully waited for HostedCluster e2e-clusters-8rcb7/request-serving-isolation-kg6c9 to rollout in 4m3s
util.go:2896: Successfully waited for HostedCluster e2e-clusters-8rcb7/request-serving-isolation-kg6c9 to have valid conditions in 0s
TestCreateClusterRequestServingIsolation/ValidateHostedCluster/EnsureNoCrashingPods
30ms
util.go:281: Successfully waited for kubeconfig to be published for HostedCluster e2e-clusters-8rcb7/request-serving-isolation-kg6c9 in 0s
util.go:298: Successfully waited for kubeconfig secret to have data in 0s
util.go:759: Container manager in pod capi-provider-5d7ddd6c85-9pdfw has a restartCount > 0 (1)
TestNodePool
0s
TestNodePool/HostedCluster0
1h5m43.2s
hypershift_framework.go:430: Successfully created hostedcluster e2e-clusters-bmtkm/node-pool-kj75l in 38s
hypershift_framework.go:119: Summarizing unexpected conditions for HostedCluster node-pool-kj75l
util.go:2896: Successfully waited for HostedCluster e2e-clusters-bmtkm/node-pool-kj75l to have valid conditions in 0s
hypershift_framework.go:249: skipping postTeardown()
hypershift_framework.go:230: skipping teardown, already called
TestNodePool/HostedCluster0/Main
10ms
util.go:281: Successfully waited for kubeconfig to be published for HostedCluster e2e-clusters-bmtkm/node-pool-kj75l in 0s
util.go:298: Successfully waited for kubeconfig secret to have data in 0s
util.go:360: Successfully waited for a successful connection to the guest API server in 0s
TestNodePool/HostedCluster0/Main/TestNTOMachineConfigAppliedInPlace
30m52.12s
nodepool_nto_machineconfig_test.go:68: Starting test NTOMachineConfigRolloutTest
util.go:542: Successfully waited for 2 nodes to become ready for NodePool e2e-clusters-bmtkm/node-pool-kj75l-test-ntomachineconfig-inplace in 26m12s
nodepool_test.go:358: Successfully waited for NodePool e2e-clusters-bmtkm/node-pool-kj75l-test-ntomachineconfig-inplace to have correct status in 0s
util.go:456: Successfully waited for NodePool e2e-clusters-bmtkm/node-pool-kj75l-test-ntomachineconfig-inplace to start config update in 15s
util.go:472: Successfully waited for NodePool e2e-clusters-bmtkm/node-pool-kj75l-test-ntomachineconfig-inplace to finish config update in 4m20s
nodepool_machineconfig_test.go:166: Successfully waited for all pods in the DaemonSet kube-system/node-pool-kj75l-test-ntomachineconfig-inplace to be ready in 5s
util.go:542: Successfully waited for 2 nodes to become ready for NodePool e2e-clusters-bmtkm/node-pool-kj75l-test-ntomachineconfig-inplace in 0s
nodepool_test.go:358: Successfully waited for NodePool e2e-clusters-bmtkm/node-pool-kj75l-test-ntomachineconfig-inplace to have correct status in 0s
TestNodePool/HostedCluster0/Main/TestNTOMachineConfigAppliedInPlace/EnsureNoCrashingPods
30ms
util.go:281: Successfully waited for kubeconfig to be published for HostedCluster e2e-clusters-bmtkm/node-pool-kj75l in 0s
util.go:298: Successfully waited for kubeconfig secret to have data in 0s
util.go:759: Container hosted-cluster-config-operator in pod hosted-cluster-config-operator-84855cdc6f-n89xb has a restartCount > 0 (1)
TestNodePool/HostedCluster0/Main/TestNTOMachineConfigGetsRolledOut
35m7.38s
nodepool_nto_machineconfig_test.go:68: Starting test NTOMachineConfigRolloutTest
util.go:542: Successfully waited for 2 nodes to become ready for NodePool e2e-clusters-bmtkm/node-pool-kj75l-test-ntomachineconfig-replace in 26m12s
nodepool_test.go:358: Successfully waited for NodePool e2e-clusters-bmtkm/node-pool-kj75l-test-ntomachineconfig-replace to have correct status in 0s
util.go:456: Successfully waited for NodePool e2e-clusters-bmtkm/node-pool-kj75l-test-ntomachineconfig-replace to start config update in 15s
util.go:472: Successfully waited for NodePool e2e-clusters-bmtkm/node-pool-kj75l-test-ntomachineconfig-replace to finish config update in 8m40s
nodepool_machineconfig_test.go:166: Successfully waited for all pods in the DaemonSet kube-system/node-pool-kj75l-test-ntomachineconfig-replace to be ready in 0s
util.go:542: Successfully waited for 2 nodes to become ready for NodePool e2e-clusters-bmtkm/node-pool-kj75l-test-ntomachineconfig-replace in 0s
nodepool_test.go:358: Successfully waited for NodePool e2e-clusters-bmtkm/node-pool-kj75l-test-ntomachineconfig-replace to have correct status in 0s
TestNodePool/HostedCluster0/Main/TestNTOMachineConfigGetsRolledOut/EnsureNoCrashingPods
20ms
util.go:281: Successfully waited for kubeconfig to be published for HostedCluster e2e-clusters-bmtkm/node-pool-kj75l in 0s
util.go:298: Successfully waited for kubeconfig secret to have data in 0s
util.go:759: Container hosted-cluster-config-operator in pod hosted-cluster-config-operator-84855cdc6f-n89xb has a restartCount > 0 (1)
TestNodePool/HostedCluster0/Main/TestNodepoolMachineconfigGetsRolledout
12m15.13s
nodepool_machineconfig_test.go:55: Starting test NodePoolMachineconfigRolloutTest
util.go:542: Successfully waited for 1 nodes to become ready for NodePool e2e-clusters-bmtkm/node-pool-kj75l-test-machineconfig in 5m0s
nodepool_test.go:358: Successfully waited for NodePool e2e-clusters-bmtkm/node-pool-kj75l-test-machineconfig to have correct status in 0s
util.go:456: Successfully waited for NodePool e2e-clusters-bmtkm/node-pool-kj75l-test-machineconfig to start config update in 15s
util.go:472: Successfully waited for NodePool e2e-clusters-bmtkm/node-pool-kj75l-test-machineconfig to finish config update in 7m0s
nodepool_machineconfig_test.go:166: Successfully waited for all pods in the DaemonSet kube-system/machineconfig-update-checker-replace to be ready in 0s
util.go:542: Successfully waited for 1 nodes to become ready for NodePool e2e-clusters-bmtkm/node-pool-kj75l-test-machineconfig in 0s
nodepool_test.go:358: Successfully waited for NodePool e2e-clusters-bmtkm/node-pool-kj75l-test-machineconfig to have correct status in 0s
TestNodePool/HostedCluster0/Main/TestNodepoolMachineconfigGetsRolledout/EnsureNoCrashingPods
20ms
util.go:281: Successfully waited for kubeconfig to be published for HostedCluster e2e-clusters-bmtkm/node-pool-kj75l in 0s
util.go:298: Successfully waited for kubeconfig secret to have data in 0s
util.go:759: Container hosted-cluster-config-operator in pod hosted-cluster-config-operator-84855cdc6f-n89xb has a restartCount > 0 (1)
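The three `EnsureNoCrashingPods` failures above all trip on the same condition: a container whose `restartCount` is greater than zero. A minimal self-contained sketch of that check is below; the struct names and message format are illustrative assumptions (the real test reads `corev1.ContainerStatus` from the live pod list via client-go):

```go
package main

import "fmt"

// containerStatus mirrors only the fields the check inspects
// (hypothetical stand-in for corev1.ContainerStatus).
type containerStatus struct {
	Name         string
	RestartCount int32
}

type pod struct {
	Name       string
	Containers []containerStatus
}

// crashingContainers reports every container that has restarted at least
// once, which is the condition EnsureNoCrashingPods treats as a failure.
func crashingContainers(pods []pod) []string {
	var out []string
	for _, p := range pods {
		for _, c := range p.Containers {
			if c.RestartCount > 0 {
				out = append(out, fmt.Sprintf(
					"Container %s in pod %s has a restartCount > 0 (%d)",
					c.Name, p.Name, c.RestartCount))
			}
		}
	}
	return out
}

func main() {
	pods := []pod{
		{Name: "hosted-cluster-config-operator-84855cdc6f-n89xb",
			Containers: []containerStatus{{Name: "hosted-cluster-config-operator", RestartCount: 1}}},
		{Name: "healthy-pod",
			Containers: []containerStatus{{Name: "manager", RestartCount: 0}}},
	}
	for _, msg := range crashingContainers(pods) {
		fmt.Println(msg)
	}
}
```

Note the check is binary on restart count: a single restart fails the subtest even when the container is currently running, which is why these 20–30ms subtests fail immediately after long-running parents pass.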
TestUpgradeControlPlane
37m18.76s
control_plane_upgrade_test.go:26: Starting control plane upgrade test. FromImage: registry.build01.ci.openshift.org/ci-op-n3bzw5n9/release@sha256:806106e231f7ef70bf59e02d318b05f90c79ea53c2d3425560845b64e61b03a0, toImage: registry.build01.ci.openshift.org/ci-op-n3bzw5n9/release@sha256:9fc947402cbf4c62e104bd21c9b9694284fc815610f1a902837934280832012d
hypershift_framework.go:430: Successfully created hostedcluster e2e-clusters-pqxlm/control-plane-upgrade-jp5hv in 39s
hypershift_framework.go:119: Summarizing unexpected conditions for HostedCluster control-plane-upgrade-jp5hv
util.go:2896: Successfully waited for HostedCluster e2e-clusters-pqxlm/control-plane-upgrade-jp5hv to have valid conditions in 0s
hypershift_framework.go:249: skipping postTeardown()
hypershift_framework.go:230: skipping teardown, already called
TestUpgradeControlPlane/ValidateHostedCluster
29m56.31s
util.go:281: Successfully waited for kubeconfig to be published for HostedCluster e2e-clusters-pqxlm/control-plane-upgrade-jp5hv in 2m21s
util.go:298: Successfully waited for kubeconfig secret to have data in 0s
eventually.go:104: Failed to get *v1.SelfSubjectReview: Post "https://api-control-plane-upgrade-jp5hv.service.ci.hypershift.devcluster.openshift.com:443/apis/authentication.k8s.io/v1/selfsubjectreviews": dial tcp: lookup api-control-plane-upgrade-jp5hv.service.ci.hypershift.devcluster.openshift.com on 172.30.0.10:53: no such host
eventually.go:104: Failed to get *v1.SelfSubjectReview: Post "https://api-control-plane-upgrade-jp5hv.service.ci.hypershift.devcluster.openshift.com:443/apis/authentication.k8s.io/v1/selfsubjectreviews": dial tcp 54.205.92.145:443: i/o timeout
eventually.go:104: Failed to get *v1.SelfSubjectReview: Post "https://api-control-plane-upgrade-jp5hv.service.ci.hypershift.devcluster.openshift.com:443/apis/authentication.k8s.io/v1/selfsubjectreviews": dial tcp 98.94.241.201:443: i/o timeout
eventually.go:104: Failed to get *v1.SelfSubjectReview: Post "https://api-control-plane-upgrade-jp5hv.service.ci.hypershift.devcluster.openshift.com:443/apis/authentication.k8s.io/v1/selfsubjectreviews": dial tcp 54.205.92.145:443: connect: connection refused
eventually.go:104: Failed to get *v1.SelfSubjectReview: Post "https://api-control-plane-upgrade-jp5hv.service.ci.hypershift.devcluster.openshift.com:443/apis/authentication.k8s.io/v1/selfsubjectreviews": dial tcp 98.94.241.201:443: connect: connection refused
eventually.go:104: Failed to get *v1.SelfSubjectReview: Post "https://api-control-plane-upgrade-jp5hv.service.ci.hypershift.devcluster.openshift.com:443/apis/authentication.k8s.io/v1/selfsubjectreviews": dial tcp 54.205.92.145:443: connect: connection refused
eventually.go:104: Failed to get *v1.SelfSubjectReview: Post "https://api-control-plane-upgrade-jp5hv.service.ci.hypershift.devcluster.openshift.com:443/apis/authentication.k8s.io/v1/selfsubjectreviews": dial tcp 98.94.241.201:443: connect: connection refused
eventually.go:104: Failed to get *v1.SelfSubjectReview: Post "https://api-control-plane-upgrade-jp5hv.service.ci.hypershift.devcluster.openshift.com:443/apis/authentication.k8s.io/v1/selfsubjectreviews": dial tcp 54.205.92.145:443: connect: connection refused
eventually.go:104: Failed to get *v1.SelfSubjectReview: Post "https://api-control-plane-upgrade-jp5hv.service.ci.hypershift.devcluster.openshift.com:443/apis/authentication.k8s.io/v1/selfsubjectreviews": dial tcp 98.94.241.201:443: connect: connection refused
eventually.go:104: Failed to get *v1.SelfSubjectReview: Post "https://api-control-plane-upgrade-jp5hv.service.ci.hypershift.devcluster.openshift.com:443/apis/authentication.k8s.io/v1/selfsubjectreviews": dial tcp 54.205.92.145:443: connect: connection refused
eventually.go:104: Failed to get *v1.SelfSubjectReview: Post "https://api-control-plane-upgrade-jp5hv.service.ci.hypershift.devcluster.openshift.com:443/apis/authentication.k8s.io/v1/selfsubjectreviews": dial tcp 98.94.241.201:443: connect: connection refused
eventually.go:104: Failed to get *v1.SelfSubjectReview: Post "https://api-control-plane-upgrade-jp5hv.service.ci.hypershift.devcluster.openshift.com:443/apis/authentication.k8s.io/v1/selfsubjectreviews": dial tcp 52.2.54.30:443: connect: connection refused
eventually.go:104: Failed to get *v1.SelfSubjectReview: Post "https://api-control-plane-upgrade-jp5hv.service.ci.hypershift.devcluster.openshift.com:443/apis/authentication.k8s.io/v1/selfsubjectreviews": dial tcp 98.94.241.201:443: connect: connection refused
eventually.go:104: Failed to get *v1.SelfSubjectReview: Post "https://api-control-plane-upgrade-jp5hv.service.ci.hypershift.devcluster.openshift.com:443/apis/authentication.k8s.io/v1/selfsubjectreviews": dial tcp 54.205.92.145:443: connect: connection refused
util.go:360: Successfully waited for a successful connection to the guest API server in 2m27.025s
util.go:542: Successfully waited for 2 nodes to become ready in 22m36s
util.go:575: Successfully waited for HostedCluster e2e-clusters-pqxlm/control-plane-upgrade-jp5hv to rollout in 2m15s
util.go:2896: Successfully waited for HostedCluster e2e-clusters-pqxlm/control-plane-upgrade-jp5hv to have valid conditions in 12s
TestUpgradeControlPlane/ValidateHostedCluster/EnsureNoCrashingPods
30ms
util.go:281: Successfully waited for kubeconfig to be published for HostedCluster e2e-clusters-pqxlm/control-plane-upgrade-jp5hv in 0s
util.go:298: Successfully waited for kubeconfig secret to have data in 0s
util.go:759: Container manager in pod capi-provider-9b699cbdd-22p9p has a restartCount > 0 (2)