Check the status of the available nodes; ds06 no longer appears in the list:
# kubectl get node
NAME                     STATUS   ROLES                       AGE   VERSION
ds01.ecs.openstack.com   Ready    control-plane,etcd,master   22d   v1.21.8+rke2r2
ds02.ecs.openstack.com   Ready    control-plane,etcd,master   22d   v1.21.8+rke2r2
ds03.ecs.openstack.com   Ready    control-plane,etcd,master   22d   v1.21.8+rke2r2
ds04.ecs.openstack.com   Ready    <none>                      22d   v1.21.8+rke2r2
ds05.ecs.openstack.com   Ready    <none>                      22d   v1.21.8+rke2r2
Pods stuck in the Terminating state can be removed from the apiserver once the failed node has been manually deleted.
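For example, you can look for pods still reported as Terminating and force-remove them. This is a minimal sketch; the namespace and pod name are placeholders you need to replace:

# kubectl get pods -A -o wide | grep Terminating
# kubectl delete pod <pod-name> -n <namespace> --force --grace-period=0

The first command lists any pods still shown as Terminating across all namespaces; the second forcibly removes a stuck pod object from the apiserver.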
4. Remove Node from ECS Cluster
In the Cloudera Manager Admin Console, go to Hosts > All Hosts. Select the hosts to delete.
Select Actions for Selected > Remove From Cluster. The Remove Hosts From Cluster dialog box displays.
Leave the default selections in place (decommission roles, and skip removing the Cloudera Management Service roles). Click Confirm to proceed with removing the selected hosts.
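If you prefer to script this step instead of using the Admin Console, the Cloudera Manager REST API can also remove a host from a cluster. The following is only a sketch under assumptions: the CM hostname, port, credentials, API version, cluster name, and the host ID of ds06 are placeholders for your environment, and unlike the wizard this call only removes the cluster association (it does not decommission roles first):

# curl -u admin:<password> "http://<cm-host>:7180/api/<api-version>/hosts"
# curl -u admin:<password> -X DELETE "http://<cm-host>:7180/api/<api-version>/clusters/<cluster-name>/hosts/<hostId-of-ds06>"

The first call lists hosts so you can look up the hostId of ds06.ecs.openstack.com; the second removes that host from the cluster.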
5. Remove Node from Cloudera Manager
In the Cloudera Manager Admin Console, go to Hosts > All Hosts. Select the hosts to delete.
Select Actions for Selected > Remove from Cloudera Manager.
Click Confirm to remove the failed host from Cloudera Manager.
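The same removal is available through the Cloudera Manager REST API if you are automating the cleanup; as above, the CM hostname, credentials, API version, and host ID are assumptions for your environment:

# curl -u admin:<password> -X DELETE "http://<cm-host>:7180/api/<api-version>/hosts/<hostId-of-ds06>"

This deletes the host entry from Cloudera Manager entirely.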
6. Destroy node ds06
If you want to replace the failed node with a new one, destroy node ds06 first, then follow the steps in the docs.
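If ds06 is a virtual machine (the hostnames here suggest an OpenStack environment, but that is an assumption), you can destroy the instance with the platform's CLI once it has been removed from Cloudera Manager, for example:

# openstack server delete --wait ds06

Replace ds06 with the actual instance name or ID; on other platforms, use the equivalent delete command.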