
Delete or unset clusters, contexts, and users entries from kubectl config

This tutorial shows you how to selectively delete or unset clusters, contexts, and users entries from the kubectl config. I use Google Cloud GKE to demonstrate how to delete clusters and other entries from the kubectl config.

Delete or unset clusters, contexts, and users entries from kubectl config

This is a common scenario: the kubectl config contains entries for clusters, contexts, and users that you would like to delete. Stale entries can cause problems; for example, when you try to run an app on Kubernetes you might face the Kubernetes error "did you specify the right host or port?".

Errors like that are usually related to a broken or stale kubeconfig, and you can fix them by cleaning up the config as described below.
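
When no valid cluster entry is configured, that error typically looks something like this (the exact host and port depend on your setup):

$ kubectl get pods

The connection to the server localhost:8080 was refused - did you specify the right host or port?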

Before you perform any delete or unset operation, you first need to know which entries exist in the kubectl config. Run the following kubectl command to view the config; it lists the entries for clusters, contexts, and users, as shown below.

$ kubectl config view

apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: DATA+OMITTED
    server: https://35.225.192.78
  name: gke_poised-shift-300712_us-central1-c_mob-viewer
contexts:
- context:
    cluster: gke_poised-shift-300712_us-central1-c_mob-viewer
    user: gke_poised-shift-300712_us-central1-c_mob-viewer
  name: gke_poised-shift-300712_us-central1-c_mob-viewer
current-context: gke_poised-shift-300712_us-central1-c_mob-viewer
kind: Config
preferences: {}
users:
- name: gke_poised-shift-300712_us-central1-c_mob-viewer
  user:
    auth-provider:
      config:
        access-token: ya29.A0AfH6SMAisOX_tYb3y8mIhJYwRdsb6k134QchHvJPCGfUirQ_hgFCnNR8S9xDEm-j2mW5Od-OSuWmVs7ekSW7tVCwQME9YrULr9cxit0I8y3vZWS1D5BoPZUVBYUdHKDiUUe3EeNQhe9McYu76RZMrpNGMkh9JMbDkbtnmR971DM3-dfMX6-3JX_SDNaPL_z3wzzoSPqjTBkWU_U9y9dmp2INVHrpqYdDHpdE3JlaH53vgVce01KTnD-Oilnb2yNX5Ks-_2g
        cmd-args: config config-helper --format=json
        cmd-path: /usr/lib/google-cloud-sdk/bin/gcloud
        expiry: "2021-02-11T06:06:09Z"
        expiry-key: '{.credential.token_expiry}'
        token-key: '{.credential.access_token}'
      name: gcp
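
If you only want the entry names rather than the full config, one option is JSONPath output; a quick sketch based on the config above:

$ kubectl config view -o jsonpath='{.clusters[*].name}'
gke_poised-shift-300712_us-central1-c_mob-viewer

$ kubectl config view -o jsonpath='{.users[*].name}'
gke_poised-shift-300712_us-central1-c_mob-viewer

You can also run kubectl config get-contexts to see the contexts in a table.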

Let’s say you would like to unset the users entry for gke_poised-shift-300712_us-central1-c_mob-viewer; run the following command.

$ kubectl config unset users.gke_poised-shift-300712_us-central1-c_mob-viewer

Property "users.gke_poised-shift-300712_us-central1-c_mob-viewer" unset.

To unset the contexts entry, run the following command.

$ kubectl config unset contexts.gke_poised-shift-300712_us-central1-c_mob-viewer

Property "contexts.gke_poised-shift-300712_us-central1-c_mob-viewer" unset.

Run the following command to unset or delete the clusters entry.

$ kubectl config unset clusters.gke_poised-shift-300712_us-central1-c_mob-viewer

Property "clusters.gke_poised-shift-300712_us-central1-c_mob-viewer" unset.

Ideally, when you shut down and clean up your clusters, the associated entries in the kubectl config should be deleted for you. If they are not, follow the steps above to delete the entries from the kubeconfig.
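
For example, on GKE, deleting the cluster itself with gcloud should also remove its kubeconfig entries. A sketch, assuming the cluster behind the entries above is named mob-viewer in zone us-central1-c:

$ gcloud container clusters delete mob-viewer --zone us-central1-c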

Autogenerate kubeconfig entries

Once you have unset or deleted the unwanted entries from the kubeconfig, you can regenerate the required config for your Kubernetes cluster using the following gcloud command.

$ gcloud container clusters get-credentials --zone us-central1-f myhello

Fetching cluster endpoint and auth data.
kubeconfig entry generated for myhello.
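
If your active gcloud configuration points at a different project, you can pass the project explicitly. For example, using the project ID visible in the entry names above:

$ gcloud container clusters get-credentials myhello --zone us-central1-f --project poised-shift-300712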

Now the entry is generated. You can verify it by running the kubectl config view command again and checking that the entries are valid.
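
A quick way to confirm the new entry works (the context name below follows GKE's gke_PROJECT_ZONE_CLUSTER naming convention, so yours may differ):

$ kubectl config current-context
gke_poised-shift-300712_us-central1-f_myhello

$ kubectl get nodes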

Hope it helped 🙂
