Unable to connect to the server: dial tcp i/o timeout

This tutorial shows how to fix the Kubernetes error "Unable to connect to the server: dial tcp i/o timeout", which can appear while running a demo app using the kubectl command.

I built and pushed my first container image, and then tried to run the demo image using the kubectl command with the following arguments:

$ kubectl run demo --image=sneppets/myhello --port=9999 --labels app=demo
Unable to connect to the server: dial tcp 35.225.192.78:443: i/o timeout

Instead of the pod being created, I got the error "Unable to connect to the server: dial tcp i/o timeout". Ideally, the response should have been:

pod/demo created

Troubleshooting: dial tcp i/o timeout error
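
The "dial tcp ... i/o timeout" part of the message means that kubectl could not even open a TCP connection to the Kubernetes API server at 35.225.192.78:443, so this is a network-level failure rather than an authentication problem. As a quick sanity check (assuming curl is installed), you can probe the endpoint directly; if the cluster behind that address is gone, this request will time out as well.

$ curl -k --max-time 5 https://35.225.192.78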

First, I checked the kubectl config by running the following command:

$ kubectl config view

apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: DATA+OMITTED
    server: https://35.225.192.78
  name: gke_poised-shift-300712_us-central1-c_mob-viewer
contexts:
- context:
    cluster: gke_poised-shift-300712_us-central1-c_mob-viewer
    user: gke_poised-shift-300712_us-central1-c_mob-viewer
  name: gke_poised-shift-300712_us-central1-c_mob-viewer
current-context: gke_poised-shift-300712_us-central1-c_mob-viewer
kind: Config
preferences: {}
users:
- name: gke_poised-shift-300712_us-central1-c_mob-viewer
  user:
    auth-provider:
      config:
        access-token: DATA+OMITTED
        cmd-args: config config-helper --format=json
        cmd-path: /usr/lib/google-cloud-sdk/bin/gcloud
        expiry: "2021-02-11T06:06:09Z"
        expiry-key: '{.credential.token_expiry}'
        token-key: '{.credential.access_token}'
      name: gcp
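
The full config dump can be long. To see only the context that kubectl is currently pointing at, you can run:

$ kubectl config current-context

gke_poised-shift-300712_us-central1-c_mob-viewer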

From the above output I understood that even after tearing down the cluster, the old entries were not deleted from kubeconfig: kubectl was still pointing at the API server 35.225.192.78 of a cluster that no longer exists, which is why the connection timed out. Therefore, I followed the steps mentioned in the tutorial Delete or unset clusters contexts and users entries from kubectl config.

$ kubectl config unset users.gke_poised-shift-300712_us-central1-c_mob-viewer

$ kubectl config unset contexts.gke_poised-shift-300712_us-central1-c_mob-viewer

$ kubectl config unset clusters.gke_poised-shift-300712_us-central1-c_mob-viewer
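
To confirm that the torn-down cluster really no longer exists in the project, you can also list the clusters that gcloud knows about; the old mob-viewer cluster should not appear in the output.

$ gcloud container clusters list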

Then I reset current-context manually using the vi editor, setting its value to "" (an empty string).

$ vi /home/sneppets/.kube/config
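
Alternatively, instead of editing the file by hand, current-context can be cleared with the same unset subcommand:

$ kubectl config unset current-context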

Check whether all the entries are unset, as shown below.

$ kubectl config view

apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

Autogenerate kubeconfig

If you don't generate a valid kubeconfig specific to your project settings, you might get the Kubernetes error "did you specify the right host or port?". Generate new kubeconfig entries for your cluster as shown below.

$ gcloud container clusters get-credentials --zone us-central1-f myhello

Fetching cluster endpoint and auth data.
kubeconfig entry generated for myhello.
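
Note that get-credentials generates the entry for whichever project is currently active in gcloud. If in doubt, you can pass the project ID explicitly (here the ID visible in the context names above):

$ gcloud container clusters get-credentials myhello --zone us-central1-f --project poised-shift-300712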

To check the generated entries, run the following command:

$ kubectl config view

apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: DATA+OMITTED
    server: https://34.66.18.243
  name: gke_poised-shift-300712_us-central1-f_myhello
contexts:
- context:
    cluster: gke_poised-shift-300712_us-central1-f_myhello
    user: gke_poised-shift-300712_us-central1-f_myhello
  name: gke_poised-shift-300712_us-central1-f_myhello
current-context: gke_poised-shift-300712_us-central1-f_myhello
kind: Config
preferences: {}
users:
- name: gke_poised-shift-300712_us-central1-f_myhello
  user:
    auth-provider:
      config:
        cmd-args: config config-helper --format=json
        cmd-path: /usr/lib/google-cloud-sdk/bin/gcloud
        expiry-key: '{.credential.token_expiry}'
        token-key: '{.credential.access_token}'
      name: gcp
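
Optionally, you can verify that the new credentials work before deploying anything, for example by listing the cluster nodes; they should show a Ready status.

$ kubectl get nodes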

Once the kubeconfig entries for myhello are generated, try running the demo app again. You should not see any further errors:

$ kubectl run demo --image=sneppets/myhello --port=9999 --labels app=demo

pod/demo created

Running the demo app

$ kubectl get pods

NAME   READY   STATUS    RESTARTS   AGE
demo   1/1     Running   0          17m
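
If the pod shows a status like ContainerCreating instead of Running, you can watch it until it becomes ready:

$ kubectl get pods --watch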

$ kubectl port-forward pod/demo 9999:8888

Forwarding from 127.0.0.1:9999 -> 8888
Handling connection for 9999
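
With the port-forward active, you can test the app from another terminal. Assuming the demo container serves HTTP on its port 8888 (which the port-forward maps from local port 9999), a request to the forwarded port should return the app's response.

$ curl http://localhost:9999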

That's it! You have learnt how to troubleshoot and fix the Kubernetes error "Unable to connect to the server: dial tcp i/o timeout".

Hope it helped 🙂
