Error: context deadline exceeded #2409
Comments
Could you get us the tiller logs for when this happened?
I'm also seeing exactly the same issue - no logs at all from tiller, except the standard start up logs:
I'm also seeing this with Tiller v2.4.2 with Helm v2.4.2. Let me know if there's any other info you need...
Full logs attached. Note: I'm running 1.6.2, not 1.5.x, here and getting the same issue. The issue also occurs with Tiller 2.4.2 and Helm 2.3.1.

bash-4.3# kubectl get no
NAME STATUS AGE VERSION
gke-gs-staging-gke-default-pool-8a9f998e-ghwc Ready 2d v1.6.2
bash-4.3# kubectl get po -n kube-system
NAME READY STATUS RESTARTS AGE
fluentd-gcp-v2.0-j2xff 1/1 Running 0 2d
heapster-v1.3.0-3440173064-bz5jr 2/2 Running 0 2d
kube-dns-3263495268-0hn0m 3/3 Running 0 2d
kube-dns-autoscaler-2362253537-dv7d7 1/1 Running 0 2d
kube-proxy-gke-gs-staging-gke-default-pool-8a9f998e-ghwc 1/1 Running 0 2d
kubernetes-dashboard-490794276-9ddww 1/1 Running 0 2d
l7-default-backend-3574702981-dhj0x 1/1 Running 0 2d
tiller-deploy-1651596238-v57zs 1/1 Running 0 9m
bash-4.3# kubectl logs -n kube-system tiller-deploy-1651596238-v57zs
Starting Tiller v2.4.2 (tls=false)
GRPC listening on :44134
Probes listening on :44135
Storage driver is ConfigMap
bash-4.3# helm version --debug
[debug] Created tunnel using local port: '36495'
[debug] SERVER: "localhost:36495"
Client: &version.Version{SemVer:"v2.4.2", GitCommit:"82d8e9498d96535cc6787a6a9194a76161d29b4c", GitTreeState:"clean"}
[debug] context deadline exceeded
Error: cannot connect to Tiller
bash-4.3# kubectl logs -n kube-system tiller-deploy-1651596238-v57zs
Starting Tiller v2.4.2 (tls=false)
GRPC listening on :44134
Probes listening on :44135
Storage driver is ConfigMap
I'm actually seeing this with all clusters, not just GKE. I've also noticed Helm works fine from my OS X box to the same cluster. It seems not to work when run in an Alpine Docker container; if I try the same process in a Debian container, it works fine. The Dockerfile in question that produces a Helm that doesn't work:
Can anyone else replicate this using the above image? The tiller server itself is not receiving any requests at all; however, the local port does seem to be forwarded fine, as I'm able to make a request using
In this case, one is a brand new 4-node GKE cluster (n1-standard-1) and the other a single-node bare-metal cluster with about 40 pods (where Tiller is running).
Tiller itself isn't receiving a request at all when helm is run in an Alpine container, but it works fine on macOS and within a Debian container.
Those issues appear unrelated, except for the reference to "context deadline exceeded", which is a common error message in Go coming from the context package (https://godoc.org/context#DeadlineExceeded).
…On Fri, 19 May 2017 at 00:16, Taylor Thomas ***@***.***> wrote:
@munnerz I am seeing a few related issues in Kubernetes (kubernetes/kubernetes#27388, kubernetes/kubernetes#39028, kubernetes/kubernetes#42164) that appear to be related to Docker, possibly due to the number of pods running.
OK, I'll dig into this bug next after I finish the one I am working on.
@munnerz I tried this with a non-GKE k8s and didn't have a problem. I don't have a GKE cluster to test against right now, though.
Same error here on GKE:
I'm seeing the same issue on Azure. Output:
No tiller logs were available after the call to
This is just on a 2-node cluster (one master, one node). I'll see if restarting the docker daemon on the node or just nuking the node from orbit will fix this issue.
I hit this error in my cluster too, non-GKE. Host:
Docker version:
Error:
K8s env:
Others:
Same!
Encountering the same problem on AWS with a kops 1.6.2 install using RBAC.
This issue was fixed in #2664 and is being shipped in v2.5.1.
I'm still having this issue even when using the latest canary release, which also seems to have some argument-parsing errors. Here's my kubectl output:
Helm output from latest canary release:
Can you open a new issue for that, please? You might also want to test #2682. It sounds like a fix may or may not be underway for v2.5.1.
This issue was fixed in v2.5.1. Closing!
I can confirm that this is an issue in helm 2.5.0
Looks like we need to upgrade
I can reproduce this with 2.5.1 too :(
The issues I had above were caused by not having registered the node names in local DNS. The fix was to add
I can confirm that this is an issue in helm 2.6.0:

[debug] Original chart version: ""
[debug] CHART PATH: /root/.helm/cache/archive/redis-0.9.0.tgz
Error: context deadline exceeded

k8s version is:
Still an issue for me in 2.6.1 as well.
Is anyone experiencing this error after installing Helm with Homebrew? I'm seeing success when installing directly with the "From Script" method in the wiki.
FYI @onlydole, I made a comment above that might help give you more insight into the error. I don't think the difference between the CLI installed with Homebrew and with the install script will matter too much, but it's definitely an interesting data point, since Homebrew builds from source whereas the install script uses the pre-built binary CircleCI releases for every tag. If anyone is able to check whether Homebrew vs. the install script makes any difference, that would be helpful to know!
I get this error too. My configuration is below.

K8s version: 1.7.7

helm version --host 10.245.112.10 --debug
Client: &version.Version{SemVer:"v2.7.0", GitCommit:"08c1144f5eb3e3b636d9775617287cc26e53dba4", GitTreeState:"clean"}
@veeresh1982 can you check the tiller pod’s status using kubectl?
@veeresh1982 - If you installed via Homebrew, can you try deleting and installing directly like this? https://github.com/kubernetes/helm/blob/master/docs/install.md#from-script Curious to see if that binary is the issue.
OK, here is what I did to resolve it (I am going to try @philchristensen's solution too and update).

A) Prep on all master nodes: following #1455, updated /etc/hosts.

B) From another remote Ubuntu machine (10.67.141.140), logged in as root:

root@veeresh-spinnakerbuild:/home/ubuntu# helm ls --debug
root@veeresh-spinnakerbuild:/home/ubuntu# helm version --debug

Followed this link to install a sample hello-world app using helm: https://hackernoon.com/the-missing-ci-cd-kubernetes-component-helm-package-manager-1fe002aac680

cd /home/ubuntu/hello-world
root@veeresh-spinnakerbuild:/home/ubuntu/hello-world# helm list --debug
[debug] SERVER: "localhost:35190"
NAME REVISION UPDATED STATUS CHART NAMESPACE

Logged into cbu-dev master0 node:
NAMESPACE NAME READY STATUS RESTARTS AGE
Tested @philchristensen's solution. It works too, and with it there's no need to update the /etc/hosts file on the master side.
Just as another input to this: I was experiencing this issue intermittently when deploying. Whenever the context deadline error appeared, I could try again, and usually by the second or third try it would work. We have a VPN directly into our cluster, so I tried adding the
Hi, I'm seeing the same issue:
Also, the tiller pod and service are running fine.
Any suggestions? @munnerz, I saw you had the same issue!
I resolved the above issue by setting NO_PROXY in the Alpine container. It seems to be because the Go libraries involved don't honor no_proxy otherwise. Example:
@sefm Setting the NO_PROXY environment variable in the shell of the Alpine Linux container doesn't work for me; helm still fails with the same error. 'kubectl get all' works, on the other hand.
I was running into this issue as well, running Helm from Ubuntu 16.04 LTS outside of a bare-metal cluster. I'm running Calico for networking and am exporting the svc cluster IPs so I can reach those directly. I wasn't seeing any kind of network connection created at all. Turns out, I had to add a port to

Now I'm up and running.
This didn't work for me.
This works:
Still seeing the original issue with v2.8.2, running Kubernetes in Docker for Mac (edge).
Regular Nothing exceptional in the tiller logs:
Facing this with 2.8.2 as well.
No related logs within the tiller container. cc @bacongobbler Thanks!
Same issue here with helm 2.8.2 on Azure AKS (Kubernetes 1.8.7), both using a brew installation and "from script".
#3715 is the fix for this. Could one of you please try that branch and see if it works for you? Thanks! |
@bacongobbler awesome! The patch works for me!
Reproduce:
@edwardstudy This was fixed in helm version 2.9.0. You need to upgrade. |
@rabbitfang Hi, I upgraded helm but got the same error:
Is it possible ./scripts/helm_mac is using a different client? You call both that script and helm directly in that example. |
Yes. ./scripts/helm_mac is 2.8.2 and helm command is 2.9.1. I use two versions in our script. |
That makes sense then... As it's been mentioned a few times in this thread now, the Helm 2.8.2 client does not have this fix. You need to use the 2.9 client if you want to use
@bacongobbler Sorry, I missed that.
I tried to install the latest Helm (v2.4.1) on my freshly created GKE cluster (with k8s 1.5.7), and it gives an error whenever I install any chart. Attached are the logs of my previous command execution.