Helm: kube-system tiller-deploy-f9b8476d-zkln4 stuck Pending due to failed scheduling

Created on 6 Jun 2018  ·  3 Comments  ·  Source: helm/helm


How can I fix this?

Output of helm version:
Client: &version.Version{SemVer:"v2.9.1", GitCommit:"20adb27c7c5868466912eebdf6664e7390ebe710", GitTreeState:"clean"}

Output of kubectl version:
Client Version: version.Info{Major:"1", Minor:"10", GitVersion:"v1.10.3", GitCommit:"2bba0127d85d5a46ab4b778548be28623b32d0b0", GitTreeState:"clean", BuildDate:"2018-05-21T09:17:39Z", GoVersion:"go1.9.3", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"10", GitVersion:"v1.10.3", GitCommit:"2bba0127d85d5a46ab4b778548be28623b32d0b0", GitTreeState:"clean", BuildDate:"2018-05-21T09:05:37Z", GoVersion:"go1.9.3", Compiler:"gc", Platform:"linux/amd64"}

Cloud Provider/Platform (AKS, GKE, Minikube etc.):
Minikube

kubectl -n kube-system describe pod tiller-deploy-f9b8476d-zkln4

Name:           tiller-deploy-f9b8476d-zkln4
Namespace:      kube-system
Node:
Labels:         app=helm
                name=tiller
                pod-template-hash=95640328
Annotations:
Status:         Pending
IP:
Controlled By:  ReplicaSet/tiller-deploy-f9b8476d
Containers:
  tiller:
    Image:       gcr.io/kubernetes-helm/tiller:v2.9.1
    Ports:       44134/TCP, 44135/TCP
    Host Ports:  0/TCP, 0/TCP
    Liveness:    http-get http://:44135/liveness delay=1s timeout=1s period=10s #success=1 #failure=3
    Readiness:   http-get http://:44135/readiness delay=1s timeout=1s period=10s #success=1 #failure=3
    Environment:
      TILLER_NAMESPACE:   kube-system
      TILLER_HISTORY_MAX: 0
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-skl9j (ro)
Conditions:
  Type           Status
  PodScheduled   False
Volumes:
  default-token-skl9j:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  default-token-skl9j
    Optional:    false
QoS Class:       BestEffort
Node-Selectors:
Tolerations:     node.kubernetes.io/not-ready:NoExecute for 300s
                 node.kubernetes.io/unreachable:NoExecute for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Warning FailedScheduling 2m (x91 over 28m) default-scheduler 0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.
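The FailedScheduling event says the single node carries a taint the pod does not tolerate. Before changing anything, you can inspect which taints are present; a minimal check might look like this (the node name `minikube` is the usual default for a Minikube cluster, but confirm it with `kubectl get nodes` first):

```shell
# List every node together with its taint keys; the taint named here is
# what the scheduler is refusing to place the tiller pod onto.
kubectl get nodes -o custom-columns='NAME:.metadata.name,TAINTS:.spec.taints[*].key'

# Or inspect the node directly (node name is an assumption for this setup).
kubectl describe node minikube | grep -i -A 2 taints
```

On a single-node cluster initialized as a control-plane node, this typically shows `node-role.kubernetes.io/master` with effect `NoSchedule`.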

Labels: question, support

Most helpful comment

allow master to run pods:
kubectl taint nodes --all node-role.kubernetes.io/master-

All 3 comments

Not sure, but your node appears to be tainted (the event mentions a taint the pod doesn't tolerate), so the scheduler cannot place pods on it.

allow master to run pods:
kubectl taint nodes --all node-role.kubernetes.io/master-
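The command above removes the master `NoSchedule` taint from all nodes (the trailing `-` means "remove this taint"), which lets a single-node cluster schedule regular workloads on the control-plane node. A minimal sketch of applying the fix and confirming the tiller pod gets scheduled:

```shell
# Remove the master taint from every node so workloads can schedule there.
# The trailing "-" after the taint key tells kubectl to delete the taint.
kubectl taint nodes --all node-role.kubernetes.io/master-

# Watch the tiller pod leave Pending and become Running/Ready.
kubectl -n kube-system get pods -l name=tiller -w
```

Note that on clusters meant to run real workloads, removing the master taint is a trade-off: it is fine for a single-node dev setup like Minikube, but on multi-node clusters you would normally leave the taint in place and keep workloads off the control plane.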

Thanks so much! It works.
