As per the comments in:
https://github.com/kubernetes/kubernetes/issues/34307
you should be able to edit /etc/kubernetes/manifests/kube-apiserver.json
to add the arguments you need (like the args from http://kubernetes.io/docs/admin/kube-apiserver/).
In K8s / kubeadm 1.4 I can do this without issue; editing the manifest auto-updates the kube-apiserver.
But in 1.6, ANY edit to the manifest fails to restart the api-server.
TO VERIFY:
install a k8s cluster via the kubeadm guide: http://kubernetes.io/docs/getting-started-guides/kubeadm/
install Weave: https://www.weave.works/docs/net/latest/kube-addon/
kubectl get nodes and note the correct output.
netstat -tulpn and note kube-apiserver on 6443.
vi /etc/kubernetes/manifests/kube-apiserver.json and add a valid arg to kube-apiserver, e.g. --service-node-port-range=30000-32767.
Save and quit. kubectl get nodes will not work anymore.
As I said, in kubeadm 1.4 this wasn't an issue: I could edit the manifest and it worked fine with the new manifest information... just not in the latest (1.6).
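For anyone repeating the steps above, it helps to keep a revert path before touching the manifest. A hedged sketch of that workflow (the backup path is my choice, not from the thread; the backup deliberately goes OUTSIDE the manifests directory, since the kubelet treats every file in that directory as a static pod manifest and a stray .bak or editor swap file left there can itself cause confusion):

```shell
# Back up the static-pod manifest before editing, so the edit can be
# reverted quickly if the apiserver does not come back up.
# Guarded so it is safe to dry-run on a machine without the kubeadm layout.
SRC=${SRC:-/etc/kubernetes/manifests/kube-apiserver.json}
BAK=${BAK:-/root/kube-apiserver.json.bak}   # NOT inside /etc/kubernetes/manifests
if [ -f "$SRC" ]; then
  cp "$SRC" "$BAK" && echo "backed up $SRC to $BAK"
else
  echo "no manifest at $SRC (not on a kubeadm master?)"
fi
```

To revert after a failed edit, copy the backup over the manifest and let the kubelet re-sync it.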
What does `kubeadm version` output?
@luxas
kubeadm version: version.Info{Major:"1", Minor:"6+", GitVersion:"v1.6.0-alpha.0.2074+a092d8e0f95f52", GitCommit:"a092d8e0f95f5200f7ae2cba45c75ab42da36537", GitTreeState:"clean", BuildDate:"2016-12-13T17:03:18Z", GoVersion:"go1.7.4", Compiler:"gc", Platform:"linux/amd64"}
It seems adding ANY new argument breaks the api-server.
Default apiserver args:
```
"command": [
"kube-apiserver",
"--insecure-bind-address=127.0.0.1",
"--admission-control=NamespaceLifecycle,LimitRanger,ServiceAccount,PersistentVolumeLabel,DefaultStorageClass,ResourceQuota",
"--service-cluster-ip-range=10.96.0.0/12",
"--service-account-key-file=/etc/kubernetes/pki/apiserver-key.pem",
"--client-ca-file=/etc/kubernetes/pki/ca.pem",
"--tls-cert-file=/etc/kubernetes/pki/apiserver.pem",
"--tls-private-key-file=/etc/kubernetes/pki/apiserver-key.pem",
"--token-auth-file=/etc/kubernetes/pki/tokens.csv",
"--secure-port=6443",
"--allow-privileged",
"--advertise-address=172.30.2.157",
"--kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname",
"--anonymous-auth=false",
"--etcd-servers=http://127.0.0.1:2379"
]
```
If I add any valid arg found in the documentation (http://kubernetes.io/docs/admin/kube-apiserver/), e.g. --enable-swagger-ui or --service-node-port-range, it will not work:
```
"kube-apiserver",
"--insecure-bind-address=127.0.0.1",
"--admission-control=NamespaceLifecycle,LimitRanger,ServiceAccount,PersistentVolumeLabel,DefaultStorageClass,ResourceQuota",
"--service-cluster-ip-range=10.96.0.0/12",
"--service-account-key-file=/etc/kubernetes/pki/apiserver-key.pem",
"--client-ca-file=/etc/kubernetes/pki/ca.pem",
"--tls-cert-file=/etc/kubernetes/pki/apiserver.pem",
"--tls-private-key-file=/etc/kubernetes/pki/apiserver-key.pem",
"--token-auth-file=/etc/kubernetes/pki/tokens.csv",
"--secure-port=6443",
"--enable-swagger-ui", <---- note the added stuff
"--allow-privileged",
"--advertise-address=172.30.2.157",
"--kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname",
"--anonymous-auth=false",
``` "--etcd-servers=http://127.0.0.1:2379"
Look at the kubelet logs.
And look at the kube-apiserver logs as well (`docker ps -a -n 10` and grab some logs from the latest apiserver container that ran and failed).
@luxas, after the edit it doesn't seem to run the kube-apiserver (it's missing).
It ran originally; after removing the newly added arg, I can see the kube-apiserver started up fresh.
P.S. I added the flag --profiling=false to test.
What about the kubelet logs?
@mikedanese where are the kubelet logs?
docker ps and /var/log don't seem to show any location for them.
You might need to use journalctl.
@mikedanese journalctl | grep kubelet | grep "kube-api" doesn't output anything.
journalctl | grep kubelet gives a very, very long output with nothing glaring standing out.
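When the raw journal is that noisy, narrowing it to kubelet entries that mention the apiserver static pod or the manifests directory can help. A sketch (the grep patterns are illustrative guesses at relevant message text, not canonical strings):

```shell
# Show recent kubelet journal entries that mention the apiserver static pod
# or the manifests directory; adjust the patterns to whatever actually
# appears in your journal.
journalctl -u kubelet --no-pager 2>/dev/null \
  | grep -Ei 'apiserver|/etc/kubernetes/manifests|mirror pod' \
  | tail -n 40
```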
journalctl -u kubelet
I'd love to see that log if possible.
Ah sorry, I messed up. After editing the manifest file, here is what I am seeing:
```
Jan 11 18:18:41 ip-172-30-2-157 kubelet[8999]: E0111 18:18:41.510541 8999 mirror_client.go:87] Failed deleting a mirror pod "kube-apiserver-ip-172-30-2-157_kube-system": dial tcp 172.30.2.157:6443: getsockopt: connection refused
```
journalctl -u kubelet:
```
Jan 11 15:14:13 ip-172-30-2-157 kubelet[8999]: E0111 15:14:13.605044 8999 docker_manager.go:2201] Failed to setup network for pod "kube-dns-2924299975-859wf_kube-system(2b87663f-d765-11e6-b673-0a23033b
Jan 11 15:14:13 ip-172-30-2-157 kubelet[8999]: E0111 15:14:13.724695 8999 pod_workers.go:184] Error syncing pod 2b87663f-d765-11e6-b673-0a23033b1097, skipping: failed to "SetupNetwork" for "kube-dns-29
Jan 11 15:14:14 ip-172-30-2-157 kubelet[8999]: I0111 15:14:14.164545 8999 operation_executor.go:917] MountVolume.SetUp succeeded for volume "kubernetes.io/secret/2b87663f-d765-11e6-b673-0a23033b1097-de
[... the same three messages repeat each second through 15:14:19 ...]
Jan 11 15:14:19 ip-172-30-2-157 kubelet[8999]: E0111 15:14:19.961206 8999 cni.go:163] error updating cni config: No networks found in /etc/cni/net.d
[... repetition continues ...]
Jan 11 15:14:23 ip-172-30-2-157 kubelet[8999]: I0111 15:14:23.596545 8999 operation_executor.go:917] MountVolume.SetUp succeeded for volume "kubernetes.io/secret/2a52295a-d765-11e6-b673-0a23033b1097-de
Jan 11 15:14:23 ip-172-30-2-157 kubelet[8999]: I0111 15:14:23.598566 8999 operation_executor.go:917] MountVolume.SetUp succeeded for volume "kubernetes.io/secret/2a52295a-d765-11e6-b673-0a23033b1097-cl
[... same pattern through 15:14:28 ...]
Jan 11 15:14:28 ip-172-30-2-157 kubelet[8999]: E0111 15:14:28.829092 8999 pod_workers.go:184] Error syncing pod 2b87663f-d765-11e6-b673-0a23033b1097, skipping: failed to "SetupNetwork" for "kube-dns-29
```
@luxas @mikedanese it seems it's something with the 1.6 changes:
https://www.opentest.co/share/aa1d5390d82f11e6a7cb6f2926412bab
Note in the above video: same AMI (that previously had 1.5 kubeadm); one gets upgraded, the other doesn't.
I have exactly the same problem. I am using kubeadm 1.5.1
I'm hitting the same with kubeadm 1.5.1. It looks like this is a duplicate of #25, but that was closed because kubeadm init added command-line options to address the requester's particular issue. I don't think that's a scalable solution for all the edits you may want to make to kube-apiserver, though.
@ravishivt yeah, it seems like the PR from https://github.com/kubernetes/kubernetes/pull/34719 only adds a few flags at startup, and not the full range of kube-apiserver args.
Please reopen if this happens with v1.6 as well
Dear all, does the kubelet not support swagger-ui since v1.7?
I added --enable-swagger-ui=true to the kubelet startup args and got unknown flag: --enable-swagger-ui.
After checking the kubelet usage, --enable-swagger-ui has disappeared.
@gogeof That's totally unrelated to this issue.
kubelet has never had that flag, it's the API server that has it: https://kubernetes.io/docs/admin/kube-apiserver/
@luxas Yes, my fault, --enable-swagger-ui is an apiserver flag. Thanks a lot.
@luxas this is still happening for me with kubeadm and Kubernetes 1.10
@luxas Same issue with 1.9
@luxas I am having the same issue with Kubernetes 1.10.1.
Can you please re-open this or give us some pointers to work around it.
Thanks
Never mind, this seems to be more related to 'in place editing of manifests' https://github.com/kubernetes/kubernetes/issues/48219
Same here...
Never mind, this seems to be more related to 'in place editing of manifests' kubernetes/kubernetes#48219
Right, this is not a kubeadm issue
I am on Kubernetes 1.11 and I have the same issue, or worse: any change (even of timestamp) to /etc/kubernetes/manifests/kube-apiserver.yaml causes the api server to die instantly and never come up again without a kubeadm init. Editing in place (vim or nano), editing elsewhere and copying/moving over, or even just touch: same destructive effect. Now, is there a way to configure the defaults used by kubeadm init when re-creating this file? Thanks
@debugnetiq1 Can you check journalctl -u kubelet -f for any potential errors about why the api-server never comes up again? I am using kubeadm 1.11 and it is able to restart the apiserver when I modify /etc/kubernetes/manifests/kube-apiserver.yaml with vim.
I have the same issue with Kubernetes 1.9 created by kops.
Running Kubernetes 1.11.
Modifying the kube-apiserver.yaml manifest for OIDC and the container won't come up; nothing is logged about creating the container in the kubelet.
I modified the command section to include:
```
- --oidc-client-id="spn:APISERVER_APPLICATION_ID"
- --oidc-issuer-url="https://sts.windows.net/TENANT_ID/"
- --oidc-username-claim="sub"
```
The kubelet log just spams a bunch of these lines:
```
Aug 30 15:26:20 kube-master kubelet[3130]: W0830 15:26:20.692585 3130 status_manager.go:482] Failed to get status for pod "kube-apiserver-kube-master_kube-system(8346e66f1d33d589bf50e73662958f6b)": Get https://192.168.1.91:6443/api/v1/namespaces/kube-system/pods/kube-apiserver-kube-master: dial tcp 192.168.1.91:6443: connect: connection refused
```
container "startedAt": "2018-08-30T07:31:11Z"
The pod exists and never goes down, but it's not bringing up the api container in docker:
```
1b94381a720c k8s.gcr.io/pause:3.1 "/pause" 22 seconds ago Up 20 seconds k8s_POD_kube-apiserver-kube-master_kube-system_b64ac616fbafbe2805e8ad4b65fa8c91_0
```
Modified with nano; also tried to copy in from elsewhere, but that doesn't matter.
If I comment out the commands I added earlier, it comes back up (the manifest is still edited).
Further investigation of the failed api container:
```
ubuntu@kube-master:/etc/kubernetes/manifests$ sudo docker logs $(sudo docker ps -a | grep k8s_kube-apiserver | awk '{print $1}')
Flag --insecure-port has been deprecated, This flag will be removed in a future version.
I0903 02:34:59.153592 1 server.go:703] external host was not specified, using 192.168.1.91
I0903 02:34:59.153921 1 server.go:145] Version: v1.11.0
Error: invalid authentication config: parse "https://sts.windows.net/ID/": first path segment in URL cannot contain colon
```
Fixed by removing the "" around the flag values.
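That parse error is consistent with the literal double quotes in the YAML args being passed straight through to kube-apiserver: the value it receives begins with "https:, so no URL scheme is recognized and the https: part is treated as a path segment containing a colon. A small demonstration of that parsing behavior, using Python's URL parser as a stand-in for Go's net/url:

```shell
# With the literal quotes included (as the YAML line passes them through),
# no scheme is recognized; without them, the scheme parses as https.
python3 - <<'EOF'
from urllib.parse import urlparse

quoted   = '"https://sts.windows.net/TENANT_ID/"'  # as written in the manifest
unquoted = 'https://sts.windows.net/TENANT_ID/'

print('quoted scheme:  ', repr(urlparse(quoted).scheme))
print('unquoted scheme:', repr(urlparse(unquoted).scheme))
EOF
```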