Bug report
kubeadm version (use `kubeadm version`):
kubeadm version: &version.Info{Major:"1", Minor:"10", GitVersion:"v1.10.0", GitCommit:"fc32d2f3698e36b93322a3465f63a14e9f0eaead", GitTreeState:"clean", BuildDate:"2018-03-26T16:44:10Z", GoVersion:"go1.9.3", Compiler:"gc", Platform:"linux/amd64"}
Environment:
kubectl version:
Client Version: version.Info{Major:"1", Minor:"9", GitVersion:"v1.9.6", GitCommit:"9f8ebd171479bec0ada837d7ee641dec2f8c6dd1", GitTreeState:"clean", BuildDate:"2018-03-21T15:21:50Z", GoVersion:"go1.9.3", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"9", GitVersion:"v1.9.6", GitCommit:"9f8ebd171479bec0ada837d7ee641dec2f8c6dd1", GitTreeState:"clean", BuildDate:"2018-03-21T15:13:31Z", GoVersion:"go1.9.3", Compiler:"gc", Platform:"linux/amd64"}
Cloud provider or hardware configuration: Scaleway bare-metal C2S
OS: Ubuntu Xenial (16.04 LTS) (GNU/Linux 4.4.122-mainline-rev1 x86_64)
uname -a:
Linux amd64-master-1 4.4.122-mainline-rev1 #1 SMP Sun Mar 18 10:44:19 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux
Attempting to upgrade from 1.9.6 to 1.10.0 results in the following error:
kubeadm upgrade apply v1.10.0
[preflight] Running pre-flight checks.
[upgrade] Making sure the cluster is healthy:
[upgrade/config] Making sure the configuration is correct:
[upgrade/config] Reading configuration from the cluster...
[upgrade/config] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
[upgrade/version] You have chosen to change the cluster version to "v1.10.0"
[upgrade/versions] Cluster version: v1.9.6
[upgrade/versions] kubeadm version: v1.10.0
[upgrade/confirm] Are you sure you want to proceed with the upgrade? [y/N]: y
[upgrade/prepull] Will prepull images for components [kube-apiserver kube-controller-manager kube-scheduler]
[upgrade/apply] Upgrading your Static Pod-hosted control plane to version "v1.10.0"...
[etcd] Wrote Static Pod manifest for a local etcd instance to "/etc/kubernetes/tmp/kubeadm-upgraded-manifests411909119/etcd.yaml"
[certificates] Generated etcd/ca certificate and key.
[certificates] Generated etcd/server certificate and key.
[certificates] etcd/server serving cert is signed for DNS names [localhost] and IPs [127.0.0.1]
[certificates] Generated etcd/peer certificate and key.
[certificates] etcd/peer serving cert is signed for DNS names [arm-master-1] and IPs [10.1.244.57]
[certificates] Generated etcd/healthcheck-client certificate and key.
[upgrade/staticpods] Moved new manifest to "/etc/kubernetes/manifests/etcd.yaml" and backed up old manifest to "/etc/kubernetes/tmp/kubeadm-backup-manifests180476754/etcd.yaml"
[upgrade/staticpods] Waiting for the kubelet to restart the component
[upgrade/apply] FATAL: fatal error when trying to upgrade the etcd cluster: couldn't upgrade control plane. kubeadm has tried to recover everything into the earlier state. Errors faced: [timed out waiting for the condition], rolled the state back to pre-upgrade state
Expected: a successful upgrade.
To reproduce: install the 1.9.6 packages and initialize a 1.9.6 cluster:
curl -fsSL https://packages.cloud.google.com/apt/doc/apt-key.gpg | apt-key add -
echo "deb http://apt.kubernetes.io/ kubernetes-xenial main" | tee /etc/apt/sources.list.d/kubernetes.list
apt-get update -qq
apt-get install -qy kubectl=1.9.6-00
apt-get install -qy kubelet=1.9.6-00
apt-get install -qy kubeadm=1.9.6-00
As reported in https://github.com/kubernetes/kubernetes/issues/61764, edit the kubeadm-config ConfigMap and change featureGates from a string to a map:
kubectl -n kube-system edit cm kubeadm-config
....
featureGates: {}
....
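If you'd rather not open an editor, the same fix can be applied non-interactively; a minimal sketch, assuming the ConfigMap stores the broken value as the string `featureGates: ""` as described in the linked issue:

```
# Rewrite featureGates from an empty string to an empty map, then re-apply.
kubectl -n kube-system get cm kubeadm-config -o yaml \
  | sed 's/featureGates: ""/featureGates: {}/' \
  | kubectl apply -f -
```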
Download kubeadm 1.10.0 and run kubeadm upgrade plan and kubeadm upgrade apply v1.10.0.
I'm working on reproducing this bug locally.
It finally worked after about 10 retries.
Here is the diff of my etcd manifests:
```
root@vagrant:~# diff /etc/kubernetes/manifests/etcd.yaml /etc/kubernetes/tmp/kubeadm-backup-manifests858209931/etcd.yaml
16,17c16,17
<     - --listen-client-urls=https://127.0.0.1:2379
<     - --advertise-client-urls=https://127.0.0.1:2379
---
>     - --listen-client-urls=http://127.0.0.1:2379
>     - --advertise-client-urls=http://127.0.0.1:2379
19,27c19
<     - --key-file=/etc/kubernetes/pki/etcd/server.key
<     - --trusted-ca-file=/etc/kubernetes/pki/etcd/ca.crt
<     - --peer-cert-file=/etc/kubernetes/pki/etcd/peer.crt
<     - --peer-key-file=/etc/kubernetes/pki/etcd/peer.key
<     - --client-cert-auth=true
<     - --peer-client-cert-auth=true
<     - --cert-file=/etc/kubernetes/pki/etcd/server.crt
<     - --peer-trusted-ca-file=/etc/kubernetes/pki/etcd/ca.crt
<     image: gcr.io/google_containers/etcd-amd64:3.1.12
---
>     image: gcr.io/google_containers/etcd-amd64:3.1.11
29,35d20
<       exec:
<         command:
<         - /bin/sh
<         - -ec
<         - ETCDCTL_API=3 etcdctl --endpoints=127.0.0.1:2379 --cacert=/etc/kubernetes/pki/etcd/ca.crt
<           --cert=/etc/kubernetes/pki/etcd/healthcheck-client.crt --key=/etc/kubernetes/pki/etcd/healthcheck-client.key
<           get foo
36a22,26
>       httpGet:
>         host: 127.0.0.1
>         path: /health
>         port: 2379
>         scheme: HTTP
43,45c33
<       name: etcd-data
<     - mountPath: /etc/kubernetes/pki/etcd
<       name: etcd-certs
---
>       name: etcd
51,55c39
<     name: etcd-data
<   - hostPath:
<       path: /etc/kubernetes/pki/etcd
<       type: DirectoryOrCreate
<     name: etcd-certs
---
>     name: etcd
root@vagrant:~# ls /etc/kubernetes/pki/etcd
ca.crt  ca.key  healthcheck-client.crt  healthcheck-client.key  peer.crt  peer.key  server.crt  server.key
```
1.9.6 cluster on Ubuntu 17.10 Vagrant:
root@vagrant:/vagrant# 1.10_kubernetes/server/bin/kubeadm upgrade apply v1.10.0
[preflight] Running pre-flight checks.
[upgrade] Making sure the cluster is healthy:
[upgrade/config] Making sure the configuration is correct:
[upgrade/config] Reading configuration from the cluster...
[upgrade/config] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
[upgrade/version] You have chosen to change the cluster version to "v1.10.0"
[upgrade/versions] Cluster version: v1.9.6
[upgrade/versions] kubeadm version: v1.10.0
[upgrade/confirm] Are you sure you want to proceed with the upgrade? [y/N]: y
[upgrade/prepull] Will prepull images for components [kube-apiserver kube-controller-manager kube-scheduler]
[upgrade/apply] Upgrading your Static Pod-hosted control plane to version "v1.10.0"...
[etcd] Wrote Static Pod manifest for a local etcd instance to "/etc/kubernetes/tmp/kubeadm-upgraded-manifests262738652/etcd.yaml"
[certificates] Generated etcd/ca certificate and key.
[certificates] Generated etcd/server certificate and key.
[certificates] etcd/server serving cert is signed for DNS names [localhost] and IPs [127.0.0.1]
[certificates] Generated etcd/peer certificate and key.
[certificates] etcd/peer serving cert is signed for DNS names [vagrant] and IPs [10.0.2.15]
[certificates] Generated etcd/healthcheck-client certificate and key.
[upgrade/staticpods] Moved new manifest to "/etc/kubernetes/manifests/etcd.yaml" and backed up old manifest to "/etc/kubernetes/tmp/kubeadm-backup-manifests858209931/etcd.yaml"
[upgrade/staticpods] Waiting for the kubelet to restart the component
[apiclient] Error getting Pods with label selector "component=etcd" [the server was unable to return a response in the time allotted, but may still be processing the request (get pods)]
[apiclient] Error getting Pods with label selector "component=etcd" [Get https://10.0.2.15:6443/api/v1/namespaces/kube-system/pods?labelSelector=component%3Detcd: http2: server sent GOAWAY and closed the connection; LastStreamID=27, ErrCode=NO_ERROR, debug=""]
[apiclient] Error getting Pods with label selector "component=etcd" [Get https://10.0.2.15:6443/api/v1/namespaces/kube-system/pods?labelSelector=component%3Detcd: net/http: TLS handshake timeout]
[apiclient] Error getting Pods with label selector "component=etcd" [the server was unable to return a response in the time allotted, but may still be processing the request (get pods)]
[apiclient] Error getting Pods with label selector "component=etcd" [Get https://10.0.2.15:6443/api/v1/namespaces/kube-system/pods?labelSelector=component%3Detcd: http2: server sent GOAWAY and closed the connection; LastStreamID=3, ErrCode=NO_ERROR, debug=""]
[upgrade/apply] FATAL: fatal error when trying to upgrade the etcd cluster: couldn't upgrade control plane. kubeadm has tried to recover everything into the earlier state. Errors faced: [timed out waiting for the condition], rolled the state back to pre-upgrade state
Here is my repro environment: https://github.com/stealthybox/vagrant-kubeadm-testing
Change these lines to 1.9.6-00 for bootstrapping: https://github.com/stealthybox/vagrant-kubeadm-testing/blob/9d4493e990c9bd742107b317641267c3ef3640cd/Vagrantfile#L18-L20
Then download the 1.10 server binaries into the repo; they will be available in the guest under /vagrant:
https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG-1.10.md#server-binaries
kubelet logs related to etcd:
root@vagrant:~# journalctl -xefu kubelet | grep -i etcd
Mar 28 16:32:07 vagrant kubelet[14676]: W0328 16:32:07.808776 14676 status_manager.go:459] Failed to get status for pod "etcd-vagrant_kube-system(7278f85057e8bf5cb81c9f96d3b25320)": Get https://10.0.2.15:6443/api/v1/namespaces/kube-system/pods/etcd-vagrant: dial tcp 10.0.2.15:6443: getsockopt: connection refused
Mar 28 16:32:07 vagrant kubelet[14676]: I0328 16:32:07.880412 14676 reconciler.go:217] operationExecutor.VerifyControllerAttachedVolume started for volume "etcd" (UniqueName: "kubernetes.io/host-path/7278f85057e8bf5cb81c9f96d3b25320-etcd") pod "etcd-vagrant" (UID: "7278f85057e8bf5cb81c9f96d3b25320")
Mar 28 16:34:27 vagrant kubelet[14676]: W0328 16:34:27.472534 14676 status_manager.go:459] Failed to get status for pod "etcd-vagrant_kube-system(7278f85057e8bf5cb81c9f96d3b25320)": Get https://10.0.2.15:6443/api/v1/namespaces/kube-system/pods/etcd-vagrant: dial tcp 10.0.2.15:6443: getsockopt: connection refused
Mar 28 16:57:33 vagrant kubelet[14676]: W0328 16:57:33.683648 14676 kubelet.go:1597] Deleting mirror pod "etcd-vagrant_kube-system(122348c3-32a6-11e8-8dc5-080027d6be16)" because it is outdated
Mar 28 16:57:33 vagrant kubelet[14676]: I0328 16:57:33.725564 14676 reconciler.go:217] operationExecutor.VerifyControllerAttachedVolume started for volume "etcd-certs" (UniqueName: "kubernetes.io/host-path/37936d2107e31b457cada6c2433469f1-etcd-certs") pod "etcd-vagrant" (UID: "37936d2107e31b457cada6c2433469f1")
Mar 28 16:57:33 vagrant kubelet[14676]: I0328 16:57:33.725637 14676 reconciler.go:217] operationExecutor.VerifyControllerAttachedVolume started for volume "etcd-data" (UniqueName: "kubernetes.io/host-path/37936d2107e31b457cada6c2433469f1-etcd-data") pod "etcd-vagrant" (UID: "37936d2107e31b457cada6c2433469f1")
Mar 28 16:57:35 vagrant kubelet[14676]: E0328 16:57:35.484901 14676 kuberuntime_container.go:66] Can't make a ref to pod "etcd-vagrant_kube-system(7278f85057e8bf5cb81c9f96d3b25320)", container etcd: selfLink was empty, can't make reference
Mar 28 16:57:35 vagrant kubelet[14676]: I0328 16:57:35.889458 14676 reconciler.go:191] operationExecutor.UnmountVolume started for volume "etcd" (UniqueName: "kubernetes.io/host-path/7278f85057e8bf5cb81c9f96d3b25320-etcd") pod "7278f85057e8bf5cb81c9f96d3b25320" (UID: "7278f85057e8bf5cb81c9f96d3b25320")
Mar 28 16:57:35 vagrant kubelet[14676]: I0328 16:57:35.889595 14676 operation_generator.go:643] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7278f85057e8bf5cb81c9f96d3b25320-etcd" (OuterVolumeSpecName: "etcd") pod "7278f85057e8bf5cb81c9f96d3b25320" (UID: "7278f85057e8bf5cb81c9f96d3b25320"). InnerVolumeSpecName "etcd". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 28 16:57:35 vagrant kubelet[14676]: I0328 16:57:35.989892 14676 reconciler.go:297] Volume detached for volume "etcd" (UniqueName: "kubernetes.io/host-path/7278f85057e8bf5cb81c9f96d3b25320-etcd") on node "vagrant" DevicePath ""
Mar 28 16:58:03 vagrant kubelet[14676]: E0328 16:58:03.688878 14676 mirror_client.go:88] Failed deleting a mirror pod "etcd-vagrant_kube-system": Timeout: request did not complete within allowed duration
Mar 28 16:58:03 vagrant kubelet[14676]: E0328 16:58:03.841447 14676 event.go:200] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"etcd-vagrant.152023ff626cfbc5", GenerateName:"", Namespace:"kube-system", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:""}, InvolvedObject:v1.ObjectReference{Kind:"Pod", Namespace:"kube-system", Name:"etcd-vagrant", UID:"37936d2107e31b457cada6c2433469f1", APIVersion:"v1", ResourceVersion:"", FieldPath:""}, Reason:"SuccessfulMountVolume", Message:"MountVolume.SetUp succeeded for volume \"etcd-certs\" ", Source:v1.EventSource{Component:"kubelet", Host:"vagrant"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xbea7103f713e59c5, ext:1534226953099, loc:(*time.Location)(0x5859e60)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xbea7103f713e59c5, ext:1534226953099, loc:(*time.Location)(0x5859e60)}}, Count:1, Type:"Normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'Timeout: request did not complete within allowed duration' (will not retry!)
Mar 28 16:58:33 vagrant kubelet[14676]: E0328 16:58:33.844276 14676 event.go:200] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"etcd-vagrant.152023ff626cfb82", GenerateName:"", Namespace:"kube-system", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:""}, InvolvedObject:v1.ObjectReference{Kind:"Pod", Namespace:"kube-system", Name:"etcd-vagrant", UID:"37936d2107e31b457cada6c2433469f1", APIVersion:"v1", ResourceVersion:"", FieldPath:""}, Reason:"SuccessfulMountVolume", Message:"MountVolume.SetUp succeeded for volume \"etcd-data\" ", Source:v1.EventSource{Component:"kubelet", Host:"vagrant"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xbea7103f713e5982, ext:1534226953033, loc:(*time.Location)(0x5859e60)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xbea7103f713e5982, ext:1534226953033, loc:(*time.Location)(0x5859e60)}}, Count:1, Type:"Normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'Timeout: request did not complete within allowed duration' (will not retry!)
Mar 28 16:59:03 vagrant kubelet[14676]: E0328 16:59:03.692450 14676 kubelet.go:1612] Failed creating a mirror pod for "etcd-vagrant_kube-system(37936d2107e31b457cada6c2433469f1)": the server was unable to return a response in the time allotted, but may still be processing the request (post pods)
Mar 28 16:59:03 vagrant kubelet[14676]: E0328 16:59:03.848007 14676 event.go:200] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"etcd-vagrant.152023ff641f915f", GenerateName:"", Namespace:"kube-system", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:""}, InvolvedObject:v1.ObjectReference{Kind:"Pod", Namespace:"kube-system", Name:"etcd-vagrant", UID:"7278f85057e8bf5cb81c9f96d3b25320", APIVersion:"v1", ResourceVersion:"", FieldPath:"spec.containers{etcd}"}, Reason:"Killing", Message:"Killing container with id docker://etcd:Need to kill Pod", Source:v1.EventSource{Component:"kubelet", Host:"vagrant"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xbea7103f72f0ef5f, ext:1534255433999, loc:(*time.Location)(0x5859e60)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xbea7103f72f0ef5f, ext:1534255433999, loc:(*time.Location)(0x5859e60)}}, Count:1, Type:"Normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'Timeout: request did not complete within allowed duration' (will not retry!)
Mar 28 16:59:14 vagrant kubelet[14676]: W0328 16:59:14.472661 14676 kubelet.go:1597] Deleting mirror pod "etcd-vagrant_kube-system(122348c3-32a6-11e8-8dc5-080027d6be16)" because it is outdated
Mar 28 16:59:14 vagrant kubelet[14676]: W0328 16:59:14.473138 14676 status_manager.go:459] Failed to get status for pod "etcd-vagrant_kube-system(37936d2107e31b457cada6c2433469f1)": Get https://10.0.2.15:6443/api/v1/namespaces/kube-system/pods/etcd-vagrant: dial tcp 10.0.2.15:6443: getsockopt: connection refused
Mar 28 16:59:14 vagrant kubelet[14676]: E0328 16:59:14.473190 14676 mirror_client.go:88] Failed deleting a mirror pod "etcd-vagrant_kube-system": Delete https://10.0.2.15:6443/api/v1/namespaces/kube-system/pods/etcd-vagrant: dial tcp 10.0.2.15:6443: getsockopt: connection refused
Mar 28 16:59:14 vagrant kubelet[14676]: E0328 16:59:14.473658 14676 kubelet.go:1612] Failed creating a mirror pod for "etcd-vagrant_kube-system(37936d2107e31b457cada6c2433469f1)": Post https://10.0.2.15:6443/api/v1/namespaces/kube-system/pods: dial tcp 10.0.2.15:6443: getsockopt: connection refused
Mar 28 16:59:15 vagrant kubelet[14676]: W0328 16:59:15.481336 14676 kubelet.go:1597] Deleting mirror pod "etcd-vagrant_kube-system(122348c3-32a6-11e8-8dc5-080027d6be16)" because it is outdated
Mar 28 16:59:15 vagrant kubelet[14676]: E0328 16:59:15.483705 14676 mirror_client.go:88] Failed deleting a mirror pod "etcd-vagrant_kube-system": Delete https://10.0.2.15:6443/api/v1/namespaces/kube-system/pods/etcd-vagrant: dial tcp 10.0.2.15:6443: getsockopt: connection refused
Mar 28 16:59:15 vagrant kubelet[14676]: E0328 16:59:15.497391 14676 kubelet.go:1612] Failed creating a mirror pod for "etcd-vagrant_kube-system(37936d2107e31b457cada6c2433469f1)": Post https://10.0.2.15:6443/api/v1/namespaces/kube-system/pods: dial tcp 10.0.2.15:6443: getsockopt: connection refused
Mar 28 17:00:34 vagrant kubelet[14676]: W0328 17:00:34.475851 14676 kubelet.go:1597] Deleting mirror pod "etcd-vagrant_kube-system(122348c3-32a6-11e8-8dc5-080027d6be16)" because it is outdated
Mar 28 17:01:07 vagrant kubelet[14676]: W0328 17:01:07.720076 14676 status_manager.go:459] Failed to get status for pod "etcd-vagrant_kube-system(37936d2107e31b457cada6c2433469f1)": Get https://10.0.2.15:6443/api/v1/namespaces/kube-system/pods/etcd-vagrant: http2: server sent GOAWAY and closed the connection; LastStreamID=47, ErrCode=NO_ERROR, debug=""
Mar 28 17:01:07 vagrant kubelet[14676]: E0328 17:01:07.720107 14676 mirror_client.go:88] Failed deleting a mirror pod "etcd-vagrant_kube-system": Delete https://10.0.2.15:6443/api/v1/namespaces/kube-system/pods/etcd-vagrant: http2: server sent GOAWAY and closed the connection; LastStreamID=47, ErrCode=NO_ERROR, debug=""; some request body already written
Mar 28 17:01:07 vagrant kubelet[14676]: E0328 17:01:07.725335 14676 kubelet.go:1612] Failed creating a mirror pod for "etcd-vagrant_kube-system(37936d2107e31b457cada6c2433469f1)": Post https://10.0.2.15:6443/api/v1/namespaces/kube-system/pods: dial tcp 10.0.2.15:6443: getsockopt: connection refused
Mar 28 17:01:07 vagrant kubelet[14676]: I0328 17:01:07.728709 14676 reconciler.go:217] operationExecutor.VerifyControllerAttachedVolume started for volume "etcd" (UniqueName: "kubernetes.io/host-path/7278f85057e8bf5cb81c9f96d3b25320-etcd") pod "etcd-vagrant" (UID: "7278f85057e8bf5cb81c9f96d3b25320")
Mar 28 17:01:07 vagrant kubelet[14676]: W0328 17:01:07.734475 14676 status_manager.go:459] Failed to get status for pod "etcd-vagrant_kube-system(37936d2107e31b457cada6c2433469f1)": Get https://10.0.2.15:6443/api/v1/namespaces/kube-system/pods/etcd-vagrant: dial tcp 10.0.2.15:6443: getsockopt: connection refused
Mar 28 17:01:07 vagrant kubelet[14676]: W0328 17:01:07.740642 14676 status_manager.go:459] Failed to get status for pod "etcd-vagrant_kube-system(7278f85057e8bf5cb81c9f96d3b25320)": Get https://10.0.2.15:6443/api/v1/namespaces/kube-system/pods/etcd-vagrant: dial tcp 10.0.2.15:6443: getsockopt: connection refused
Mar 28 17:01:09 vagrant kubelet[14676]: E0328 17:01:09.484412 14676 kuberuntime_container.go:66] Can't make a ref to pod "etcd-vagrant_kube-system(37936d2107e31b457cada6c2433469f1)", container etcd: selfLink was empty, can't make reference
Mar 28 17:01:09 vagrant kubelet[14676]: I0328 17:01:09.848794 14676 reconciler.go:191] operationExecutor.UnmountVolume started for volume "etcd-certs" (UniqueName: "kubernetes.io/host-path/37936d2107e31b457cada6c2433469f1-etcd-certs") pod "37936d2107e31b457cada6c2433469f1" (UID: "37936d2107e31b457cada6c2433469f1")
Mar 28 17:01:09 vagrant kubelet[14676]: I0328 17:01:09.849282 14676 reconciler.go:191] operationExecutor.UnmountVolume started for volume "etcd-data" (UniqueName: "kubernetes.io/host-path/37936d2107e31b457cada6c2433469f1-etcd-data") pod "37936d2107e31b457cada6c2433469f1" (UID: "37936d2107e31b457cada6c2433469f1")
Mar 28 17:01:09 vagrant kubelet[14676]: I0328 17:01:09.849571 14676 operation_generator.go:643] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/37936d2107e31b457cada6c2433469f1-etcd-data" (OuterVolumeSpecName: "etcd-data") pod "37936d2107e31b457cada6c2433469f1" (UID: "37936d2107e31b457cada6c2433469f1"). InnerVolumeSpecName "etcd-data". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 28 17:01:09 vagrant kubelet[14676]: I0328 17:01:09.849503 14676 operation_generator.go:643] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/37936d2107e31b457cada6c2433469f1-etcd-certs" (OuterVolumeSpecName: "etcd-certs") pod "37936d2107e31b457cada6c2433469f1" (UID: "37936d2107e31b457cada6c2433469f1"). InnerVolumeSpecName "etcd-certs". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 28 17:01:09 vagrant kubelet[14676]: I0328 17:01:09.949925 14676 reconciler.go:297] Volume detached for volume "etcd-certs" (UniqueName: "kubernetes.io/host-path/37936d2107e31b457cada6c2433469f1-etcd-certs") on node "vagrant" DevicePath ""
Mar 28 17:01:09 vagrant kubelet[14676]: I0328 17:01:09.949975 14676 reconciler.go:297] Volume detached for volume "etcd-data" (UniqueName: "kubernetes.io/host-path/37936d2107e31b457cada6c2433469f1-etcd-data") on node "vagrant" DevicePath ""
The current workaround is to just keep retrying the upgrade; at some point it succeeds.
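For reference, that retry workaround is easy to script; a minimal sketch, assuming the v1.10.0 kubeadm binary is on the PATH (--force is used here only to skip the interactive confirmation prompt):

```
# Keep retrying the upgrade until it succeeds (a workaround, not a fix).
for attempt in $(seq 1 10); do
  kubeadm upgrade apply v1.10.0 --force && break
  echo "attempt ${attempt} failed, retrying in 30s..."
  sleep 30
done
```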
@stealthybox did you grab the docker logs from the etcd container? Also, grep -i etcd may be masking part of the kubelet output, e.g. relevant error messages that don't contain the container name.
I hit another weird edge case related to this bug: the kubeadm upgrade reported the etcd upgrade as complete before docker had pulled the new etcd image and deployed the new static pod. This caused the upgrade to time out at a later step and the rollback to fail, leaving the cluster in a broken state. Recovering the cluster required restoring the original etcd static pod manifest.
Apparently I'm stuck here too. My cluster is completely down. Can someone share some instructions on how to rescue a cluster from this state?
@detiberã説æããããã«ã2åç®ã®ã¢ããã°ã¬ãŒãã®è©Šã¿ã§ããã«ããŸããããéåžžã«èŠçã§ããã ïŒæ³£ãïŒ
I found the backups in /etc/kubernetes/tmp and, suspecting etcd was the culprit, copied its old manifest over the new one in the manifests folder. At that point I had nothing to lose, since I had completely lost control of the cluster. I don't remember exactly what happened after that, but I think I rebooted the whole machine and downgraded everything to v1.9.6. I eventually regained control of the cluster and lost any motivation to mess with v1.10.0 again. Not fun at all...
If the etcd static pod manifest gets rolled back from /etc/kubernetes/tmp, it is important to also roll the apiserver manifest back to the 1.9 version, because of the new TLS configuration for etcd in 1.10.
^ I believe the etcd upgrade blocks the rest of the control plane upgrade, so you probably don't need to do that.
When the upgrade fails, it seems that only the etcd manifest is not rolled back; everything else is fine. Moving the backed-up manifest back into place and restarting the kubelet brings everything back to normal.
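Putting the two comments above together, a recovery sketch (the tmp directory name below comes from the log earlier in this thread; use whatever path your failed upgrade printed):

```
# Restore the backed-up etcd manifest and let the kubelet recreate the pod.
cp /etc/kubernetes/tmp/kubeadm-backup-manifests858209931/etcd.yaml \
   /etc/kubernetes/manifests/etcd.yaml
systemctl restart kubelet
```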
I faced the same timeout issue, but in my case kubeadm rolled the kube-apiserver manifest back to 1.9.6 while leaving the etcd manifest as is (read: with TLS enabled), so the apiserver obviously couldn't talk to etcd anymore and the master node was effectively broken. I think this is a good candidate for a separate issue report.
@dvdmuckle @codepainters, unfortunately whether the rollback succeeds depends on which component hits the race condition (etcd or the api server). I found a fix for the race condition, but it breaks kubeadm upgrades completely. I'm working with @stealthybox on a proper fix.
@codepainters I believe it's the same issue.
There are a couple of underlying problems that cause this issue:
As a result, the upgrade currently only succeeds if a mirror pod status update for the etcd pod happens to change the hash before the kubelet picks up the new static etcd manifest. In addition, the api server needs to remain available during the first part of the apiserver upgrade, while the upgrade tool is still querying the api before it updates the apiserver manifest.
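To watch the hash that the upgrade waits on, you can read the mirror pod's config-hash annotation; a sketch, assuming the master's etcd mirror pod is named etcd-<nodename>:

```
# The kubelet stamps mirror pods with the hash of the static pod manifest;
# kubeadm polls this value and waits for it to change.
kubectl -n kube-system get pod "etcd-$(hostname)" \
  -o jsonpath='{.metadata.annotations.kubernetes\.io/config\.hash}'
```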
@detiber and I got on a call to talk through the changes we need to make to the upgrade process.
We plan to land three fixes for this bug in a 1.10.x point release:

1. Remove etcd TLS from the upgrade.
The current upgrade loop applies changes serially, component by component. Component upgrades have no knowledge of dependent component configuration. Validating an upgrade requires the APIServer to be available to check pod status. Etcd TLS requires a coupled etcd + apiserver configuration change, which breaks that contract. This is the minimal viable change to fix this issue, but it leaves upgraded clusters with a non-secured etcd.

2. Fix the mirror pod hash race condition on pod status changes:
https://github.com/kubernetes/kubernetes/blob/master/cmd/kubeadm/app/phases/upgrade/staticpods.go#L189
Upgrades will then be correct, assuming compatibility between the etcd and apiserver flags.

3. Upgrade to TLS specifically in a separate phase.
Etcd and the APIServer need to be upgraded together.
kubeadm alpha phase ensure-etcd-tls ?
This phase should be runnable independently of a cluster upgrade.
During a cluster upgrade, this phase should run before updating all of the components.

For 1.11, we will do the following:
Alternative: use the CRI to get the pod info (demoed working with crictl).
Caveat: the CRI in dockershim, and possibly in other container runtimes, does not currently support backward compatibility for breaking CRI changes.
@detiber would you mind explaining the race condition we're talking about? I'm not too familiar with kubeadm internals, but it sounds interesting.
@codepainters see https://github.com/kubernetes/kubeadm/issues/740#issuecomment-377263347
FYI - hitting the same problem with an upgrade from 1.9.3.
I tried the workaround of retrying it multiple times. In the end I hit the race condition on the API server and the upgrade couldn't be rolled back.
@stealthybox thx, I had missed that on first read.
I'm hitting the same issue:
[ERROR APIServerHealth]: the API Server is unhealthy; /healthz did not return "ok"
[ERROR MasterNodesReady]: couldn't list the masters in the cluster: Get https....... (during the upgrade)
Please help with this. I'm upgrading from 1.9.3 to 1.10.0. Initially it was able to reach the point "[upgrade/staticpods] Waiting for the kubelet to restart the component".
The temporary workaround is to skip the certificate checks and upgrade the etcd and apiserver pods individually.
Be sure to check your config and add flags for your use case:
kubectl -n kube-system edit cm kubeadm-config # change featureFlags
...
featureGates: {}
...
kubeadm alpha phase certs all
kubeadm alpha phase etcd local
kubeadm alpha phase controlplane all
kubeadm alpha phase upload-config
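After those phases finish, it's worth confirming that the control plane actually came back on the new version, e.g.:

```
# Quick sanity check after the manual phase workaround.
kubectl version --short         # the server version should now report v1.10.x
kubectl -n kube-system get pods # etcd and apiserver pods should be Running
```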
Thank you, @stealthybox.
In my case the upgrade apply process hung at [upgrade/apply] Upgrading your Static Pod-hosted control plane to version "v1.10.1"..., but the cluster was upgraded successfully.
Thanks @stealthybox, but kubeadm upgrade plan hangs after that, so it seems something got broken after these steps:
[preflight] Running pre-flight checks.
[upgrade] Making sure the cluster is healthy:
[upgrade/config] Making sure the configuration is correct:
[upgrade/config] Reading configuration from the cluster...
[upgrade/config] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
[upgrade] Fetching available versions to upgrade to
[upgrade/versions] Cluster version: v1.10.1
[upgrade/versions] kubeadm version: v1.10.1
[upgrade/versions] Latest stable version: v1.10.1
Applying the upgrade also hangs, at [upgrade/apply] Upgrading your Static Pod-hosted control plane to version "v1.10.1"...
@kvaps @stealthybox this is most probably the etcd issue (kubeadm speaking plain HTTP/2 to a TLS-enabled etcd); it got me too. See this other issue: https://...
To be honest, I don't understand why the same TCP port is used for both the TLS and non-TLS etcd listeners; it only leads to issues like this one. Getting a plain old connection refused would give an immediate hint, whereas here I had to resort to tcpdump to understand what was going on.
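For reference, this is the kind of capture that makes the mismatch obvious; a plaintext client's request is readable on the wire, while a TLS peer answers with an opaque handshake record:

```
# Watch the local etcd client port; -A prints payloads as ASCII, so the
# plaintext HTTP/2 preface (PRI * HTTP/2.0) is visible when TLS is absent.
tcpdump -i lo -A 'tcp port 2379'
```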
Yes!
Correct, this only works with the local TLS-enabled etcd pods for the etcd status check.
Run this to complete your upgrade:
kubeadm alpha phase controlplane all
kubeadm alpha phase upload-config
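Optionally, verify that etcd is really serving TLS after running those phases; a sketch using the default kubeadm cert paths that appear in the manifest diff earlier in this thread:

```
# Health-check the TLS-enabled etcd using the healthcheck-client cert pair.
ETCDCTL_API=3 etcdctl --endpoints=https://127.0.0.1:2379 \
  --cacert=/etc/kubernetes/pki/etcd/ca.crt \
  --cert=/etc/kubernetes/pki/etcd/healthcheck-client.crt \
  --key=/etc/kubernetes/pki/etcd/healthcheck-client.key \
  endpoint health
```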
I've edited the workaround above to be correct.
@stealthybox the second kubeadm command doesn't work:
# kubeadm alpha phase upload-config
The --config flag is mandatory
@renich if you don't use a custom configuration, you can pass an empty file for the config file path.
An easy way to do that in bash:
1.10_kubernetes/server/bin/kubeadm alpha phase upload-config --config <(echo)
This should be resolved by the merge of https://github.com/kubernetes/kubernetes/pull/62655 and will be part of the v1.10.2 release.
Using kubeadm 1.10.2, I can confirm the 1.10.0 -> 1.10.2 upgrade is smooth and doesn't time out.
I'm still getting a timeout on 1.10.0 -> 1.10.2, although a different one:
[upgrade/staticpods] Waiting for the kubelet to restart the component
Static pod: kube-apiserver-master hash: a273591d3207fcd9e6fd0c308cc68d64
[upgrade/apply] FATAL: couldn't upgrade control plane. kubeadm has tried to recover everything into the earlier state. Errors faced: [timed out waiting for the condition]
No idea what to do about this...
@denis111 check the API server logs while doing the upgrade, using docker ps. I have a feeling you're hitting the same issue I am.
@dvdmuckle well, I don't see any errors in that log, and I don't think the kube-apiserver hash changes during the upgrade.
I have an ARM64 cluster that was on 1.9.3 and updated to 1.9.7 without problems, but I hit the same timeout issue upgrading from 1.9.7 to 1.10.2.
I even tried editing kubeadm and recompiling it with increased timeouts (like these latest commits: https://github.com/anguslees/kubernetes/commits/kubeadm-gusfork), with the same results.
$ sudo kubeadm upgrade apply v1.10.2 --force
[preflight] Running pre-flight checks.
[upgrade] Making sure the cluster is healthy:
[upgrade/config] Making sure the configuration is correct:
[upgrade/config] Reading configuration from the cluster...
[upgrade/config] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
[upgrade/version] You have chosen to change the cluster version to "v1.10.2"
[upgrade/versions] Cluster version: v1.9.7
[upgrade/versions] kubeadm version: v1.10.2-dirty
[upgrade/version] Found 1 potential version compatibility errors but skipping since the --force flag is set:
- Specified version to upgrade to "v1.10.2" is higher than the kubeadm version "v1.10.2-dirty". Upgrade kubeadm first using the tool you used to install kubeadm
[upgrade/prepull] Will prepull images for components [kube-apiserver kube-controller-manager kube-scheduler]
[upgrade/apply] Upgrading your Static Pod-hosted control plane to version "v1.10.2"...
Static pod: kube-apiserver-kubemaster1 hash: ed7578d5bf9314188dca798386bcfb0e
Static pod: kube-controller-manager-kubemaster1 hash: e0c3f578f1c547dcf9996e1d3390c10c
Static pod: kube-scheduler-kubemaster1 hash: 52e767858f52ac4aba448b1a113884ee
[upgrade/etcd] Upgrading to TLS for etcd
Static pod: etcd-kubemaster1 hash: 413224efa82e36533ce93e30bd18e3a8
[etcd] Wrote Static Pod manifest for a local etcd instance to "/etc/kubernetes/tmp/kubeadm-upgraded-manifests346927148/etcd.yaml"
[certificates] Using the existing etcd/ca certificate and key.
[certificates] Using the existing etcd/server certificate and key.
[certificates] Using the existing etcd/peer certificate and key.
[certificates] Using the existing etcd/healthcheck-client certificate and key.
[upgrade/staticpods] Moved new manifest to "/etc/kubernetes/manifests/etcd.yaml" and backed up old manifest to "/etc/kubernetes/tmp/kubeadm-backup-manifests190581659/etcd.yaml"
[upgrade/staticpods] Not waiting for pod-hash change for component "etcd"
[upgrade/etcd] Waiting for etcd to become available
[util/etcd] Waiting 30s for initial delay
[util/etcd] Attempting to get etcd status 1/10
[util/etcd] Attempt failed with error: dial tcp 127.0.0.1:2379: getsockopt: connection refused
[util/etcd] Waiting 15s until next retry
[util/etcd] Attempting to get etcd status 2/10
[util/etcd] Attempt failed with error: dial tcp 127.0.0.1:2379: getsockopt: connection refused
[util/etcd] Waiting 15s until next retry
[util/etcd] Attempting to get etcd status 3/10
[util/etcd] Attempt failed with error: dial tcp 127.0.0.1:2379: getsockopt: connection refused
[util/etcd] Waiting 15s until next retry
[util/etcd] Attempting to get etcd status 4/10
[upgrade/staticpods] Writing new Static Pod manifests to "/etc/kubernetes/tmp/kubeadm-upgraded-manifests346927148"
[controlplane] Wrote Static Pod manifest for component kube-apiserver to "/etc/kubernetes/tmp/kubeadm-upgraded-manifests346927148/kube-apiserver.yaml"
[controlplane] Wrote Static Pod manifest for component kube-controller-manager to "/etc/kubernetes/tmp/kubeadm-upgraded-manifests346927148/kube-controller-manager.yaml"
[controlplane] Wrote Static Pod manifest for component kube-scheduler to "/etc/kubernetes/tmp/kubeadm-upgraded-manifests346927148/kube-scheduler.yaml"
[upgrade/staticpods] The etcd manifest will be restored if component "kube-apiserver" fails to upgrade
[certificates] Using the existing etcd/ca certificate and key.
[certificates] Using the existing apiserver-etcd-client certificate and key.
[upgrade/staticpods] Moved new manifest to "/etc/kubernetes/manifests/kube-apiserver.yaml" and backed up old manifest to "/etc/kubernetes/tmp/kubeadm-backup-manifests190581659/kube-apiserver.yaml"
[upgrade/staticpods] Waiting for the kubelet to restart the component
[upgrade/apply] FATAL: couldn't upgrade control plane. kubeadm has tried to recover everything into the earlier state. Errors faced: [timed out waiting for the condition]
Upgrading v1.10.2 -> v1.10.2 (which may not make much sense, I was just testing...).
Ubuntu 16.04.
And it fails with this error:
kubeadm upgrade apply v1.10.2
[upgrade/apply] FATAL: couldn't upgrade control plane. kubeadm has tried to recover everything into the earlier state. Errors faced: [timed out waiting for the condition]
I guess this is still being tracked in some issue... I couldn't find it.
I'm still seeing upgrades fail with the timed out waiting for the condition error.
Edit: the discussion has moved to a new ticket, https://github.com/kubernetes/kubeadm/issues/850; please discuss there.
In case anyone else hits this issue on 1.9.x:
If you are on AWS with custom hostnames, you need to edit the kubeadm-config ConfigMap and set nodeName to the AWS internal name: ip-xx-xx-xx-xx.$REGION.compute.internal
kubectl -n kube-system edit cm kubeadm-config -oyaml
That's in addition to setting the etcd client to http. I'm still on the older version, so I haven't checked whether they have fixed that.
This is because kubeadm tries to read this path in the API: /api/v1/namespaces/kube-system/pods/kube-apiserver-$NodeName
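A quick way to check the value on an AWS instance is to ask the metadata service for the internal hostname and confirm the mirror pod exists under that name (a sketch; the metadata path is standard EC2):

```
# The node name kubeadm looks up must match the mirror pod name suffix.
NODE_NAME=$(curl -s http://169.254.169.254/latest/meta-data/local-hostname)
kubectl -n kube-system get pod "kube-apiserver-${NODE_NAME}"
```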
1.10.6 increased the timeouts, so a few weeks ago I successfully updated a 1.9.7 deployment to 1.10.6.
I plan to upgrade to 1.11.2 as soon as the .deb packages are ready, since the same change is in that version.
My cluster runs on-premises on ARM64 nodes.