Bug report

kubeadm version (`kubeadm version`):
kubeadm version: &version.Info{Major:"1", Minor:"10", GitVersion:"v1.10.0", GitCommit:"fc32d2f3698e36b93322a3465f63a14e9f0eaead", GitTreeState:"clean", BuildDate:"2018-03-26T16:44:10Z", GoVersion:"go1.9.3", Compiler:"gc", Platform:"linux/amd64"}

Environment:
kubectl version (`kubectl version`):
Client Version: version.Info{Major:"1", Minor:"9", GitVersion:"v1.9.6", GitCommit:"9f8ebd171479bec0ada837d7ee641dec2f8c6dd1", GitTreeState:"clean", BuildDate:"2018-03-21T15:21:50Z", GoVersion:"go1.9.3", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"9", GitVersion:"v1.9.6", GitCommit:"9f8ebd171479bec0ada837d7ee641dec2f8c6dd1", GitTreeState:"clean", BuildDate:"2018-03-21T15:13:31Z", GoVersion:"go1.9.3", Compiler:"gc", Platform:"linux/amd64"}
Scaleway baremetal C2S
Ubuntu Xenial (16.04 LTS) (GNU/Linux 4.4.122-mainline-rev1 x86_64)
uname -a: Linux amd64-master-1 4.4.122-mainline-rev1 #1 SMP Sun Mar 18 10:44:19 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux
This error occurs when trying to upgrade from 1.9.6 to 1.10.0:
kubeadm upgrade apply v1.10.0
[preflight] Running pre-flight checks.
[upgrade] Making sure the cluster is healthy:
[upgrade/config] Making sure the configuration is correct:
[upgrade/config] Reading configuration from the cluster...
[upgrade/config] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
[upgrade/version] You have chosen to change the cluster version to "v1.10.0"
[upgrade/versions] Cluster version: v1.9.6
[upgrade/versions] kubeadm version: v1.10.0
[upgrade/confirm] Are you sure you want to proceed with the upgrade? [y/N]: y
[upgrade/prepull] Will prepull images for components [kube-apiserver kube-controller-manager kube-scheduler]
[upgrade/apply] Upgrading your Static Pod-hosted control plane to version "v1.10.0"...
[etcd] Wrote Static Pod manifest for a local etcd instance to "/etc/kubernetes/tmp/kubeadm-upgraded-manifests411909119/etcd.yaml"
[certificates] Generated etcd/ca certificate and key.
[certificates] Generated etcd/server certificate and key.
[certificates] etcd/server serving cert is signed for DNS names [localhost] and IPs [127.0.0.1]
[certificates] Generated etcd/peer certificate and key.
[certificates] etcd/peer serving cert is signed for DNS names [arm-master-1] and IPs [10.1.244.57]
[certificates] Generated etcd/healthcheck-client certificate and key.
[upgrade/staticpods] Moved new manifest to "/etc/kubernetes/manifests/etcd.yaml" and backed up old manifest to "/etc/kubernetes/tmp/kubeadm-backup-manifests180476754/etcd.yaml"
[upgrade/staticpods] Waiting for the kubelet to restart the component
[upgrade/apply] FATAL: fatal error when trying to upgrade the etcd cluster: couldn't upgrade control plane. kubeadm has tried to recover everything into the earlier state. Errors faced: [timed out waiting for the condition], rolled the state back to pre-upgrade state
Successful upgrade

Install the 1.9.6 packages and initialize a 1.9.6 cluster:
curl -fsSL https://packages.cloud.google.com/apt/doc/apt-key.gpg | apt-key add -
echo "deb http://apt.kubernetes.io/ kubernetes-xenial main" | tee /etc/apt/sources.list.d/kubernetes.list
apt-get update -qq
apt-get install -qy kubectl=1.9.6-00
apt-get install -qy kubelet=1.9.6-00
apt-get install -qy kubeadm=1.9.6-00
Edit kubeadm-config, changing featureGates from a string to a map, as reported in https://github.com/kubernetes/kubernetes/issues/61764:
kubectl -n kube-system edit cm kubeadm-config
....
featureGates: {}
....
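The edit in question flips the serialized featureGates field in the kubeadm-config ConfigMap from a string to a map; roughly like this (surrounding MasterConfiguration fields elided, per the issue linked above):

```yaml
# As written by kubeadm 1.9 (a string, which the 1.10 parser rejects):
featureGates: ""
# After the edit (an empty map, which 1.10 unmarshals as map[string]bool):
featureGates: {}
```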
Download kubeadm 1.10.0, then run `kubeadm upgrade plan` and `kubeadm upgrade apply v1.10.0`.
I'm in the middle of reproducing this bug locally.
This finally worked after retrying it about 10 times. Here is my etcd manifest diff:
```
root@vagrant:~# diff /etc/kubernetes/manifests/etcd.yaml /etc/kubernetes/tmp/kubeadm-backup-manifests858209931/etcd.yaml
16,17c16,17
<     - --listen-client-urls=https://127.0.0.1:2379
<     - --advertise-client-urls=https://127.0.0.1:2379
---
>     - --listen-client-urls=http://127.0.0.1:2379
>     - --advertise-client-urls=http://127.0.0.1:2379
19,27c19
<     - --key-file=/etc/kubernetes/pki/etcd/server.key
<     - --trusted-ca-file=/etc/kubernetes/pki/etcd/ca.crt
<     - --peer-cert-file=/etc/kubernetes/pki/etcd/peer.crt
<     - --peer-key-file=/etc/kubernetes/pki/etcd/peer.key
<     - --client-cert-auth=true
<     - --peer-client-cert-auth=true
<     - --cert-file=/etc/kubernetes/pki/etcd/server.crt
<     - --peer-trusted-ca-file=/etc/kubernetes/pki/etcd/ca.crt
<     image: gcr.io/google_containers/etcd-amd64:3.1.12
---
>     image: gcr.io/google_containers/etcd-amd64:3.1.11
29,35d20
<       exec:
<         command:
<         - /bin/sh
<         - -ec
<         - ETCDCTL_API=3 etcdctl --endpoints=127.0.0.1:2379 --cacert=/etc/kubernetes/pki/etcd/ca.crt
<           --cert=/etc/kubernetes/pki/etcd/healthcheck-client.crt --key=/etc/kubernetes/pki/etcd/healthcheck-client.key
<           get foo
36a22,26
>       httpGet:
>         host: 127.0.0.1
>         path: /health
>         port: 2379
>         scheme: HTTP
43,45c33
<       name: etcd-data
<     - mountPath: /etc/kubernetes/pki/etcd
<       name: etcd-certs
---
>       name: etcd
51,55c39
<     name: etcd-data
<   - hostPath:
<       path: /etc/kubernetes/pki/etcd
<       type: DirectoryOrCreate
<     name: etcd-certs
---
>     name: etcd
root@vagrant:~# ls /etc/kubernetes/pki/etcd
ca.crt  ca.key  healthcheck-client.crt  healthcheck-client.key  peer.crt  peer.key  server.crt  server.key
```
Cluster on 1.9.6, Ubuntu 17.10, Vagrant:
root@vagrant:/vagrant# 1.10_kubernetes/server/bin/kubeadm upgrade apply v1.10.0
[preflight] Running pre-flight checks.
[upgrade] Making sure the cluster is healthy:
[upgrade/config] Making sure the configuration is correct:
[upgrade/config] Reading configuration from the cluster...
[upgrade/config] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
[upgrade/version] You have chosen to change the cluster version to "v1.10.0"
[upgrade/versions] Cluster version: v1.9.6
[upgrade/versions] kubeadm version: v1.10.0
[upgrade/confirm] Are you sure you want to proceed with the upgrade? [y/N]: y
[upgrade/prepull] Will prepull images for components [kube-apiserver kube-controller-manager kube-scheduler]
[upgrade/apply] Upgrading your Static Pod-hosted control plane to version "v1.10.0"...
[etcd] Wrote Static Pod manifest for a local etcd instance to "/etc/kubernetes/tmp/kubeadm-upgraded-manifests262738652/etcd.yaml"
[certificates] Generated etcd/ca certificate and key.
[certificates] Generated etcd/server certificate and key.
[certificates] etcd/server serving cert is signed for DNS names [localhost] and IPs [127.0.0.1]
[certificates] Generated etcd/peer certificate and key.
[certificates] etcd/peer serving cert is signed for DNS names [vagrant] and IPs [10.0.2.15]
[certificates] Generated etcd/healthcheck-client certificate and key.
[upgrade/staticpods] Moved new manifest to "/etc/kubernetes/manifests/etcd.yaml" and backed up old manifest to "/etc/kubernetes/tmp/kubeadm-backup-manifests858209931/etcd.yaml"
[upgrade/staticpods] Waiting for the kubelet to restart the component
[apiclient] Error getting Pods with label selector "component=etcd" [the server was unable to return a response in the time allotted, but may still be processing the request (get pods)]
[apiclient] Error getting Pods with label selector "component=etcd" [Get https://10.0.2.15:6443/api/v1/namespaces/kube-system/pods?labelSelector=component%3Detcd: http2: server sent GOAWAY and closed the connection; LastStreamID=27, ErrCode=NO_ERROR, debug=""]
[apiclient] Error getting Pods with label selector "component=etcd" [Get https://10.0.2.15:6443/api/v1/namespaces/kube-system/pods?labelSelector=component%3Detcd: net/http: TLS handshake timeout]
[apiclient] Error getting Pods with label selector "component=etcd" [the server was unable to return a response in the time allotted, but may still be processing the request (get pods)]
[apiclient] Error getting Pods with label selector "component=etcd" [Get https://10.0.2.15:6443/api/v1/namespaces/kube-system/pods?labelSelector=component%3Detcd: http2: server sent GOAWAY and closed the connection; LastStreamID=3, ErrCode=NO_ERROR, debug=""]
[upgrade/apply] FATAL: fatal error when trying to upgrade the etcd cluster: couldn't upgrade control plane. kubeadm has tried to recover everything into the earlier state. Errors faced: [timed out waiting for the condition], rolled the state back to pre-upgrade state
This is my reproduction environment: https://github.com/stealthybox/vagrant-kubeadm-testing
For bootstrapping, change these lines to `1.9.6-00`: https://github.com/stealthybox/vagrant-kubeadm-testing/blob/9d4493e990c9bd742107b317641267c3ef3640cd/Vagrantfile#L18-L20
Then download the 1.10 server binaries into the repo, and they will be available in the guest under /vagrant:
https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG-1.10.md#server-binaries
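For reference, fetching and unpacking those server binaries can be sketched like this; the dl.k8s.io URL pattern is an assumption based on the CHANGELOG download links, so verify it there before relying on it:

```shell
# Hypothetical fetch of the v1.10.0 server tarball; the URL pattern mirrors
# the links in the CHANGELOG, but double-check it there.
K8S_VERSION=v1.10.0
TARBALL_URL="https://dl.k8s.io/${K8S_VERSION}/kubernetes-server-linux-amd64.tar.gz"
echo "$TARBALL_URL"
# Uncomment to actually download and unpack into the repo checkout:
# curl -fsSL "$TARBALL_URL" | tar -xz   # yields ./kubernetes/server/bin/kubeadm etc.
```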
kubelet etcd-related logs:
root@vagrant:~# journalctl -xefu kubelet | grep -i etcd
Mar 28 16:32:07 vagrant kubelet[14676]: W0328 16:32:07.808776 14676 status_manager.go:459] Failed to get status for pod "etcd-vagrant_kube-system(7278f85057e8bf5cb81c9f96d3b25320)": Get https://10.0.2.15:6443/api/v1/namespaces/kube-system/pods/etcd-vagrant: dial tcp 10.0.2.15:6443: getsockopt: connection refused
Mar 28 16:32:07 vagrant kubelet[14676]: I0328 16:32:07.880412 14676 reconciler.go:217] operationExecutor.VerifyControllerAttachedVolume started for volume "etcd" (UniqueName: "kubernetes.io/host-path/7278f85057e8bf5cb81c9f96d3b25320-etcd") pod "etcd-vagrant" (UID: "7278f85057e8bf5cb81c9f96d3b25320")
Mar 28 16:34:27 vagrant kubelet[14676]: W0328 16:34:27.472534 14676 status_manager.go:459] Failed to get status for pod "etcd-vagrant_kube-system(7278f85057e8bf5cb81c9f96d3b25320)": Get https://10.0.2.15:6443/api/v1/namespaces/kube-system/pods/etcd-vagrant: dial tcp 10.0.2.15:6443: getsockopt: connection refused
Mar 28 16:57:33 vagrant kubelet[14676]: W0328 16:57:33.683648 14676 kubelet.go:1597] Deleting mirror pod "etcd-vagrant_kube-system(122348c3-32a6-11e8-8dc5-080027d6be16)" because it is outdated
Mar 28 16:57:33 vagrant kubelet[14676]: I0328 16:57:33.725564 14676 reconciler.go:217] operationExecutor.VerifyControllerAttachedVolume started for volume "etcd-certs" (UniqueName: "kubernetes.io/host-path/37936d2107e31b457cada6c2433469f1-etcd-certs") pod "etcd-vagrant" (UID: "37936d2107e31b457cada6c2433469f1")
Mar 28 16:57:33 vagrant kubelet[14676]: I0328 16:57:33.725637 14676 reconciler.go:217] operationExecutor.VerifyControllerAttachedVolume started for volume "etcd-data" (UniqueName: "kubernetes.io/host-path/37936d2107e31b457cada6c2433469f1-etcd-data") pod "etcd-vagrant" (UID: "37936d2107e31b457cada6c2433469f1")
Mar 28 16:57:35 vagrant kubelet[14676]: E0328 16:57:35.484901 14676 kuberuntime_container.go:66] Can't make a ref to pod "etcd-vagrant_kube-system(7278f85057e8bf5cb81c9f96d3b25320)", container etcd: selfLink was empty, can't make reference
Mar 28 16:57:35 vagrant kubelet[14676]: I0328 16:57:35.889458 14676 reconciler.go:191] operationExecutor.UnmountVolume started for volume "etcd" (UniqueName: "kubernetes.io/host-path/7278f85057e8bf5cb81c9f96d3b25320-etcd") pod "7278f85057e8bf5cb81c9f96d3b25320" (UID: "7278f85057e8bf5cb81c9f96d3b25320")
Mar 28 16:57:35 vagrant kubelet[14676]: I0328 16:57:35.889595 14676 operation_generator.go:643] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7278f85057e8bf5cb81c9f96d3b25320-etcd" (OuterVolumeSpecName: "etcd") pod "7278f85057e8bf5cb81c9f96d3b25320" (UID: "7278f85057e8bf5cb81c9f96d3b25320"). InnerVolumeSpecName "etcd". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 28 16:57:35 vagrant kubelet[14676]: I0328 16:57:35.989892 14676 reconciler.go:297] Volume detached for volume "etcd" (UniqueName: "kubernetes.io/host-path/7278f85057e8bf5cb81c9f96d3b25320-etcd") on node "vagrant" DevicePath ""
Mar 28 16:58:03 vagrant kubelet[14676]: E0328 16:58:03.688878 14676 mirror_client.go:88] Failed deleting a mirror pod "etcd-vagrant_kube-system": Timeout: request did not complete within allowed duration
Mar 28 16:58:03 vagrant kubelet[14676]: E0328 16:58:03.841447 14676 event.go:200] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"etcd-vagrant.152023ff626cfbc5", GenerateName:"", Namespace:"kube-system", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:""}, InvolvedObject:v1.ObjectReference{Kind:"Pod", Namespace:"kube-system", Name:"etcd-vagrant", UID:"37936d2107e31b457cada6c2433469f1", APIVersion:"v1", ResourceVersion:"", FieldPath:""}, Reason:"SuccessfulMountVolume", Message:"MountVolume.SetUp succeeded for volume \"etcd-certs\" ", Source:v1.EventSource{Component:"kubelet", Host:"vagrant"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xbea7103f713e59c5, ext:1534226953099, loc:(*time.Location)(0x5859e60)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xbea7103f713e59c5, ext:1534226953099, loc:(*time.Location)(0x5859e60)}}, Count:1, Type:"Normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'Timeout: request did not complete within allowed duration' (will not retry!)
Mar 28 16:58:33 vagrant kubelet[14676]: E0328 16:58:33.844276 14676 event.go:200] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"etcd-vagrant.152023ff626cfb82", GenerateName:"", Namespace:"kube-system", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:""}, InvolvedObject:v1.ObjectReference{Kind:"Pod", Namespace:"kube-system", Name:"etcd-vagrant", UID:"37936d2107e31b457cada6c2433469f1", APIVersion:"v1", ResourceVersion:"", FieldPath:""}, Reason:"SuccessfulMountVolume", Message:"MountVolume.SetUp succeeded for volume \"etcd-data\" ", Source:v1.EventSource{Component:"kubelet", Host:"vagrant"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xbea7103f713e5982, ext:1534226953033, loc:(*time.Location)(0x5859e60)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xbea7103f713e5982, ext:1534226953033, loc:(*time.Location)(0x5859e60)}}, Count:1, Type:"Normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'Timeout: request did not complete within allowed duration' (will not retry!)
Mar 28 16:59:03 vagrant kubelet[14676]: E0328 16:59:03.692450 14676 kubelet.go:1612] Failed creating a mirror pod for "etcd-vagrant_kube-system(37936d2107e31b457cada6c2433469f1)": the server was unable to return a response in the time allotted, but may still be processing the request (post pods)
Mar 28 16:59:03 vagrant kubelet[14676]: E0328 16:59:03.848007 14676 event.go:200] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"etcd-vagrant.152023ff641f915f", GenerateName:"", Namespace:"kube-system", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:""}, InvolvedObject:v1.ObjectReference{Kind:"Pod", Namespace:"kube-system", Name:"etcd-vagrant", UID:"7278f85057e8bf5cb81c9f96d3b25320", APIVersion:"v1", ResourceVersion:"", FieldPath:"spec.containers{etcd}"}, Reason:"Killing", Message:"Killing container with id docker://etcd:Need to kill Pod", Source:v1.EventSource{Component:"kubelet", Host:"vagrant"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xbea7103f72f0ef5f, ext:1534255433999, loc:(*time.Location)(0x5859e60)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xbea7103f72f0ef5f, ext:1534255433999, loc:(*time.Location)(0x5859e60)}}, Count:1, Type:"Normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'Timeout: request did not complete within allowed duration' (will not retry!)
Mar 28 16:59:14 vagrant kubelet[14676]: W0328 16:59:14.472661 14676 kubelet.go:1597] Deleting mirror pod "etcd-vagrant_kube-system(122348c3-32a6-11e8-8dc5-080027d6be16)" because it is outdated
Mar 28 16:59:14 vagrant kubelet[14676]: W0328 16:59:14.473138 14676 status_manager.go:459] Failed to get status for pod "etcd-vagrant_kube-system(37936d2107e31b457cada6c2433469f1)": Get https://10.0.2.15:6443/api/v1/namespaces/kube-system/pods/etcd-vagrant: dial tcp 10.0.2.15:6443: getsockopt: connection refused
Mar 28 16:59:14 vagrant kubelet[14676]: E0328 16:59:14.473190 14676 mirror_client.go:88] Failed deleting a mirror pod "etcd-vagrant_kube-system": Delete https://10.0.2.15:6443/api/v1/namespaces/kube-system/pods/etcd-vagrant: dial tcp 10.0.2.15:6443: getsockopt: connection refused
Mar 28 16:59:14 vagrant kubelet[14676]: E0328 16:59:14.473658 14676 kubelet.go:1612] Failed creating a mirror pod for "etcd-vagrant_kube-system(37936d2107e31b457cada6c2433469f1)": Post https://10.0.2.15:6443/api/v1/namespaces/kube-system/pods: dial tcp 10.0.2.15:6443: getsockopt: connection refused
Mar 28 16:59:15 vagrant kubelet[14676]: W0328 16:59:15.481336 14676 kubelet.go:1597] Deleting mirror pod "etcd-vagrant_kube-system(122348c3-32a6-11e8-8dc5-080027d6be16)" because it is outdated
Mar 28 16:59:15 vagrant kubelet[14676]: E0328 16:59:15.483705 14676 mirror_client.go:88] Failed deleting a mirror pod "etcd-vagrant_kube-system": Delete https://10.0.2.15:6443/api/v1/namespaces/kube-system/pods/etcd-vagrant: dial tcp 10.0.2.15:6443: getsockopt: connection refused
Mar 28 16:59:15 vagrant kubelet[14676]: E0328 16:59:15.497391 14676 kubelet.go:1612] Failed creating a mirror pod for "etcd-vagrant_kube-system(37936d2107e31b457cada6c2433469f1)": Post https://10.0.2.15:6443/api/v1/namespaces/kube-system/pods: dial tcp 10.0.2.15:6443: getsockopt: connection refused
Mar 28 17:00:34 vagrant kubelet[14676]: W0328 17:00:34.475851 14676 kubelet.go:1597] Deleting mirror pod "etcd-vagrant_kube-system(122348c3-32a6-11e8-8dc5-080027d6be16)" because it is outdated
Mar 28 17:01:07 vagrant kubelet[14676]: W0328 17:01:07.720076 14676 status_manager.go:459] Failed to get status for pod "etcd-vagrant_kube-system(37936d2107e31b457cada6c2433469f1)": Get https://10.0.2.15:6443/api/v1/namespaces/kube-system/pods/etcd-vagrant: http2: server sent GOAWAY and closed the connection; LastStreamID=47, ErrCode=NO_ERROR, debug=""
Mar 28 17:01:07 vagrant kubelet[14676]: E0328 17:01:07.720107 14676 mirror_client.go:88] Failed deleting a mirror pod "etcd-vagrant_kube-system": Delete https://10.0.2.15:6443/api/v1/namespaces/kube-system/pods/etcd-vagrant: http2: server sent GOAWAY and closed the connection; LastStreamID=47, ErrCode=NO_ERROR, debug=""; some request body already written
Mar 28 17:01:07 vagrant kubelet[14676]: E0328 17:01:07.725335 14676 kubelet.go:1612] Failed creating a mirror pod for "etcd-vagrant_kube-system(37936d2107e31b457cada6c2433469f1)": Post https://10.0.2.15:6443/api/v1/namespaces/kube-system/pods: dial tcp 10.0.2.15:6443: getsockopt: connection refused
Mar 28 17:01:07 vagrant kubelet[14676]: I0328 17:01:07.728709 14676 reconciler.go:217] operationExecutor.VerifyControllerAttachedVolume started for volume "etcd" (UniqueName: "kubernetes.io/host-path/7278f85057e8bf5cb81c9f96d3b25320-etcd") pod "etcd-vagrant" (UID: "7278f85057e8bf5cb81c9f96d3b25320")
Mar 28 17:01:07 vagrant kubelet[14676]: W0328 17:01:07.734475 14676 status_manager.go:459] Failed to get status for pod "etcd-vagrant_kube-system(37936d2107e31b457cada6c2433469f1)": Get https://10.0.2.15:6443/api/v1/namespaces/kube-system/pods/etcd-vagrant: dial tcp 10.0.2.15:6443: getsockopt: connection refused
Mar 28 17:01:07 vagrant kubelet[14676]: W0328 17:01:07.740642 14676 status_manager.go:459] Failed to get status for pod "etcd-vagrant_kube-system(7278f85057e8bf5cb81c9f96d3b25320)": Get https://10.0.2.15:6443/api/v1/namespaces/kube-system/pods/etcd-vagrant: dial tcp 10.0.2.15:6443: getsockopt: connection refused
Mar 28 17:01:09 vagrant kubelet[14676]: E0328 17:01:09.484412 14676 kuberuntime_container.go:66] Can't make a ref to pod "etcd-vagrant_kube-system(37936d2107e31b457cada6c2433469f1)", container etcd: selfLink was empty, can't make reference
Mar 28 17:01:09 vagrant kubelet[14676]: I0328 17:01:09.848794 14676 reconciler.go:191] operationExecutor.UnmountVolume started for volume "etcd-certs" (UniqueName: "kubernetes.io/host-path/37936d2107e31b457cada6c2433469f1-etcd-certs") pod "37936d2107e31b457cada6c2433469f1" (UID: "37936d2107e31b457cada6c2433469f1")
Mar 28 17:01:09 vagrant kubelet[14676]: I0328 17:01:09.849282 14676 reconciler.go:191] operationExecutor.UnmountVolume started for volume "etcd-data" (UniqueName: "kubernetes.io/host-path/37936d2107e31b457cada6c2433469f1-etcd-data") pod "37936d2107e31b457cada6c2433469f1" (UID: "37936d2107e31b457cada6c2433469f1")
Mar 28 17:01:09 vagrant kubelet[14676]: I0328 17:01:09.849571 14676 operation_generator.go:643] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/37936d2107e31b457cada6c2433469f1-etcd-data" (OuterVolumeSpecName: "etcd-data") pod "37936d2107e31b457cada6c2433469f1" (UID: "37936d2107e31b457cada6c2433469f1"). InnerVolumeSpecName "etcd-data". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 28 17:01:09 vagrant kubelet[14676]: I0328 17:01:09.849503 14676 operation_generator.go:643] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/37936d2107e31b457cada6c2433469f1-etcd-certs" (OuterVolumeSpecName: "etcd-certs") pod "37936d2107e31b457cada6c2433469f1" (UID: "37936d2107e31b457cada6c2433469f1"). InnerVolumeSpecName "etcd-certs". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 28 17:01:09 vagrant kubelet[14676]: I0328 17:01:09.949925 14676 reconciler.go:297] Volume detached for volume "etcd-certs" (UniqueName: "kubernetes.io/host-path/37936d2107e31b457cada6c2433469f1-etcd-certs") on node "vagrant" DevicePath ""
Mar 28 17:01:09 vagrant kubelet[14676]: I0328 17:01:09.949975 14676 reconciler.go:297] Volume detached for volume "etcd-data" (UniqueName: "kubernetes.io/host-path/37936d2107e31b457cada6c2433469f1-etcd-data") on node "vagrant" DevicePath ""
The current workaround is to just keep retrying the upgrade; it eventually succeeds.
@stealthybox did you grab the logs from the etcd container in docker? Also, `grep -i etcd` may be masking some of the kubelet output (e.g. a relevant error message that just doesn't contain the container name).
Another odd thing happened to me related to this bug: the kubeadm upgrade marked the etcd upgrade as complete before the new etcd image had been pulled and the new static pod deployed. This causes the upgrade to time out at a later step, and the upgrade rollback then fails, leaving the cluster in a broken state. To recover the cluster, the original etcd static pod manifest has to be restored.
Oh yeah, that happened to me too. My cluster went completely down. Can someone share pointers on how to recover from this?
As @detiber described, on the second upgrade attempt I found the backed-up content in /etc/kubernetes/tmp, and figuring etcd might be the culprit, I copied its old manifest over the new manifest in the manifests folder. At that point I had nothing to lose, since I had completely lost connectivity to the cluster. Then, I don't remember exactly, but I think I restarted the whole box and later downgraded everything to v1.9.6. In the end I regained control of the cluster and am now waiting for v1.10.0 to drive me crazy again. It was not fun at all...
If you roll back the etcd static pod manifest from /etc/kubernetes/tmp, it's also important to roll back the apiserver manifest to the 1.9 version, because of 1.10's new TLS configuration.
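That manual rollback can be sketched as follows, assuming the default kubeadm paths; the tmp directory name is generated per run, so locate it first (KUBE_ROOT is only a hypothetical convenience for dry-running against a copy):

```shell
# On a real master this is /etc/kubernetes.
KUBE_ROOT=${KUBE_ROOT:-/etc/kubernetes}

# Pick the most recent backup directory left behind by kubeadm.
BACKUP_DIR=$(ls -dt "$KUBE_ROOT"/tmp/kubeadm-backup-manifests* 2>/dev/null | head -1)

if [ -n "$BACKUP_DIR" ]; then
  # Restore both manifests: rolling back etcd alone leaves a 1.9 apiserver
  # talking plaintext to a TLS-only etcd (or vice versa).
  cp "$BACKUP_DIR/etcd.yaml" "$KUBE_ROOT/manifests/etcd.yaml"
  cp "$BACKUP_DIR/kube-apiserver.yaml" "$KUBE_ROOT/manifests/kube-apiserver.yaml" 2>/dev/null || true
  # The kubelet watches the manifests dir; restarting it forces a re-sync.
  systemctl restart kubelet
fi
```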
^ You probably won't need to do this, since we believe the etcd upgrade blocks the rest of the control plane upgrade.
It looks like only the etcd manifest failed to roll back on upgrade failure; everything else was fine. After I moved the backup manifest back and restarted the kubelet, everything worked again.
I hit the same timeout issue, and kubeadm rolled the kube-apiserver manifest back to 1.9.6 but left the etcd manifest as-is (read: using TLS). Obviously the apiserver then failed miserably, effectively bricking my master node. I think this is a good candidate for a separate issue report.
@dvdmuckle @codepainters, unfortunately whether the rollback succeeds depends on which component hits the race condition (etcd or the api server). I found a fix for the race condition, but it breaks kubeadm upgrades entirely. I'm working with @stealthybox to find a proper path to fixing the upgrade.
@codepainters I believe it's the same issue.
There are a couple of underlying issues causing this problem.

As a result, the current upgrade only succeeds if there happens to be a pod status update for the etcd pod that changes the hash before the kubelet picks up the new static manifest for etcd. In addition, the upgrade tool needs the api server to be available for querying the api before it updates the apiserver manifest, for the first part of the apiserver upgrade.
@detiber and I got on a call to discuss the changes needed to the upgrade process.

We plan to implement 3 fixes for this bug in the 1.10.x patch releases:
1. Remove etcd TLS from upgrades.
The current upgrade loop applies fixes serially, in lockstep per component. Component upgrades have no knowledge of dependent component configuration. Verifying an upgrade requires the APIServer to be available to check pod status. Etcd TLS requires a coupled etcd+apiserver configuration change, which violates this contract. This is the minimal viable change to fix this issue, and it leaves upgraded clusters with insecure etcd.

2. Fix the race condition around mirror pod hash changes:
https://github.com/kubernetes/kubernetes/blob/master/cmd/kubeadm/app/phases/upgrade/staticpods.go#L189.
Upgrades will now be correct, assuming compatibility between the etcd and apiserver flags.

3. Upgrade TLS in a dedicated, separate phase.
Etcd and the APIServer need to be upgraded together. Something like `kubeadm alpha phase ensure-etcd-tls`(?).
This phase should be runnable independently of a cluster upgrade. During a cluster upgrade, this phase should run before updating all of the components.
For 1.11 we're proposing:
- TODO: use the CRI to get pod info (runnable demo with `crictl`).
- Caveat: the CRI in dockershim and other container runtimes does not currently support backward compatibility for breaking CRI changes.
PR to fix the static pod update race condition: https://github.com/kubernetes/kubernetes/pull/61942
Cherry-pick PR against the release-1.10 branch: https://github.com/kubernetes/kubernetes/pull/61954
@detiber would you mind explaining the race condition we're talking about here? I'm not familiar with kubeadm internals, but it sounds interesting.
@codepainters see https://github.com/kubernetes/kubeadm/issues/740#issuecomment-377263347
FYI - hit the same issue upgrading from 1.9.3.

I tried the retry workaround several times. Eventually I hit the API server race condition and could not roll back the upgrade.
@stealthybox thx, I didn't get it on first read.
I have the same problem..
[ERROR APIServerHealth]: the API server is unhealthy; /healthz did not return "ok"
[ERROR MasterNodesReady]: couldn't list masters in cluster: Get https........ Please help with this. I'm upgrading from 1.9.3 to 1.10.0. Initially it got stuck at a certain point: "[upgrade/staticpods] Waiting for the kubelet to restart the component".
The workaround above is to verify the certs and upgrade the etcd and apiserver pods while bypassing the checks.

Check your config and add flags for your use case:
kubectl -n kube-system edit cm kubeadm-config  # change featureGates
...
featureGates: {}
...
kubeadm alpha phase certs all
kubeadm alpha phase etcd local
kubeadm alpha phase controlplane all
kubeadm alpha phase upload-config
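If you want to confirm the result, one way to check that etcd is now serving TLS is to query it with the healthcheck client certs kubeadm generates; the paths below assume the default pki layout, and the command mirrors the liveness probe in the new etcd manifest:

```shell
# Paths assume the default kubeadm pki layout.
ETCD_PKI=/etc/kubernetes/pki/etcd
HEALTH_CMD="etcdctl --endpoints=https://127.0.0.1:2379 --cacert=$ETCD_PKI/ca.crt --cert=$ETCD_PKI/healthcheck-client.crt --key=$ETCD_PKI/healthcheck-client.key endpoint health"

# Only run it where etcdctl (with the v3 API) is actually installed:
if command -v etcdctl >/dev/null 2>&1; then
  ETCDCTL_API=3 $HEALTH_CMD || echo "etcd TLS health check failed"
fi
```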
Thanks @stealthybox
For me, the `upgrade apply` process hung at `[upgrade/apply] Upgrading your Static Pod-hosted control plane to version "v1.10.1"...`, but the cluster upgraded successfully.
@stealthybox I'm not sure, but something seems broken after performing these steps, because `kubeadm upgrade plan` now hangs after this:
[preflight] Running pre-flight checks.
[upgrade] Making sure the cluster is healthy:
[upgrade/config] Making sure the configuration is correct:
[upgrade/config] Reading configuration from the cluster...
[upgrade/config] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
[upgrade] Fetching available versions to upgrade to
[upgrade/versions] Cluster version: v1.10.1
[upgrade/versions] kubeadm version: v1.10.1
[upgrade/versions] Latest stable version: v1.10.1
and when applying the update, it hangs at `[upgrade/apply] Upgrading your Static Pod-hosted control plane to version "v1.10.1"...` as well.
@kvaps @stealthybox This is most likely the etcd issue (kubeadm speaking plain HTTP/2 to a TLS-enabled etcd). See this other issue: https://github.com/kubernetes/kubeadm/issues/755

Honestly, I can't understand why the same TCP port is used for both the TLS and non-TLS etcd listeners; it only causes problems like this one. Getting a plain old _connection refused_ would give an immediate hint, whereas here I had to resort to tcpdump to understand what was going on.
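To illustrate the confusing failure mode, probing port 2379 both ways distinguishes the two cases; this is a sketch assuming the default kubeadm pki paths:

```shell
# Plaintext probe: against a TLS-only etcd this fails oddly (reset/garbled
# stream) rather than with a clean connection-refused.
PLAIN=$(curl -s -o /dev/null -w '%{http_code}' http://127.0.0.1:2379/health || echo refused)
echo "plain probe: $PLAIN"

# TLS probe using the kubeadm-generated healthcheck client cert
# (paths assume the default pki layout):
TLS=$(curl -s --cacert /etc/kubernetes/pki/etcd/ca.crt \
        --cert /etc/kubernetes/pki/etcd/healthcheck-client.crt \
        --key /etc/kubernetes/pki/etcd/healthcheck-client.key \
        https://127.0.0.1:2379/health || echo refused)
echo "tls probe: $TLS"
```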
Oh! The etcd health check only works with my local TLS patch.

To finish the upgrade, do the following:
kubeadm alpha phase controlplane all
kubeadm alpha phase upload-config
I edited the workaround above to correct this.
@stealthybox the second kubeadm command doesn't work for me:
# kubeadm alpha phase upload-config
The --config flag is mandatory
@renich just give it the file path to your config.

If you're not using custom settings, you can pass it an empty file. Here's a simple way to do that in bash:
1.10_kubernetes/server/bin/kubeadm alpha phase upload-config --config <(echo)
This should now be fixed by the merge of https://github.com/kubernetes/kubernetes/pull/62655 and will be part of the v1.10.2 release.
I can confirm that a 1.10.0 -> 1.10.2 upgrade with kubeadm 1.10.2 is smooth, with no timeouts.
For me, 1.10.0 -> 1.10.2 still times out, but with a different timeout:
[upgrade/staticpods] Waiting for the kubelet to restart the component
Static pod: kube-apiserver-master hash: a273591d3207fcd9e6fd0c308cc68d64
[upgrade/apply] FATAL: couldn't upgrade control plane. kubeadm has tried to recover everything into the earlier state. Errors faced: [timed out waiting for the condition]
I have no idea what to do about this...
@denis111 check the API server logs while doing the upgrade, using `docker ps`. I think you might be hitting the same problem I am.
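Concretely, something along these lines finds the running apiserver container and tails its logs (docker-based kubelets only; the name filter follows the kubelet's `k8s_<container>_<pod>_...` naming convention, so adjust if your runtime differs):

```shell
# Illustrative filter for the kubelet's docker container naming convention.
FILTER=k8s_kube-apiserver

if command -v docker >/dev/null 2>&1; then
  APISERVER=$(docker ps --filter "name=$FILTER" -q | head -1)
  if [ -n "$APISERVER" ]; then
    docker logs --tail 100 "$APISERVER"
  else
    echo "no running kube-apiserver container found"
  fi
fi
```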
@dvdmuckle Well, there's no error in that log, only entries starting with I and a few starting with W. And I don't think the kube-apiserver hash changes during the upgrade.
I have an ARM64 cluster on 1.9.3 that updated successfully to 1.9.7, but I got the same timeout issue upgrading from 1.9.7 to 1.10.2.
I even tried editing and recompiling kubeadm, increasing the timeouts (like this last commit: https://github.com/anguslees/kubernetes/commits/kubeadm-gusfork), with the same result.
$ sudo kubeadm upgrade apply v1.10.2 --force
[preflight] Running pre-flight checks.
[upgrade] Making sure the cluster is healthy:
[upgrade/config] Making sure the configuration is correct:
[upgrade/config] Reading configuration from the cluster...
[upgrade/config] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
[upgrade/version] You have chosen to change the cluster version to "v1.10.2"
[upgrade/versions] Cluster version: v1.9.7
[upgrade/versions] kubeadm version: v1.10.2-dirty
[upgrade/version] Found 1 potential version compatibility errors but skipping since the --force flag is set:
- Specified version to upgrade to "v1.10.2" is higher than the kubeadm version "v1.10.2-dirty". Upgrade kubeadm first using the tool you used to install kubeadm
[upgrade/prepull] Will prepull images for components [kube-apiserver kube-controller-manager kube-scheduler]
[upgrade/apply] Upgrading your Static Pod-hosted control plane to version "v1.10.2"...
Static pod: kube-apiserver-kubemaster1 hash: ed7578d5bf9314188dca798386bcfb0e
Static pod: kube-controller-manager-kubemaster1 hash: e0c3f578f1c547dcf9996e1d3390c10c
Static pod: kube-scheduler-kubemaster1 hash: 52e767858f52ac4aba448b1a113884ee
[upgrade/etcd] Upgrading to TLS for etcd
Static pod: etcd-kubemaster1 hash: 413224efa82e36533ce93e30bd18e3a8
[etcd] Wrote Static Pod manifest for a local etcd instance to "/etc/kubernetes/tmp/kubeadm-upgraded-manifests346927148/etcd.yaml"
[certificates] Using the existing etcd/ca certificate and key.
[certificates] Using the existing etcd/server certificate and key.
[certificates] Using the existing etcd/peer certificate and key.
[certificates] Using the existing etcd/healthcheck-client certificate and key.
[upgrade/staticpods] Moved new manifest to "/etc/kubernetes/manifests/etcd.yaml" and backed up old manifest to "/etc/kubernetes/tmp/kubeadm-backup-manifests190581659/etcd.yaml"
[upgrade/staticpods] Not waiting for pod-hash change for component "etcd"
[upgrade/etcd] Waiting for etcd to become available
[util/etcd] Waiting 30s for initial delay
[util/etcd] Attempting to get etcd status 1/10
[util/etcd] Attempt failed with error: dial tcp 127.0.0.1:2379: getsockopt: connection refused
[util/etcd] Waiting 15s until next retry
[util/etcd] Attempting to get etcd status 2/10
[util/etcd] Attempt failed with error: dial tcp 127.0.0.1:2379: getsockopt: connection refused
[util/etcd] Waiting 15s until next retry
[util/etcd] Attempting to get etcd status 3/10
[util/etcd] Attempt failed with error: dial tcp 127.0.0.1:2379: getsockopt: connection refused
[util/etcd] Waiting 15s until next retry
[util/etcd] Attempting to get etcd status 4/10
[upgrade/staticpods] Writing new Static Pod manifests to "/etc/kubernetes/tmp/kubeadm-upgraded-manifests346927148"
[controlplane] Wrote Static Pod manifest for component kube-apiserver to "/etc/kubernetes/tmp/kubeadm-upgraded-manifests346927148/kube-apiserver.yaml"
[controlplane] Wrote Static Pod manifest for component kube-controller-manager to "/etc/kubernetes/tmp/kubeadm-upgraded-manifests346927148/kube-controller-manager.yaml"
[controlplane] Wrote Static Pod manifest for component kube-scheduler to "/etc/kubernetes/tmp/kubeadm-upgraded-manifests346927148/kube-scheduler.yaml"
[upgrade/staticpods] The etcd manifest will be restored if component "kube-apiserver" fails to upgrade
[certificates] Using the existing etcd/ca certificate and key.
[certificates] Using the existing apiserver-etcd-client certificate and key.
[upgrade/staticpods] Moved new manifest to "/etc/kubernetes/manifests/kube-apiserver.yaml" and backed up old manifest to "/etc/kubernetes/tmp/kubeadm-backup-manifests190581659/kube-apiserver.yaml"
[upgrade/staticpods] Waiting for the kubelet to restart the component
[upgrade/apply] FATAL: couldn't upgrade control plane. kubeadm has tried to recover everything into the earlier state. Errors faced: [timed out waiting for the condition]
A v1.10.2 -> v1.10.2 upgrade (maybe that makes no sense; I'm just testing...)
on Ubuntu 16.04.
And it fails with an error:
kubeadm upgrade apply v1.10.2
[upgrade/apply] FATAL: couldn't upgrade control plane. kubeadm has tried to recover everything into the earlier state. Errors faced: [timed out waiting for the condition]
I wonder whether this issue is still being tracked... I can't find it.
The upgrade also still appeared to fail with the timed out waiting for the condition
error.
Edit: I moved the discussion to the ticket https://github.com/kubernetes/kubeadm/issues/850, please discuss there.
In case anyone else hits this problem on 1.9.x:
If you are on AWS with custom hostnames, you need to edit the kubeadm-config configmap and set nodeName to the AWS internal name: ip-xx-xx-xx-xx.$REGION.compute.internal
kubectl -n kube-system edit cm kubeadm-config -oyaml
That's in addition to setting the etcd client to http. I haven't tried a newer build yet to check whether they've fixed that.
This is because kubeadm tries to read this path from the API: /api/v1/namespaces/kube-system/pods/kube-apiserver-$NodeName
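So the mirror-pod name kubeadm polls is derived directly from nodeName; if the configmap's nodeName doesn't match the node's real name, that GET 404s until the upgrade times out. A small sketch (the node name below is an illustrative AWS internal name, not one from this thread):

```shell
# kubeadm waits for a hash change on the mirror pod named kube-apiserver-<nodeName>,
# so a custom hostname in kubeadm-config breaks the lookup on AWS.
node_name="ip-10-0-0-12.eu-west-1.compute.internal"   # illustrative
pod_path="/api/v1/namespaces/kube-system/pods/kube-apiserver-${node_name}"
echo "$pod_path"
```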
I successfully updated my 1.9.7 deployment to 1.10.6 a few weeks ago, since the timeout was increased in 1.10.6.
I plan to upgrade to 1.11.2 as soon as the .deb packages are ready, since the same change applies to that version.
My cluster runs on-premises on ARM64 boards.
Most helpful comment
The workaround above is to upgrade the etcd and apiserver pods manually, verifying the certificates and bypassing the health checks.
Check your configuration and add the flags for your use case.
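Mechanically, that manual workaround amounts to the manifest swap kubeadm performs in the logs above: back up the old static-pod manifest and drop the upgraded one into the kubelet's manifest directory. A simulation on temp directories (the real paths would be /etc/kubernetes/manifests and kubeadm's tmp/backup dirs; don't run this blindly on a control-plane node):

```shell
# Simulated manifest swap; mktemp dirs stand in for the real kubeadm paths.
manifests=$(mktemp -d)   # stand-in for /etc/kubernetes/manifests
backup=$(mktemp -d)      # stand-in for the kubeadm-backup-manifests dir
staged=$(mktemp -d)      # stand-in for the kubeadm-upgraded-manifests dir
echo 'image: kube-apiserver:v1.9.7'  > "$manifests/kube-apiserver.yaml"
echo 'image: kube-apiserver:v1.10.2' > "$staged/kube-apiserver.yaml"
mv "$manifests/kube-apiserver.yaml" "$backup/"   # keep the old manifest for rollback
mv "$staged/kube-apiserver.yaml" "$manifests/"   # kubelet would restart the pod from this
```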