Kubeadm: upgrade from 1.9.6 to 1.10.0 fails with timeout

Created on March 28, 2018  ·  42 comments  ·  Source: kubernetes/kubeadm

BUG REPORT

Versions

kubeadm version (kubeadm version):

kubeadm version: &version.Info{Major:"1", Minor:"10", GitVersion:"v1.10.0", GitCommit:"fc32d2f3698e36b93322a3465f63a14e9f0eaead", GitTreeState:"clean", BuildDate:"2018-03-26T16:44:10Z", GoVersion:"go1.9.3", Compiler:"gc", Platform:"linux/amd64"}

Environment:

  • Kubernetes version (kubectl version):

ํด๋ผ์ด์–ธํŠธ ๋ฒ„์ „ : version.Info {Major : "1", Minor : "9", GitVersion : "v1.9.6", GitCommit : "9f8ebd171479bec0ada837d7ee641dec2f8c6dd1", GitTreeState : "clean", BuildDate : "2018-03-21T15 : 21 : 50Z ", GoVersion :"go1.9.3 ", ์ปดํŒŒ์ผ๋Ÿฌ :"gc ", ํ”Œ๋žซํผ :"linux / amd64 "}
์„œ๋ฒ„ ๋ฒ„์ „ : version.Info {Major : "1", Minor : "9", GitVersion : "v1.9.6", GitCommit : "9f8ebd171479bec0ada837d7ee641dec2f8c6dd1", GitTreeState : "clean", BuildDate : "2018-03-21T15 : 13 : 31Z ", GoVersion :"go1.9.3 ", ์ปดํŒŒ์ผ๋Ÿฌ :"gc ", ํ”Œ๋žซํผ :"linux / amd64 "}

  • ํด๋ผ์šฐ๋“œ ์ œ๊ณต ์—…์ฒด ๋˜๋Š” ํ•˜๋“œ์›จ์–ด ๊ตฌ์„ฑ :

Scaleway ๋ฒ ์–ด ๋ฉ”ํƒˆ C2S

  • OS (์˜ˆ : / etc / os-release) :

Ubuntu Xenial (16.04 LTS) (GNU / Linux 4.4.122-mainline-rev1 x86_64)

  • ์ปค๋„ (์˜ˆ : uname -a ) :

Linux amd64-master-1 4.4.122-mainline-rev1 # 1 SMP Sun Mar 18 10:44:19 UTC 2018 x86_64 x86_64 x86_64 GNU / Linux

What happened?

Trying to upgrade from 1.9.6 to 1.10.0, I get this error:

kubeadm upgrade apply v1.10.0
[preflight] Running pre-flight checks.
[upgrade] Making sure the cluster is healthy:
[upgrade/config] Making sure the configuration is correct:
[upgrade/config] Reading configuration from the cluster...
[upgrade/config] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
[upgrade/version] You have chosen to change the cluster version to "v1.10.0"
[upgrade/versions] Cluster version: v1.9.6
[upgrade/versions] kubeadm version: v1.10.0
[upgrade/confirm] Are you sure you want to proceed with the upgrade? [y/N]: y
[upgrade/prepull] Will prepull images for components [kube-apiserver kube-controller-manager kube-scheduler]
[upgrade/apply] Upgrading your Static Pod-hosted control plane to version "v1.10.0"...
[etcd] Wrote Static Pod manifest for a local etcd instance to "/etc/kubernetes/tmp/kubeadm-upgraded-manifests411909119/etcd.yaml"
[certificates] Generated etcd/ca certificate and key.
[certificates] Generated etcd/server certificate and key.
[certificates] etcd/server serving cert is signed for DNS names [localhost] and IPs [127.0.0.1]
[certificates] Generated etcd/peer certificate and key.
[certificates] etcd/peer serving cert is signed for DNS names [arm-master-1] and IPs [10.1.244.57]
[certificates] Generated etcd/healthcheck-client certificate and key.
[upgrade/staticpods] Moved new manifest to "/etc/kubernetes/manifests/etcd.yaml" and backed up old manifest to "/etc/kubernetes/tmp/kubeadm-backup-manifests180476754/etcd.yaml"
[upgrade/staticpods] Waiting for the kubelet to restart the component
[upgrade/apply] FATAL: fatal error when trying to upgrade the etcd cluster: couldn't upgrade control plane. kubeadm has tried to recover everything into the earlier state. Errors faced: [timed out waiting for the condition], rolled the state back to pre-upgrade state

What you expected to happen?

Successful upgrade

How to reproduce it (as minimally and precisely as possible)?

Install the 1.9.6 packages and init a 1.9.6 cluster:

curl -fsSL https://packages.cloud.google.com/apt/doc/apt-key.gpg | apt-key add -
echo "deb http://apt.kubernetes.io/ kubernetes-xenial main" | tee /etc/apt/sources.list.d/kubernetes.list
apt-get update -qq
apt-get install -qy kubectl=1.9.6-00
apt-get install -qy kubelet=1.9.6-00
apt-get install -qy kubeadm=1.9.6-00

Edit kubeadm-config and change featureGates from a string to a map, as reported in https://github.com/kubernetes/kubernetes/issues/61764:

kubectl -n kube-system edit cm kubeadm-config

....
featureGates: {}
....

Download kubeadm 1.10.0 and run kubeadm upgrade plan and kubeadm upgrade apply v1.10.0.
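For reference, a minimal sketch of that last step; the download URL and install path are assumptions, not taken from the original report:

# assumed location of the released v1.10.0 kubeadm binary; adjust as needed
curl -fsSL -o /usr/local/bin/kubeadm-1.10 \
  https://storage.googleapis.com/kubernetes-release/release/v1.10.0/bin/linux/amd64/kubeadm
chmod +x /usr/local/bin/kubeadm-1.10

/usr/local/bin/kubeadm-1.10 upgrade plan
/usr/local/bin/kubeadm-1.10 upgrade apply v1.10.0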

kind/bug priority/critical-urgent triaged

Most helpful comment

The workaround is to verify the certs and upgrade the etcd and apiserver pods manually, bypassing the checks.

Be sure to check your config and add any flags for your use case:

kubectl -n kube-system edit cm kubeadm-config  # change featureFlags
...
  featureGates: {}
...
kubeadm alpha phase certs all
kubeadm alpha phase etcd local
kubeadm alpha phase controlplane all
kubeadm alpha phase upload-config

All 42 comments

Working on reproducing this bug locally.

After retrying this 10 times, it finally worked.

Here's my etcd manifest diff:
```
root@vagrant:~# diff /etc/kubernetes/manifests/etcd.yaml /etc/kubernetes/tmp/kubeadm-backup-manifests858209931/etcd.yaml
16,17c16,17
<     - --listen-client-urls=https://127.0.0.1:2379
<     - --advertise-client-urls=https://127.0.0.1:2379
---
>     - --listen-client-urls=http://127.0.0.1:2379
>     - --advertise-client-urls=http://127.0.0.1:2379
19,27c19
<     - --key-file=/etc/kubernetes/pki/etcd/server.key
<     - --trusted-ca-file=/etc/kubernetes/pki/etcd/ca.crt
<     - --peer-cert-file=/etc/kubernetes/pki/etcd/peer.crt
<     - --peer-key-file=/etc/kubernetes/pki/etcd/peer.key
<     - --client-cert-auth=true
<     - --peer-client-cert-auth=true
<     - --cert-file=/etc/kubernetes/pki/etcd/server.crt
<     - --peer-trusted-ca-file=/etc/kubernetes/pki/etcd/ca.crt
<     image: gcr.io/google_containers/etcd-amd64:3.1.12
---
>     image: gcr.io/google_containers/etcd-amd64:3.1.11
29,35d20
<       exec:
<         command:
<         - /bin/sh
<         - -ec
<         - ETCDCTL_API=3 etcdctl --endpoints=127.0.0.1:2379 --cacert=/etc/kubernetes/pki/etcd/ca.crt
<           --cert=/etc/kubernetes/pki/etcd/healthcheck-client.crt --key=/etc/kubernetes/pki/etcd/healthcheck-client.key
<           get foo
36a22,26
>       httpGet:
>         host: 127.0.0.1
>         path: /health
>         port: 2379
>         scheme: HTTP
43,45c33
<       name: etcd-data
<     - mountPath: /etc/kubernetes/pki/etcd
<       name: etcd-certs
---
>       name: etcd
51,55c39
<     name: etcd-data
<   - hostPath:
<       path: /etc/kubernetes/pki/etcd
<       type: DirectoryOrCreate
<     name: etcd-certs
---
>     name: etcd

root@vagrant:~# ls /etc/kubernetes/pki/etcd
ca.crt  ca.key  healthcheck-client.crt  healthcheck-client.key  peer.crt  peer.key  server.crt  server.key
```

1.9.6 cluster on Ubuntu 17.10 Vagrant:

root@vagrant:/vagrant# 1.10_kubernetes/server/bin/kubeadm upgrade apply v1.10.0
[preflight] Running pre-flight checks.
[upgrade] Making sure the cluster is healthy:
[upgrade/config] Making sure the configuration is correct:
[upgrade/config] Reading configuration from the cluster...
[upgrade/config] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
[upgrade/version] You have chosen to change the cluster version to "v1.10.0"
[upgrade/versions] Cluster version: v1.9.6
[upgrade/versions] kubeadm version: v1.10.0
[upgrade/confirm] Are you sure you want to proceed with the upgrade? [y/N]: y
[upgrade/prepull] Will prepull images for components [kube-apiserver kube-controller-manager kube-scheduler]
[upgrade/apply] Upgrading your Static Pod-hosted control plane to version "v1.10.0"...
[etcd] Wrote Static Pod manifest for a local etcd instance to "/etc/kubernetes/tmp/kubeadm-upgraded-manifests262738652/etcd.yaml"
[certificates] Generated etcd/ca certificate and key.
[certificates] Generated etcd/server certificate and key.
[certificates] etcd/server serving cert is signed for DNS names [localhost] and IPs [127.0.0.1]
[certificates] Generated etcd/peer certificate and key.
[certificates] etcd/peer serving cert is signed for DNS names [vagrant] and IPs [10.0.2.15]
[certificates] Generated etcd/healthcheck-client certificate and key.
[upgrade/staticpods] Moved new manifest to "/etc/kubernetes/manifests/etcd.yaml" and backed up old manifest to "/etc/kubernetes/tmp/kubeadm-backup-manifests858209931/etcd.yaml"
[upgrade/staticpods] Waiting for the kubelet to restart the component
[apiclient] Error getting Pods with label selector "component=etcd" [the server was unable to return a response in the time allotted, but may still be processing the request (get pods)]
[apiclient] Error getting Pods with label selector "component=etcd" [Get https://10.0.2.15:6443/api/v1/namespaces/kube-system/pods?labelSelector=component%3Detcd: http2: server sent GOAWAY and closed the connection; LastStreamID=27, ErrCode=NO_ERROR, debug=""]
[apiclient] Error getting Pods with label selector "component=etcd" [Get https://10.0.2.15:6443/api/v1/namespaces/kube-system/pods?labelSelector=component%3Detcd: net/http: TLS handshake timeout]
[apiclient] Error getting Pods with label selector "component=etcd" [the server was unable to return a response in the time allotted, but may still be processing the request (get pods)]
[apiclient] Error getting Pods with label selector "component=etcd" [Get https://10.0.2.15:6443/api/v1/namespaces/kube-system/pods?labelSelector=component%3Detcd: http2: server sent GOAWAY and closed the connection; LastStreamID=3, ErrCode=NO_ERROR, debug=""]
[upgrade/apply] FATAL: fatal error when trying to upgrade the etcd cluster: couldn't upgrade control plane. kubeadm has tried to recover everything into the earlier state. Errors faced: [timed out waiting for the condition], rolled the state back to pre-upgrade state

์ด๊ฒƒ์€ ๋‚ด ์žฌํ˜„ ํ™˜๊ฒฝ์ž…๋‹ˆ๋‹ค : https://github.com/stealthybox/vagrant-kubeadm-testing

Change these lines to 1.9.6-00 for bootstrapping: https://github.com/stealthybox/vagrant-kubeadm-testing/blob/9d4493e990c9bd742107b317641267c3ef3640cd/Vagrantfile#L18-L20

Then download the 1.10 server binaries into the repo; they will be available in the guest under /vagrant:
https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG-1.10.md#server-binaries
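One possible way to fetch those binaries into the repo checkout (the tarball URL is an assumption based on the CHANGELOG link above):

# run on the host, from the vagrant-kubeadm-testing checkout
curl -fsSL -o server.tar.gz https://dl.k8s.io/v1.10.0/kubernetes-server-linux-amd64.tar.gz
mkdir -p 1.10_kubernetes
tar -xzf server.tar.gz -C 1.10_kubernetes --strip-components=1
# the guest then sees 1.10_kubernetes/server/bin/kubeadm under /vagrant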

kubelet etcd-related logs:

root@vagrant:~# journalctl -xefu kubelet | grep -i etcd
Mar 28 16:32:07 vagrant kubelet[14676]: W0328 16:32:07.808776   14676 status_manager.go:459] Failed to get status for pod "etcd-vagrant_kube-system(7278f85057e8bf5cb81c9f96d3b25320)": Get https://10.0.2.15:6443/api/v1/namespaces/kube-system/pods/etcd-vagrant: dial tcp 10.0.2.15:6443: getsockopt: connection refused
Mar 28 16:32:07 vagrant kubelet[14676]: I0328 16:32:07.880412   14676 reconciler.go:217] operationExecutor.VerifyControllerAttachedVolume started for volume "etcd" (UniqueName: "kubernetes.io/host-path/7278f85057e8bf5cb81c9f96d3b25320-etcd") pod "etcd-vagrant" (UID: "7278f85057e8bf5cb81c9f96d3b25320")
Mar 28 16:34:27 vagrant kubelet[14676]: W0328 16:34:27.472534   14676 status_manager.go:459] Failed to get status for pod "etcd-vagrant_kube-system(7278f85057e8bf5cb81c9f96d3b25320)": Get https://10.0.2.15:6443/api/v1/namespaces/kube-system/pods/etcd-vagrant: dial tcp 10.0.2.15:6443: getsockopt: connection refused
Mar 28 16:57:33 vagrant kubelet[14676]: W0328 16:57:33.683648   14676 kubelet.go:1597] Deleting mirror pod "etcd-vagrant_kube-system(122348c3-32a6-11e8-8dc5-080027d6be16)" because it is outdated
Mar 28 16:57:33 vagrant kubelet[14676]: I0328 16:57:33.725564   14676 reconciler.go:217] operationExecutor.VerifyControllerAttachedVolume started for volume "etcd-certs" (UniqueName: "kubernetes.io/host-path/37936d2107e31b457cada6c2433469f1-etcd-certs") pod "etcd-vagrant" (UID: "37936d2107e31b457cada6c2433469f1")
Mar 28 16:57:33 vagrant kubelet[14676]: I0328 16:57:33.725637   14676 reconciler.go:217] operationExecutor.VerifyControllerAttachedVolume started for volume "etcd-data" (UniqueName: "kubernetes.io/host-path/37936d2107e31b457cada6c2433469f1-etcd-data") pod "etcd-vagrant" (UID: "37936d2107e31b457cada6c2433469f1")
Mar 28 16:57:35 vagrant kubelet[14676]: E0328 16:57:35.484901   14676 kuberuntime_container.go:66] Can't make a ref to pod "etcd-vagrant_kube-system(7278f85057e8bf5cb81c9f96d3b25320)", container etcd: selfLink was empty, can't make reference
Mar 28 16:57:35 vagrant kubelet[14676]: I0328 16:57:35.889458   14676 reconciler.go:191] operationExecutor.UnmountVolume started for volume "etcd" (UniqueName: "kubernetes.io/host-path/7278f85057e8bf5cb81c9f96d3b25320-etcd") pod "7278f85057e8bf5cb81c9f96d3b25320" (UID: "7278f85057e8bf5cb81c9f96d3b25320")
Mar 28 16:57:35 vagrant kubelet[14676]: I0328 16:57:35.889595   14676 operation_generator.go:643] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7278f85057e8bf5cb81c9f96d3b25320-etcd" (OuterVolumeSpecName: "etcd") pod "7278f85057e8bf5cb81c9f96d3b25320" (UID: "7278f85057e8bf5cb81c9f96d3b25320"). InnerVolumeSpecName "etcd". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 28 16:57:35 vagrant kubelet[14676]: I0328 16:57:35.989892   14676 reconciler.go:297] Volume detached for volume "etcd" (UniqueName: "kubernetes.io/host-path/7278f85057e8bf5cb81c9f96d3b25320-etcd") on node "vagrant" DevicePath ""
Mar 28 16:58:03 vagrant kubelet[14676]: E0328 16:58:03.688878   14676 mirror_client.go:88] Failed deleting a mirror pod "etcd-vagrant_kube-system": Timeout: request did not complete within allowed duration
Mar 28 16:58:03 vagrant kubelet[14676]: E0328 16:58:03.841447   14676 event.go:200] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"etcd-vagrant.152023ff626cfbc5", GenerateName:"", Namespace:"kube-system", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:""}, InvolvedObject:v1.ObjectReference{Kind:"Pod", Namespace:"kube-system", Name:"etcd-vagrant", UID:"37936d2107e31b457cada6c2433469f1", APIVersion:"v1", ResourceVersion:"", FieldPath:""}, Reason:"SuccessfulMountVolume", Message:"MountVolume.SetUp succeeded for volume \"etcd-certs\" ", Source:v1.EventSource{Component:"kubelet", Host:"vagrant"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xbea7103f713e59c5, ext:1534226953099, loc:(*time.Location)(0x5859e60)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xbea7103f713e59c5, ext:1534226953099, loc:(*time.Location)(0x5859e60)}}, Count:1, Type:"Normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'Timeout: request did not complete within allowed duration' (will not retry!)
Mar 28 16:58:33 vagrant kubelet[14676]: E0328 16:58:33.844276   14676 event.go:200] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"etcd-vagrant.152023ff626cfb82", GenerateName:"", Namespace:"kube-system", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:""}, InvolvedObject:v1.ObjectReference{Kind:"Pod", Namespace:"kube-system", Name:"etcd-vagrant", UID:"37936d2107e31b457cada6c2433469f1", APIVersion:"v1", ResourceVersion:"", FieldPath:""}, Reason:"SuccessfulMountVolume", Message:"MountVolume.SetUp succeeded for volume \"etcd-data\" ", Source:v1.EventSource{Component:"kubelet", Host:"vagrant"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xbea7103f713e5982, ext:1534226953033, loc:(*time.Location)(0x5859e60)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xbea7103f713e5982, ext:1534226953033, loc:(*time.Location)(0x5859e60)}}, Count:1, Type:"Normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'Timeout: request did not complete within allowed duration' (will not retry!)
Mar 28 16:59:03 vagrant kubelet[14676]: E0328 16:59:03.692450   14676 kubelet.go:1612] Failed creating a mirror pod for "etcd-vagrant_kube-system(37936d2107e31b457cada6c2433469f1)": the server was unable to return a response in the time allotted, but may still be processing the request (post pods)
Mar 28 16:59:03 vagrant kubelet[14676]: E0328 16:59:03.848007   14676 event.go:200] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"etcd-vagrant.152023ff641f915f", GenerateName:"", Namespace:"kube-system", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:""}, InvolvedObject:v1.ObjectReference{Kind:"Pod", Namespace:"kube-system", Name:"etcd-vagrant", UID:"7278f85057e8bf5cb81c9f96d3b25320", APIVersion:"v1", ResourceVersion:"", FieldPath:"spec.containers{etcd}"}, Reason:"Killing", Message:"Killing container with id docker://etcd:Need to kill Pod", Source:v1.EventSource{Component:"kubelet", Host:"vagrant"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xbea7103f72f0ef5f, ext:1534255433999, loc:(*time.Location)(0x5859e60)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xbea7103f72f0ef5f, ext:1534255433999, loc:(*time.Location)(0x5859e60)}}, Count:1, Type:"Normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'Timeout: request did not complete within allowed duration' (will not retry!)
Mar 28 16:59:14 vagrant kubelet[14676]: W0328 16:59:14.472661   14676 kubelet.go:1597] Deleting mirror pod "etcd-vagrant_kube-system(122348c3-32a6-11e8-8dc5-080027d6be16)" because it is outdated
Mar 28 16:59:14 vagrant kubelet[14676]: W0328 16:59:14.473138   14676 status_manager.go:459] Failed to get status for pod "etcd-vagrant_kube-system(37936d2107e31b457cada6c2433469f1)": Get https://10.0.2.15:6443/api/v1/namespaces/kube-system/pods/etcd-vagrant: dial tcp 10.0.2.15:6443: getsockopt: connection refused
Mar 28 16:59:14 vagrant kubelet[14676]: E0328 16:59:14.473190   14676 mirror_client.go:88] Failed deleting a mirror pod "etcd-vagrant_kube-system": Delete https://10.0.2.15:6443/api/v1/namespaces/kube-system/pods/etcd-vagrant: dial tcp 10.0.2.15:6443: getsockopt: connection refused
Mar 28 16:59:14 vagrant kubelet[14676]: E0328 16:59:14.473658   14676 kubelet.go:1612] Failed creating a mirror pod for "etcd-vagrant_kube-system(37936d2107e31b457cada6c2433469f1)": Post https://10.0.2.15:6443/api/v1/namespaces/kube-system/pods: dial tcp 10.0.2.15:6443: getsockopt: connection refused
Mar 28 16:59:15 vagrant kubelet[14676]: W0328 16:59:15.481336   14676 kubelet.go:1597] Deleting mirror pod "etcd-vagrant_kube-system(122348c3-32a6-11e8-8dc5-080027d6be16)" because it is outdated
Mar 28 16:59:15 vagrant kubelet[14676]: E0328 16:59:15.483705   14676 mirror_client.go:88] Failed deleting a mirror pod "etcd-vagrant_kube-system": Delete https://10.0.2.15:6443/api/v1/namespaces/kube-system/pods/etcd-vagrant: dial tcp 10.0.2.15:6443: getsockopt: connection refused
Mar 28 16:59:15 vagrant kubelet[14676]: E0328 16:59:15.497391   14676 kubelet.go:1612] Failed creating a mirror pod for "etcd-vagrant_kube-system(37936d2107e31b457cada6c2433469f1)": Post https://10.0.2.15:6443/api/v1/namespaces/kube-system/pods: dial tcp 10.0.2.15:6443: getsockopt: connection refused
Mar 28 17:00:34 vagrant kubelet[14676]: W0328 17:00:34.475851   14676 kubelet.go:1597] Deleting mirror pod "etcd-vagrant_kube-system(122348c3-32a6-11e8-8dc5-080027d6be16)" because it is outdated
Mar 28 17:01:07 vagrant kubelet[14676]: W0328 17:01:07.720076   14676 status_manager.go:459] Failed to get status for pod "etcd-vagrant_kube-system(37936d2107e31b457cada6c2433469f1)": Get https://10.0.2.15:6443/api/v1/namespaces/kube-system/pods/etcd-vagrant: http2: server sent GOAWAY and closed the connection; LastStreamID=47, ErrCode=NO_ERROR, debug=""
Mar 28 17:01:07 vagrant kubelet[14676]: E0328 17:01:07.720107   14676 mirror_client.go:88] Failed deleting a mirror pod "etcd-vagrant_kube-system": Delete https://10.0.2.15:6443/api/v1/namespaces/kube-system/pods/etcd-vagrant: http2: server sent GOAWAY and closed the connection; LastStreamID=47, ErrCode=NO_ERROR, debug=""; some request body already written
Mar 28 17:01:07 vagrant kubelet[14676]: E0328 17:01:07.725335   14676 kubelet.go:1612] Failed creating a mirror pod for "etcd-vagrant_kube-system(37936d2107e31b457cada6c2433469f1)": Post https://10.0.2.15:6443/api/v1/namespaces/kube-system/pods: dial tcp 10.0.2.15:6443: getsockopt: connection refused
Mar 28 17:01:07 vagrant kubelet[14676]: I0328 17:01:07.728709   14676 reconciler.go:217] operationExecutor.VerifyControllerAttachedVolume started for volume "etcd" (UniqueName: "kubernetes.io/host-path/7278f85057e8bf5cb81c9f96d3b25320-etcd") pod "etcd-vagrant" (UID: "7278f85057e8bf5cb81c9f96d3b25320")
Mar 28 17:01:07 vagrant kubelet[14676]: W0328 17:01:07.734475   14676 status_manager.go:459] Failed to get status for pod "etcd-vagrant_kube-system(37936d2107e31b457cada6c2433469f1)": Get https://10.0.2.15:6443/api/v1/namespaces/kube-system/pods/etcd-vagrant: dial tcp 10.0.2.15:6443: getsockopt: connection refused
Mar 28 17:01:07 vagrant kubelet[14676]: W0328 17:01:07.740642   14676 status_manager.go:459] Failed to get status for pod "etcd-vagrant_kube-system(7278f85057e8bf5cb81c9f96d3b25320)": Get https://10.0.2.15:6443/api/v1/namespaces/kube-system/pods/etcd-vagrant: dial tcp 10.0.2.15:6443: getsockopt: connection refused
Mar 28 17:01:09 vagrant kubelet[14676]: E0328 17:01:09.484412   14676 kuberuntime_container.go:66] Can't make a ref to pod "etcd-vagrant_kube-system(37936d2107e31b457cada6c2433469f1)", container etcd: selfLink was empty, can't make reference
Mar 28 17:01:09 vagrant kubelet[14676]: I0328 17:01:09.848794   14676 reconciler.go:191] operationExecutor.UnmountVolume started for volume "etcd-certs" (UniqueName: "kubernetes.io/host-path/37936d2107e31b457cada6c2433469f1-etcd-certs") pod "37936d2107e31b457cada6c2433469f1" (UID: "37936d2107e31b457cada6c2433469f1")
Mar 28 17:01:09 vagrant kubelet[14676]: I0328 17:01:09.849282   14676 reconciler.go:191] operationExecutor.UnmountVolume started for volume "etcd-data" (UniqueName: "kubernetes.io/host-path/37936d2107e31b457cada6c2433469f1-etcd-data") pod "37936d2107e31b457cada6c2433469f1" (UID: "37936d2107e31b457cada6c2433469f1")
Mar 28 17:01:09 vagrant kubelet[14676]: I0328 17:01:09.849571   14676 operation_generator.go:643] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/37936d2107e31b457cada6c2433469f1-etcd-data" (OuterVolumeSpecName: "etcd-data") pod "37936d2107e31b457cada6c2433469f1" (UID: "37936d2107e31b457cada6c2433469f1"). InnerVolumeSpecName "etcd-data". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 28 17:01:09 vagrant kubelet[14676]: I0328 17:01:09.849503   14676 operation_generator.go:643] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/37936d2107e31b457cada6c2433469f1-etcd-certs" (OuterVolumeSpecName: "etcd-certs") pod "37936d2107e31b457cada6c2433469f1" (UID: "37936d2107e31b457cada6c2433469f1"). InnerVolumeSpecName "etcd-certs". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 28 17:01:09 vagrant kubelet[14676]: I0328 17:01:09.949925   14676 reconciler.go:297] Volume detached for volume "etcd-certs" (UniqueName: "kubernetes.io/host-path/37936d2107e31b457cada6c2433469f1-etcd-certs") on node "vagrant" DevicePath ""
Mar 28 17:01:09 vagrant kubelet[14676]: I0328 17:01:09.949975   14676 reconciler.go:297] Volume detached for volume "etcd-data" (UniqueName: "kubernetes.io/host-path/37936d2107e31b457cada6c2433469f1-etcd-data") on node "vagrant" DevicePath ""

The current workaround is to keep retrying the upgrade; at some point it succeeds.
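Roughly, that workaround amounts to a loop like the one below (an illustration only; it assumes kubeadm's -y/--yes flag to skip the confirmation prompt):

# keep retrying until the hash race happens to resolve in our favor
until kubeadm upgrade apply v1.10.0 -y; do
    echo "upgrade failed, retrying..."
    sleep 10
done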

@stealthybox did you get the docker logs from the etcd container? Also note that grep -i etcd may mask some relevant kubelet output (e.g. error messages that don't mention the container name but are still relevant).

Another odd thing related to this bug: kubeadm upgrade marked the etcd upgrade as complete before the new etcd image had been pulled and the new static pod deployed. This causes the upgrade to time out at a later step, and the upgrade rollback then fails, leaving the cluster in a broken state. Recovering the cluster requires restoring the original etcd static pod manifest.

Oh yeah, I'm stuck there too. My cluster is completely down. Could someone share instructions on how to recover from this?

On my second upgrade attempt, as @detiber described,

I found what had been backed up in /etc/kubernetes/tmp and, figuring etcd might be the culprit, copied its old manifest over the new one in the manifests folder. At that point I had nothing to lose, since I had completely lost control of the cluster. I don't remember exactly what I did afterwards, but I think I restarted the whole machine and later downgraded everything to v1.9.6. In the end I regained control of the cluster and lost any motivation to break it again with v1.10.0. It was no fun at all...

If you roll back the etcd static pod manifest from /etc/kubernetes/tmp, it is also important to roll the apiserver manifest back to its 1.9 version, because of the new TLS configuration in 1.10.

^ you probably won't need to do this, since I believe the etcd upgrade blocks the rest of the control plane upgrade.

It seems only the etcd manifest is not rolled back when the upgrade fails; everything else is fine. Moving the backed-up manifest back and restarting the kubelet brings everything back to normal.
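For anyone needing to do that by hand, a sketch of the rollback (the backup directory name is taken from the log above; the numeric suffix differs on every run):

systemctl stop kubelet
# restore the pre-upgrade etcd manifest that kubeadm backed up
cp /etc/kubernetes/tmp/kubeadm-backup-manifests180476754/etcd.yaml /etc/kubernetes/manifests/etcd.yaml
# if the apiserver manifest was also replaced, restore it the same way
systemctl start kubelet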

I ran into the same timeout issue, and kubeadm rolled the kube-apiserver manifest back to 1.9.6 but left the etcd manifest as it was (read: using TLS). Obviously the apiserver then failed miserably, effectively taking down my master node. I think this is a good candidate for a separate issue report.

@dvdmuckle @codepainters, unfortunately whether the rollback succeeds depends on which component hits the race condition (etcd or the api server). I found a fix for the race condition, but it completely breaks kubeadm upgrade. I'm working with @stealthybox to find a proper path toward fixing the upgrade correctly.

@codepainters I think it's the same issue.

There are a couple of underlying issues causing this problem:

  • The upgrade generates a hash of the mirror pod for each component from the result of querying the mirror pod through the API, and then tests whether that hash changes to determine whether the pod was updated by the static manifest change. The hashed value includes fields that can change for reasons other than a static manifest change (such as pod status updates). If the pod status changes between hash comparisons, the upgrade moves on to the next component prematurely.
  • The upgrade updates the etcd static pod manifest (which includes adding TLS security to etcd) and then tries to use the apiserver to verify that the pod was updated. However, the apiserver manifest has not yet been updated to use TLS to communicate with etcd at that point.

As a result, the upgrade currently succeeds only when a pod status update on the etcd pod happens to change the hash before the kubelet picks up the new static manifest for etcd. In addition, the api server needs to be available when the upgrade tool queries the api during the first part of the apiserver upgrade, before the apiserver manifest is updated.

@detiber and I got on a call to discuss the changes needed to the upgrade process.
We plan to implement 3 fixes for this bug in the 1.10.x patch releases:

  • Remove etcd TLS from upgrade.
    The current upgrade loop applies changes serially, component by component.
    Component upgrades have no knowledge of dependent component configurations.
    Verifying an upgrade requires the APIServer to be available to check pod status.
    Etcd TLS requires a coupled etcd+apiserver configuration change, which violates this contract.
    This is the minimal viable change to fix this issue, but it leaves the upgraded cluster with an unsecured etcd.

  • Fix the mirror pod hash race condition on pod status change.
    https://github.com/kubernetes/kubernetes/blob/master/cmd/kubeadm/app/phases/upgrade/staticpods.go#L189.
    Upgrades will now be correct, assuming compatibility between the etcd and apiserver flags.

  • Upgrade TLS in a dedicated, separate phase.
    Etcd and the APIServer need to be upgraded together.
    kubeadm alpha phase ensure-etcd-tls?
    This phase should be runnable independently of a cluster upgrade.
    During a cluster upgrade, this phase should run before updating all of the components.


1.11์˜ ๊ฒฝ์šฐ ๋‹ค์Œ ์„ ์›ํ•ฉ๋‹ˆ๋‹ค.

  • ์—…๊ทธ๋ ˆ์ด๋“œ ๋œ ์ •์  ํŒŸ (Pod)์˜ ๋Ÿฐํƒ€์ž„ ๊ฒ€์‚ฌ์— kubelet API๋ฅผ ์‚ฌ์šฉํ•˜์‹ญ์‹œ์˜ค.
    ์šฐ๋ฆฌ๊ฐ€ ํ˜„์žฌํ•˜๊ณ ์žˆ๋Š” ๊ฒƒ์ฒ˜๋Ÿผ ๋กœ์ปฌ ํ”„๋กœ์„ธ์Šค๋ฅผ ๋ชจ๋‹ˆํ„ฐ๋งํ•˜๊ธฐ ์œ„ํ•ด apiserver ๋ฐ etcd์— ์˜์กดํ•˜๋Š” ๊ฒƒ์€ ๋ฐ”๋žŒ์งํ•˜์ง€ ์•Š์Šต๋‹ˆ๋‹ค.
    ํฌ๋“œ์— ๋Œ€ํ•œ ๋กœ์ปฌ ๋ฐ์ดํ„ฐ ์†Œ์Šค๋Š” ๊ณ ์ฐจ ๋ถ„์‚ฐ kubernetes ๊ตฌ์„ฑ ์š”์†Œ์— ์˜์กดํ•˜๋Š” ๊ฒƒ๋ณด๋‹ค ์šฐ์ˆ˜ํ•ฉ๋‹ˆ๋‹ค.
    ์ด๋Š” ์—…๊ทธ๋ ˆ์ด๋“œ ๋ฃจํ”„์—์„œ ํ˜„์žฌ ํฌ๋“œ ๋Ÿฐํƒ€์ž„ ๊ฒ€์‚ฌ๋ฅผ ๋Œ€์ฒดํ•ฉ๋‹ˆ๋‹ค.
    ์ด๋ฅผ ํ†ตํ•ด ensure-etcd-tls ๋‹จ๊ณ„์— ๊ฒ€์‚ฌ๋ฅผ ์ถ”๊ฐ€ ํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค.

๋Œ€์•ˆ : CRI๋ฅผ ์‚ฌ์šฉํ•˜์—ฌ ํฌ๋“œ ์ •๋ณด๋ฅผ ๊ฐ€์ ธ์˜ต๋‹ˆ๋‹ค ( crictl ์‚ฌ์šฉํ•˜์—ฌ ๋ฐ๋ชจ ์‹คํ–‰ ๊ฐ€๋Šฅ).
์ฃผ์˜ ์‚ฌํ•ญ : dockershim ๋ฐ ๊ธฐํƒ€ ์ปจํ…Œ์ด๋„ˆ ๋Ÿฐํƒ€์ž„์˜ CRI๋Š” ํ˜„์žฌ CRI ์ฃผ์š” ๋ณ€๊ฒฝ ์‚ฌํ•ญ์— ๋Œ€ํ•œ ํ•˜์œ„ ํ˜ธํ™˜์„ฑ์„ ์ง€์›ํ•˜์ง€ ์•Š์Šต๋‹ˆ๋‹ค.

TODO:

  • [ ] Open and link issues for these 4 changes.

PR to address the static pod update race condition: https://github.com/kubernetes/kubernetes/pull/61942
Cherry-pick PR for the release-1.10 branch: https://github.com/kubernetes/kubernetes/pull/61954

@detiber would you mind explaining the race condition we're talking about? I'm not familiar with kubeadm internals, but it sounds interesting.

@codepainters see https://github.com/kubernetes/kubeadm/issues/740#issuecomment-377263347

FYI - hit the same issue upgrading from 1.9.3.
I tried the workaround of retrying several times; in the end I hit the race condition with the API server and could not roll back the upgrade.

@stealthybox thx, I didn't get it on first read.

๋™์ผํ•œ ๋ฌธ์ œ๊ฐ€ ์žˆ์Šต๋‹ˆ๋‹ค .. [ERROR APIServerHealth] : API ์„œ๋ฒ„๊ฐ€ ๋น„์ •์ƒ์ž…๋‹ˆ๋‹ค. / healthz๊ฐ€ "ok"๋ฅผ ๋ฐ˜ํ™˜ํ•˜์ง€ ์•Š์•˜์Šต๋‹ˆ๋‹ค.
[์˜ค๋ฅ˜ MasterNodesReady] : ํด๋Ÿฌ์Šคํ„ฐ์˜ ๋งˆ์Šคํ„ฐ๋ฅผ ๋‚˜์—ด ํ•  ์ˆ˜ ์—†์Šต๋‹ˆ๋‹ค. ์—…๊ทธ๋ ˆ์ด๋“œํ•˜๋Š” ๋™์•ˆ https๋ฅผ ๊ฐ€์ ธ์˜ต๋‹ˆ๋‹ค ........ ์ด๊ฑธ ๋„์™€์ฃผ์„ธ์š”. 1.9.3์—์„œ 1.10.0์œผ๋กœ ์—…๊ทธ๋ ˆ์ด๋“œํ•˜๊ณ  ์žˆ์Šต๋‹ˆ๋‹ค. ์ฒ˜์Œ์—๋Š” "[upgrade / staticpods] kubelet์ด ๊ตฌ์„ฑ ์š”์†Œ๋ฅผ ๋‹ค์‹œ ์‹œ์ž‘ํ•˜๊ธฐ๋ฅผ ๊ธฐ๋‹ค๋ฆฌ๋Š” ์ค‘"์ด๋ผ๋Š” ํŠน์ • ์ง€์ ์— ๋„๋‹ฌ ํ•  ์ˆ˜์žˆ์—ˆ์Šต๋‹ˆ๋‹ค.

The workaround is to verify the certs and upgrade the etcd and apiserver pods manually, bypassing the checks.

Be sure to check your config and add any flags for your use case:

kubectl -n kube-system edit cm kubeadm-config  # change featureFlags
...
  featureGates: {}
...
kubeadm alpha phase certs all
kubeadm alpha phase etcd local
kubeadm alpha phase controlplane all
kubeadm alpha phase upload-config
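Once those phases have run, one way to sanity-check that etcd is now serving TLS is to reuse the certificates from the new liveness probe shown in the manifest diff earlier in this thread (using endpoint health here instead of the probe's get foo):

ETCDCTL_API=3 etcdctl --endpoints https://127.0.0.1:2379 \
  --cacert /etc/kubernetes/pki/etcd/ca.crt \
  --cert /etc/kubernetes/pki/etcd/healthcheck-client.crt \
  --key /etc/kubernetes/pki/etcd/healthcheck-client.key \
  endpoint health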

๊ฐ์‚ฌํ•ฉ๋‹ˆ๋‹ค @stealthybox
๋‚˜๋ฅผ ์œ„ํ•ด upgrade apply ํ”„๋กœ์„ธ์Šค๊ฐ€ [upgrade/apply] Upgrading your Static Pod-hosted control plane to version "v1.10.1"... ์—์„œ ์ค‘๋‹จ๋˜์—ˆ์ง€๋งŒ ํด๋Ÿฌ์Šคํ„ฐ๊ฐ€ ์„ฑ๊ณต์ ์œผ๋กœ ์—…๊ทธ๋ ˆ์ด๋“œ๋˜์—ˆ์Šต๋‹ˆ๋‹ค.

@stealthybox I'm not sure, but something seems broken after this procedure, because kubeadm upgrade plan hangs afterwards:

[preflight] Running pre-flight checks.
[upgrade] Making sure the cluster is healthy:
[upgrade/config] Making sure the configuration is correct:
[upgrade/config] Reading configuration from the cluster...
[upgrade/config] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
[upgrade] Fetching available versions to upgrade to
[upgrade/versions] Cluster version: v1.10.1
[upgrade/versions] kubeadm version: v1.10.1
[upgrade/versions] Latest stable version: v1.10.1

์—…๋ฐ์ดํŠธ๋ฅผ ์ ์šฉ ํ•  ๋•Œ [upgrade/apply] Upgrading your Static Pod-hosted control plane to version "v1.10.1"... ๋„ ๊ต์ˆ˜ํ˜•

@kvaps @stealthybox this is most likely the etcd issue (kubeadm speaking plain HTTP/2 to a TLS-enabled etcd). See this other issue: https://github.com/kubernetes/kubeadm/issues/755

Honestly, I can't understand why the same TCP port is used for both TLS and non-TLS etcd listeners; it only causes issues like this one. Getting a plain _connection refused_ would give an immediate hint, whereas here I had to resort to tcpdump to understand what was going on.
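For anyone else debugging this, an illustration of the kind of check I mean: watch port 2379 and see whether the bytes are readable HTTP/2 or a TLS handshake (plain tcpdump, nothing kubeadm-specific):

# plaintext etcd traffic shows readable frames; TLS shows only handshake bytes
tcpdump -i lo -A -c 20 'tcp port 2379'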

Oh!
That's because it only works with my local TLS patch for the etcd status check.

Do this to finish your upgrade:

kubeadm alpha phase controlplane all
kubeadm alpha phase upload-config

์œ„์˜ ํ•ด๊ฒฐ ๋ฐฉ๋ฒ•์„ ์ˆ˜์ •ํ–ˆ์Šต๋‹ˆ๋‹ค.

@stealthybox the second kubeadm command doesn't work for me:

# kubeadm alpha phase upload-config
The --config flag is mandatory

@renich just give it the file path of your config.

If you're not using custom settings, you can pass it an empty file.
Here's a simple way to do that in bash:

1.10_kubernetes/server/bin/kubeadm alpha phase upload-config --config <(echo)

This issue should now be fixed by the merge of https://github.com/kubernetes/kubernetes/pull/62655 and will be part of the v1.10.2 release.

I can confirm that the 1.10.0 -> 1.10.2 upgrade with kubeadm 1.10.2 went smoothly, with no timeouts.

Still getting a timeout on 1.10.0 -> 1.10.2, but a different one:
[upgrade/staticpods] Waiting for the kubelet to restart the component Static pod: kube-apiserver-master hash: a273591d3207fcd9e6fd0c308cc68d64 [upgrade/apply] FATAL: couldn't upgrade control plane. kubeadm has tried to recover everything into the earlier state. Errors faced: [timed out waiting for the condition]

Not sure what to do...

@denis111 check the API server logs while doing the upgrade, using docker ps. I suspect you're hitting the same issue I'm facing.
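What that looks like in practice, roughly (a sketch; the k8s_kube-apiserver name filter matches the container names the kubelet creates):

# find the running kube-apiserver container and follow its logs during the upgrade
APISERVER=$(docker ps --filter name=k8s_kube-apiserver -q | head -n1)
docker logs --tail 100 -f "$APISERVER"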

@dvdmuckle Well, there are no errors in those logs, only entries starting with I and a few starting with W.
And I don't think the kube-apiserver hash changes during the upgrade.

I have an ARM64 cluster on 1.9.3 that updated successfully to 1.9.7, but I hit the same timeout issue upgrading from 1.9.7 to 1.10.2.

I even tried editing and recompiling kubeadm to increase the timeouts (like in this last commit: https://github.com/anguslees/kubernetes/commits/kubeadm-gusfork), with the same result.
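For reference, rebuilding just the kubeadm binary from a Kubernetes source checkout looks roughly like this (assuming a standard build environment; the output path can vary by version):

# from the root of the kubernetes source tree
make WHAT=cmd/kubeadm
# the binary ends up under _output/ (e.g. _output/bin/kubeadm or _output/local/bin/linux/<arch>/kubeadm)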

$ sudo kubeadm upgrade apply  v1.10.2 --force
[preflight] Running pre-flight checks.
[upgrade] Making sure the cluster is healthy:
[upgrade/config] Making sure the configuration is correct:
[upgrade/config] Reading configuration from the cluster...
[upgrade/config] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
[upgrade/version] You have chosen to change the cluster version to "v1.10.2"
[upgrade/versions] Cluster version: v1.9.7
[upgrade/versions] kubeadm version: v1.10.2-dirty
[upgrade/version] Found 1 potential version compatibility errors but skipping since the --force flag is set:

   - Specified version to upgrade to "v1.10.2" is higher than the kubeadm version "v1.10.2-dirty". Upgrade kubeadm first using the tool you used to install kubeadm
[upgrade/prepull] Will prepull images for components [kube-apiserver kube-controller-manager kube-scheduler]
[upgrade/apply] Upgrading your Static Pod-hosted control plane to version "v1.10.2"...
Static pod: kube-apiserver-kubemaster1 hash: ed7578d5bf9314188dca798386bcfb0e
Static pod: kube-controller-manager-kubemaster1 hash: e0c3f578f1c547dcf9996e1d3390c10c
Static pod: kube-scheduler-kubemaster1 hash: 52e767858f52ac4aba448b1a113884ee
[upgrade/etcd] Upgrading to TLS for etcd
Static pod: etcd-kubemaster1 hash: 413224efa82e36533ce93e30bd18e3a8
[etcd] Wrote Static Pod manifest for a local etcd instance to "/etc/kubernetes/tmp/kubeadm-upgraded-manifests346927148/etcd.yaml"
[certificates] Using the existing etcd/ca certificate and key.
[certificates] Using the existing etcd/server certificate and key.
[certificates] Using the existing etcd/peer certificate and key.
[certificates] Using the existing etcd/healthcheck-client certificate and key.
[upgrade/staticpods] Moved new manifest to "/etc/kubernetes/manifests/etcd.yaml" and backed up old manifest to "/etc/kubernetes/tmp/kubeadm-backup-manifests190581659/etcd.yaml"
[upgrade/staticpods] Not waiting for pod-hash change for component "etcd"
[upgrade/etcd] Waiting for etcd to become available
[util/etcd] Waiting 30s for initial delay
[util/etcd] Attempting to get etcd status 1/10
[util/etcd] Attempt failed with error: dial tcp 127.0.0.1:2379: getsockopt: connection refused
[util/etcd] Waiting 15s until next retry
[util/etcd] Attempting to get etcd status 2/10
[util/etcd] Attempt failed with error: dial tcp 127.0.0.1:2379: getsockopt: connection refused
[util/etcd] Waiting 15s until next retry
[util/etcd] Attempting to get etcd status 3/10
[util/etcd] Attempt failed with error: dial tcp 127.0.0.1:2379: getsockopt: connection refused
[util/etcd] Waiting 15s until next retry
[util/etcd] Attempting to get etcd status 4/10
[upgrade/staticpods] Writing new Static Pod manifests to "/etc/kubernetes/tmp/kubeadm-upgraded-manifests346927148"
[controlplane] Wrote Static Pod manifest for component kube-apiserver to "/etc/kubernetes/tmp/kubeadm-upgraded-manifests346927148/kube-apiserver.yaml"
[controlplane] Wrote Static Pod manifest for component kube-controller-manager to "/etc/kubernetes/tmp/kubeadm-upgraded-manifests346927148/kube-controller-manager.yaml"
[controlplane] Wrote Static Pod manifest for component kube-scheduler to "/etc/kubernetes/tmp/kubeadm-upgraded-manifests346927148/kube-scheduler.yaml"
[upgrade/staticpods] The etcd manifest will be restored if component "kube-apiserver" fails to upgrade
[certificates] Using the existing etcd/ca certificate and key.
[certificates] Using the existing apiserver-etcd-client certificate and key.
[upgrade/staticpods] Moved new manifest to "/etc/kubernetes/manifests/kube-apiserver.yaml" and backed up old manifest to "/etc/kubernetes/tmp/kubeadm-backup-manifests190581659/kube-apiserver.yaml"
[upgrade/staticpods] Waiting for the kubelet to restart the component
[upgrade/apply] FATAL: couldn't upgrade control plane. kubeadm has tried to recover everything into the earlier state. Errors faced: [timed out waiting for the condition]

Upgrading v1.10.2 -> v1.10.2 (which may be nonsense, just testing...)

Ubuntu 16.04.

And it fails with an error:

kubeadm upgrade apply v1.10.2

[upgrade/apply] FATAL: couldn't upgrade control plane. kubeadm has tried to recover everything into the earlier state. Errors faced: [timed out waiting for the condition]

I wonder if this issue is still being tracked anywhere... I can't find it.

I can also confirm that the upgrade still fails with a timed out waiting for the condition error.

EDIT: Moved the discussion to a new ticket, https://github.com/kubernetes/kubeadm/issues/850; please discuss there.

In case anyone else hits this issue on 1.9.x:

If you are on AWS with custom hostnames, you need to edit the kubeadm-config configmap and set nodeName to the AWS internal name (ip-xx-xx-xx-xx.$REGION.compute.internal):

kubectl -n kube-system edit cm kubeadm-config -oyaml

์ด๊ฒƒ์€ etc ํด๋ผ์ด์–ธํŠธ๋ฅผ http๋กœ ์„ค์ •ํ•˜๋Š” ๊ฒƒ ์™ธ์—๋„. ๋‚˜๋Š” ๊ทธ๋“ค์ด ๊ทธ๊ฒƒ์„ ๊ณ ์ณค๋Š”์ง€ ํ™•์ธํ•˜๊ธฐ ์œ„ํ•ด ์•„์ง ํŽธ์ง€ ๋ฒ„์ „์„ ์‚ฌ์šฉํ•˜์ง€ ์•Š์•˜์Šต๋‹ˆ๋‹ค.

์ด๋Š” kubeadm์ด api์—์„œ์ด ๊ฒฝ๋กœ๋ฅผ ์ฝ์œผ๋ ค๊ณ ํ•˜๊ธฐ ๋•Œ๋ฌธ์ž…๋‹ˆ๋‹ค : / api / v1 / namespaces / kube-system / pods / kube-apiserver- $ NodeName
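A quick way to check whether you are hitting this is to verify that the pod name kubeadm will query actually exists (a sketch for a single-master cluster; substitute your master's node name otherwise):

NODE_NAME=$(kubectl get nodes -o jsonpath='{.items[0].metadata.name}')
kubectl -n kube-system get pod "kube-apiserver-${NODE_NAME}"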

I successfully updated a 1.9.7 deployment to 1.10.6 a few weeks ago, thanks to the increased timeouts in 1.10.6.

I plan to upgrade to 1.11.2 as soon as the .deb packages are ready, since the same change landed in that version.

My cluster runs on-premises, on ARM64 boards.

์ด ํŽ˜์ด์ง€๊ฐ€ ๋„์›€์ด ๋˜์—ˆ๋‚˜์š”?
0 / 5 - 0 ๋“ฑ๊ธ‰