Bug report
kubeadm version: 1.11
Environment:
kubectl version: 1.11
uname -a: 3.10.0-693.17.1.el7.x86_64
After kubeadm init, the coredns pods are stuck in Error state:
NAME READY STATUS RESTARTS AGE
coredns-78fcdf6894-ljdjp 0/1 Error 6 9m
coredns-78fcdf6894-p6flm 0/1 Error 6 9m
etcd-master 1/1 Running 0 8m
heapster-5bbdfbff9f-h5h2n 1/1 Running 0 9m
kube-apiserver-master 1/1 Running 0 8m
kube-controller-manager-master 1/1 Running 0 8m
kube-proxy-5642r 1/1 Running 0 9m
kube-scheduler-master 1/1 Running 0 8m
kubernetes-dashboard-6948bdb78-bwkvx 1/1 Running 0 9m
weave-net-r5jkg 2/2 Running 0 9m
The logs of both pods show the following:
standard_init_linux.go:178: exec user process caused "operation not permitted"
@kubernetes/sig-network-bugs
@carlosmkb, what is your docker version?
Hard to believe, we test fairly extensively on CentOS 7 on our side.
Do you have system and pod logs?
@dims, that could make sense. I will try it.
@neolit123 ๋ฐ @timothysc
Docker version: docker-1.13.1-63.git94f4240.el7.centos.x86_64
coredns pod log: standard_init_linux.go:178: exec user process caused "operation not permitted"
System log (journalctl -xeu kubelet):
Jul 17 23:45:17 server.raid.local kubelet[20442]: E0717 23:45:17.679867 20442 pod_workers.go:186] Error syncing pod dd030886-89f4-11e8-9786-0a92797fa29e ("cas-7d6d97c7bd-mzw5j_raidcloud(dd030886-89f4-11e8-9786-0a92797fa29e)"), skipping: failed to "StartContainer" for "cas" with ImagePullBackOff: "Back-off pulling image \"registry.raidcloud.io/raidcloud/cas:180328.pvt.01\""
Jul 17 23:45:18 server.raid.local kubelet[20442]: I0717 23:45:18.679059 20442 kuberuntime_manager.go:513] Container {Name:json2ldap Image:registry.raidcloud.io/raidcloud/json2ldap:180328.pvt.01 Command:[] Args:[] WorkingDir: Ports:[] EnvFrom:[] Env:[] Resources:{Limits:map[] Requests:map[]} VolumeMounts:[{Name:default-token-f2cmq ReadOnly:true MountPath:/var/run/secrets/kubernetes.io/serviceaccount SubPath: MountPropagation:<nil>}] VolumeDevices:[] LivenessProbe:&Probe{Handler:Handler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/,Port:8080,Host:,Scheme:HTTP,HTTPHeaders:[],},TCPSocket:nil,},InitialDelaySeconds:30,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,} ReadinessProbe:nil Lifecycle:nil TerminationMessagePath:/dev/termination-log TerminationMessagePolicy:File ImagePullPolicy:IfNotPresent SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,} Stdin:false StdinOnce:false TTY:false} is dead, but RestartPolicy says that we should restart it.
Jul 17 23:45:18 server.raid.local kubelet[20442]: E0717 23:45:18.680001 20442 pod_workers.go:186] Error syncing pod dcc39ce2-89f4-11e8-9786-0a92797fa29e ("json2ldap-666fc85686-tmxrr_raidcloud(dcc39ce2-89f4-11e8-9786-0a92797fa29e)"), skipping: failed to "StartContainer" for "json2ldap" with ImagePullBackOff: "Back-off pulling image \"registry.raidcloud.io/raidcloud/json2ldap:180328.pvt.01\""
Jul 17 23:45:21 server.raid.local kubelet[20442]: I0717 23:45:21.678232 20442 kuberuntime_manager.go:513] Container {Name:coredns Image:k8s.gcr.io/coredns:1.1.3 Command:[] Args:[-conf /etc/coredns/Corefile] WorkingDir: Ports:[{Name:dns HostPort:0 ContainerPort:53 Protocol:UDP HostIP:} {Name:dns-tcp HostPort:0 ContainerPort:53 Protocol:TCP HostIP:} {Name:metrics HostPort:0 ContainerPort:9153 Protocol:TCP HostIP:}] EnvFrom:[] Env:[] Resources:{Limits:map[memory:{i:{value:178257920 scale:0} d:{Dec:<nil>} s:170Mi Format:BinarySI}] Requests:map[cpu:{i:{value:100 scale:-3} d:{Dec:<nil>} s:100m Format:DecimalSI} memory:{i:{value:73400320 scale:0} d:{Dec:<nil>} s:70Mi Format:BinarySI}]} VolumeMounts:[{Name:config-volume ReadOnly:true MountPath:/etc/coredns SubPath: MountPropagation:<nil>} {Name:coredns-token-6nhgg ReadOnly:true MountPath:/var/run/secrets/kubernetes.io/serviceaccount SubPath: MountPropagation:<nil>}] VolumeDevices:[] LivenessProbe:&Probe{Handler:Handler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/health,Port:8080,Host:,Scheme:HTTP,HTTPHeaders:[],},TCPSocket:nil,},InitialDelaySeconds:60,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:5,} ReadinessProbe:nil Lifecycle:nil TerminationMessagePath:/dev/termination-log TerminationMessagePolicy:File ImagePullPolicy:IfNotPresent SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[NET_BIND_SERVICE],Drop:[all],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:*false,RunAsGroup:nil,} Stdin:false StdinOnce:false TTY:false} is dead, but RestartPolicy says that we should restart it.
Jul 17 23:45:21 server.raid.local kubelet[20442]: I0717 23:45:21.678311 20442 kuberuntime_manager.go:757] checking backoff for container "coredns" in pod "coredns-78fcdf6894-znfvw_kube-system(9b44aa92-89f7-11e8-9786-0a92797fa29e)"
Jul 17 23:45:21 server.raid.local kubelet[20442]: I0717 23:45:21.678404 20442 kuberuntime_manager.go:767] Back-off 5m0s restarting failed container=coredns pod=coredns-78fcdf6894-znfvw_kube-system(9b44aa92-89f7-11e8-9786-0a92797fa29e)
Jul 17 23:45:21 server.raid.local kubelet[20442]: E0717 23:45:21.678425 20442 pod_workers.go:186] Error syncing pod 9b44aa92-89f7-11e8-9786-0a92797fa29e ("coredns-78fcdf6894-znfvw_kube-system(9b44aa92-89f7-11e8-9786-0a92797fa29e)"), skipping: failed to "StartContainer" for "coredns" with CrashLoopBackOff: "Back-off 5m0s restarting failed container=coredns pod=coredns-78fcdf6894-znfvw_kube-system(9b44aa92-89f7-11e8-9786-0a92797fa29e)"
Jul 17 23:45:22 server.raid.local kubelet[20442]: I0717 23:45:22.679145 20442 kuberuntime_manager.go:513] Container {Name:login Image:registry.raidcloud.io/raidcloud/admin:180329.pvt.05 Command:[] Args:[] WorkingDir: Ports:[] EnvFrom:[] Env:[] Resources:{Limits:map[] Requests:map[]} VolumeMounts:[{Name:login-config ReadOnly:true MountPath:/usr/share/nginx/conf/ SubPath: MountPropagation:<nil>} {Name:default-token-f2cmq ReadOnly:true MountPath:/var/run/secrets/kubernetes.io/serviceaccount SubPath: MountPropagation:<nil>}] VolumeDevices:[] LivenessProbe:&Probe{Handler:Handler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/health,Port:8080,Host:,Scheme:HTTP,HTTPHeaders:[],},TCPSocket:nil,},InitialDelaySeconds:5,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,} ReadinessProbe:nil Lifecycle:nil TerminationMessagePath:/dev/termination-log TerminationMessagePolicy:File ImagePullPolicy:IfNotPresent SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,} Stdin:false StdinOnce:false TTY:false} is dead, but RestartPolicy says that we should restart it.
Jul 17 23:45:22 server.raid.local kubelet[20442]: E0717 23:45:22.679941 20442 pod_workers.go:186] Error syncing pod dc8392a9-89f4-11e8-9786-0a92797fa29e ("login-85ffb66bb8-5l9fq_raidcloud(dc8392a9-89f4-11e8-9786-0a92797fa29e)"), skipping: failed to "StartContainer" for "login" with ImagePullBackOff: "Back-off pulling image \"registry.raidcloud.io/raidcloud/admin:180329.pvt.05\""
Jul 17 23:45:23 server.raid.local kubelet[20442]: I0717 23:45:23.678172 20442 kuberuntime_manager.go:513] Container {Name:coredns Image:k8s.gcr.io/coredns:1.1.3 Command:[] Args:[-conf /etc/coredns/Corefile] WorkingDir: Ports:[{Name:dns HostPort:0 ContainerPort:53 Protocol:UDP HostIP:} {Name:dns-tcp HostPort:0 ContainerPort:53 Protocol:TCP HostIP:} {Name:metrics HostPort:0 ContainerPort:9153 Protocol:TCP HostIP:}] EnvFrom:[] Env:[] Resources:{Limits:map[memory:{i:{value:178257920 scale:0} d:{Dec:<nil>} s:170Mi Format:BinarySI}] Requests:map[cpu:{i:{value:100 scale:-3} d:{Dec:<nil>} s:100m Format:DecimalSI} memory:{i:{value:73400320 scale:0} d:{Dec:<nil>} s:70Mi Format:BinarySI}]} VolumeMounts:[{Name:config-volume ReadOnly:true MountPath:/etc/coredns SubPath: MountPropagation:<nil>} {Name:coredns-token-6nhgg ReadOnly:true MountPath:/var/run/secrets/kubernetes.io/serviceaccount SubPath: MountPropagation:<nil>}] VolumeDevices:[] LivenessProbe:&Probe{Handler:Handler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/health,Port:8080,Host:,Scheme:HTTP,HTTPHeaders:[],},TCPSocket:nil,},InitialDelaySeconds:60,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:5,} ReadinessProbe:nil Lifecycle:nil TerminationMessagePath:/dev/termination-log TerminationMessagePolicy:File ImagePullPolicy:IfNotPresent SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[NET_BIND_SERVICE],Drop:[all],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:*false,RunAsGroup:nil,} Stdin:false StdinOnce:false TTY:false} is dead, but RestartPolicy says that we should restart it.
Jul 17 23:45:23 server.raid.local kubelet[20442]: I0717 23:45:23.678412 20442 kuberuntime_manager.go:757] checking backoff for container "coredns" in pod "coredns-78fcdf6894-lcqt5_kube-system(9b45a068-89f7-11e8-9786-0a92797fa29e)"
Jul 17 23:45:23 server.raid.local kubelet[20442]: I0717 23:45:23.678532 20442 kuberuntime_manager.go:767] Back-off 5m0s restarting failed container=coredns pod=coredns-78fcdf6894-lcqt5_kube-system(9b45a068-89f7-11e8-9786-0a92797fa29e)
Jul 17 23:45:23 server.raid.local kubelet[20442]: E0717 23:45:23.678554 20442 pod_workers.go:186] Error syncing pod 9b45a068-89f7-11e8-9786-0a92797fa29e ("coredns-78fcdf6894-lcqt5_kube-system(9b45a068-89f7-11e8-9786-0a92797fa29e)"), skipping: failed to "StartContainer" for "coredns" with CrashLoopBackOff: "Back-off 5m0s restarting failed container=coredns pod=coredns-78fcdf6894-lcqt5_kube-system(9b45a068-89f7-11e8-9786-0a92797fa29e)"
I found a few instances of the same error reported in other scenarios in the past.
You could check whether removing "allowPrivilegeEscalation: false" from the CoreDNS deployment helps.
Same problem for me. Similar setup: CentOS 7.4.1708, Docker version 1.13.1, build 94f4240/1.13.1 (shipped with CentOS):
[root@faas-A01 ~]# kubectl get pods --all-namespaces
NAMESPACE NAME READY STATUS RESTARTS AGE
kube-system calico-node-2vssv 2/2 Running 0 9m
kube-system calico-node-4vr7t 2/2 Running 0 7m
kube-system calico-node-nlfnd 2/2 Running 0 17m
kube-system calico-node-rgw5w 2/2 Running 0 23m
kube-system coredns-78fcdf6894-p4wbl 0/1 CrashLoopBackOff 9 30m
kube-system coredns-78fcdf6894-r4pwf 0/1 CrashLoopBackOff 9 30m
kube-system etcd-faas-a01.sl.cloud9.ibm.com 1/1 Running 0 29m
kube-system kube-apiserver-faas-a01.sl.cloud9.ibm.com 1/1 Running 0 29m
kube-system kube-controller-manager-faas-a01.sl.cloud9.ibm.com 1/1 Running 0 29m
kube-system kube-proxy-55csj 1/1 Running 0 17m
kube-system kube-proxy-56r8c 1/1 Running 0 30m
kube-system kube-proxy-kncql 1/1 Running 0 9m
kube-system kube-proxy-mf2bp 1/1 Running 0 7m
kube-system kube-scheduler-faas-a01.sl.cloud9.ibm.com 1/1 Running 0 29m
[root@faas-A01 ~]# kubectl logs --namespace=all coredns-78fcdf6894-p4wbl
Error from server (NotFound): namespaces "all" not found
[root@faas-A01 ~]# kubectl logs --namespace=kube-system coredns-78fcdf6894-p4wbl
standard_init_linux.go:178: exec user process caused "operation not permitted"
Just in case it matters: selinux is in permissive mode on all nodes.
I am using Calico (unlike @carlosmkb).
[root@faas-A01 ~]# kubectl logs --namespace=kube-system coredns-78fcdf6894-p4wbl
standard_init_linux.go:178: exec user process caused "operation not permitted"
Ah, that is kubectl's error when trying to fetch the logs, not the content of the logs...
@chrisohaver, kubectl logs works with the other kube-system pods.
To confirm: have you checked whether removing "allowPrivilegeEscalation: false" from the CoreDNS deployment helps?
... Does kubectl describe of the coredns pod show anything interesting?
Same problem for me.
CentOS Linux release 7.5.1804 (Core)
Docker version 1.13.1, build dded712/1.13.1
Flannel as the CNI
[root@k8s ~]# kubectl get pods --all-namespaces
NAMESPACE NAME READY STATUS RESTARTS AGE
kube-system coredns-78fcdf6894-cfmm7 0/1 CrashLoopBackOff 12 15m
kube-system coredns-78fcdf6894-k65js 0/1 CrashLoopBackOff 11 15m
kube-system etcd-k8s.master 1/1 Running 0 14m
kube-system kube-apiserver-k8s.master 1/1 Running 0 13m
kube-system kube-controller-manager-k8s.master 1/1 Running 0 14m
kube-system kube-flannel-ds-fts6v 1/1 Running 0 14m
kube-system kube-proxy-4tdb5 1/1 Running 0 15m
kube-system kube-scheduler-k8s.master 1/1 Running 0 14m
[root@k8s ~]# kubectl logs coredns-78fcdf6894-cfmm7 -n kube-system
standard_init_linux.go:178: exec user process caused "operation not permitted"
[root@k8s ~]# kubectl describe pods coredns-78fcdf6894-cfmm7 -n kube-system
Name: coredns-78fcdf6894-cfmm7
Namespace: kube-system
Node: k8s.master/192.168.150.40
Start Time: Fri, 27 Jul 2018 00:32:09 +0800
Labels: k8s-app=kube-dns
pod-template-hash=3497892450
Annotations: <none>
Status: Running
IP: 10.244.0.12
Controlled By: ReplicaSet/coredns-78fcdf6894
Containers:
coredns:
Container ID: docker://3b7670fbc07084410984d7e3f8c0fa1b6d493a41d2a4e32f5885b7db9d602417
Image: k8s.gcr.io/coredns:1.1.3
Image ID: docker-pullable://k8s.gcr.io/coredns@sha256:db2bf53126ed1c761d5a41f24a1b82a461c85f736ff6e90542e9522be4757848
Ports: 53/UDP, 53/TCP, 9153/TCP
Host Ports: 0/UDP, 0/TCP, 0/TCP
Args:
-conf
/etc/coredns/Corefile
State: Waiting
Reason: CrashLoopBackOff
Last State: Terminated
Reason: Error
Exit Code: 1
Started: Fri, 27 Jul 2018 00:46:30 +0800
Finished: Fri, 27 Jul 2018 00:46:30 +0800
Ready: False
Restart Count: 12
Limits:
memory: 170Mi
Requests:
cpu: 100m
memory: 70Mi
Liveness: http-get http://:8080/health delay=60s timeout=5s period=10s #success=1 #failure=5
Environment: <none>
Mounts:
/etc/coredns from config-volume (ro)
/var/run/secrets/kubernetes.io/serviceaccount from coredns-token-vqslm (ro)
Conditions:
Type Status
Initialized True
Ready False
ContainersReady False
PodScheduled True
Volumes:
config-volume:
Type: ConfigMap (a volume populated by a ConfigMap)
Name: coredns
Optional: false
coredns-token-vqslm:
Type: Secret (a volume populated by a Secret)
SecretName: coredns-token-vqslm
Optional: false
QoS Class: Burstable
Node-Selectors: <none>
Tolerations: CriticalAddonsOnly
node-role.kubernetes.io/master:NoSchedule
node.kubernetes.io/not-ready:NoExecute for 300s
node.kubernetes.io/unreachable:NoExecute for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Warning FailedScheduling 16m (x6 over 16m) default-scheduler 0/1 nodes are available: 1 node(s) were not ready.
Normal Scheduled 16m default-scheduler Successfully assigned kube-system/coredns-78fcdf6894-cfmm7 to k8s.master
Warning BackOff 14m (x10 over 16m) kubelet, k8s.master Back-off restarting failed container
Normal Pulled 14m (x5 over 16m) kubelet, k8s.master Container image "k8s.gcr.io/coredns:1.1.3" already present on machine
Normal Created 14m (x5 over 16m) kubelet, k8s.master Created container
Normal Started 14m (x5 over 16m) kubelet, k8s.master Started container
Normal Pulled 11m (x4 over 12m) kubelet, k8s.master Container image "k8s.gcr.io/coredns:1.1.3" already present on machine
Normal Created 11m (x4 over 12m) kubelet, k8s.master Created container
Normal Started 11m (x4 over 12m) kubelet, k8s.master Started container
Warning BackOff 2m (x56 over 12m) kubelet, k8s.master Back-off restarting failed container
[root@k8s ~]# uname
Linux
[root@k8s ~]# uname -a
Linux k8s.master 3.10.0-693.el7.x86_64 #1 SMP Tue Aug 22 21:09:27 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux
[root@k8s ~]# cat /etc/redhat-release
CentOS Linux release 7.5.1804 (Core)
[root@k8s ~]# docker --version
Docker version 1.13.1, build dded712/1.13.1
I have the same problem when selinux is in permissive mode. If I disable it in /etc/selinux/config (SELINUX=disabled) and reboot the system, the pods start.
Same for me on 7.4, kernel 3.10.0-693.11.6.el7.x86_64
docker-1.13.1-68.gitdded712.el7.x86_64
FWIW, it also works for me with SELinux disabled (not permissive, but _disabled_).
Docker version 1.13.1, build dded712/1.13.1
CentOS 7
[root@centosk8s ~]# kubectl logs coredns-78fcdf6894-rhx9p -n kube-system
.:53
CoreDNS-1.1.3
linux/amd64, go1.10.1, b0fd575c
2018/07/27 16:37:31 [INFO] CoreDNS-1.1.3
2018/07/27 16:37:31 [INFO] linux/amd64, go1.10.1, b0fd575c
2018/07/27 16:37:31 [INFO] plugin/reload: Running configuration MD5 = 2a066f12ec80aeb2b92740dd74c17138
We are hitting this issue too. We provision our infrastructure through automation, and requiring a reboot to fully disable selinux is not acceptable for us. Is there another workaround while we wait for this to be fixed?
Check whether removing "allowPrivilegeEscalation: false" from the CoreDNS deployment helps.
Updating to a more recent version of docker (later than 1.13) may also help.
Same problem here.
Docker version 1.2.6
CentOS 7
Like @lareeth, we provision kubernetes with automation using kubeadm, and requiring a reboot to fully disable selinux is not acceptable for us either.
@chrisohaver, requiring a reboot to fully disable selinux is not acceptable for us, so your suggestion is useful. Thanks!
However, as far as I know, the coredns options cannot be set in the kubeadm configuration.
Is there another way?
Check whether removing "allowPrivilegeEscalation: false" from the CoreDNS deployment helps.
Updating to a more recent version of docker (e.g. the version k8s recommends) may also help.
I have verified that removing "allowPrivilegeEscalation: false" from the coredns deployment resolves the problem (with SELinux enabled in permissive mode).
I have also verified that upgrading to the docker version Kubernetes recommends (docker 17.03) resolves the problem, with "allowPrivilegeEscalation: false" left in the coredns deployment and SELinux enabled in permissive mode.
So it appears to be an incompatibility between older versions of docker and SELinux when the allowPrivilegeEscalation directive is used. The directive is apparently handled correctly in later versions of docker.
So there seem to be three different workarounds.
@chrisohaver I resolved the issue by upgrading to the more recent docker 17.03. Thanks!
Thanks for investigating, @chrisohaver :100:
Thanks, @chrisohaver!
This did the trick:
kubectl -n kube-system get deployment coredns -o yaml | \
sed 's/allowPrivilegeEscalation: false/allowPrivilegeEscalation: true/g' | \
kubectl apply -f -
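As a sanity check, the substitution that pipeline performs can be tried offline on a minimal fragment before piping anything into kubectl apply. The file name and fragment below are made up for illustration; only the sed step is exercised, not kubectl:

```shell
# Hypothetical fragment resembling the securityContext in the coredns deployment.
cat > /tmp/coredns-fragment.yaml <<'EOF'
securityContext:
  allowPrivilegeEscalation: false
  readOnlyRootFilesystem: true
EOF

# The same substitution the pipeline applies before `kubectl apply -f -`.
sed 's/allowPrivilegeEscalation: false/allowPrivilegeEscalation: true/g' \
  /tmp/coredns-fragment.yaml > /tmp/coredns-fragment-patched.yaml

cat /tmp/coredns-fragment-patched.yaml
```

Note the substitution only flips the one field; the rest of the fragment passes through unchanged.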
@chrisohaver
Do you think the kubeadm troubleshooting guide should document these steps in a new entry for SELinux nodes, along these lines?
coredns pods have CrashLoopBackOff or Error state
If you have nodes that are running SELinux with an older version of Docker, you might experience a scenario where the coredns pods are not starting. To solve that, you can try one of the following options:
Modify the coredns deployment to set allowPrivilegeEscalation to true:
kubectl -n kube-system get deployment coredns -o yaml | \
  sed 's/allowPrivilegeEscalation: false/allowPrivilegeEscalation: true/g' | \
kubectl apply -f -
What do you think? Please suggest text fixes if you see room for improvement.
Fine. We should mention that disabling SELinux, or changing the allowPrivilegeEscalation setting, has negative security implications.
The safest solution is to upgrade Docker to the version Kubernetes recommends (17.03).
@chrisohaver
Understood. I will amend the copy and submit a PR for this.
There is also an answer for this on stackoverflow:
https://stackoverflow.com/questions/53075796/coredns-pods-have-crashloopbackoff-or-error-state
This error
[FATAL] plugin/loop: Seen "HINFO IN 6900627972087569316.7905576541070882081." more than twice, loop detected
occurs when CoreDNS detects a loop in the resolve configuration, and it is the intended behavior. You are hitting this issue:
https://github.com/kubernetes/kubeadm/issues/1162
https://github.com/coredns/coredns/issues/2087
Hacky solution: disable the CoreDNS loop detection
Edit the CoreDNS configmap:
kubectl -n kube-system edit configmap coredns
Remove or comment out the line with loop, save and exit.
Then remove the CoreDNS pods so new ones can be created with the new config:
kubectl -n kube-system delete pod -l k8s-app=kube-dns
All should be fine after that.
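For automated provisioning, the comment-out step above can be scripted instead of done interactively. A minimal sketch on a local file follows; the Corefile content is an assumption modeled on kubeadm's default, and the path is hypothetical (in a real cluster the result would still go back through kubectl, e.g. the interactive edit above):

```shell
# Local copy of a Corefile (assumed content, modeled on kubeadm's default).
cat > /tmp/Corefile <<'EOF'
.:53 {
    errors
    health
    kubernetes cluster.local in-addr.arpa ip6.arpa
    loop
    forward . /etc/resolv.conf
    cache 30
}
EOF

# Comment out the `loop` directive, as the manual edit above does,
# preserving the original indentation.
sed -i 's/^\( *\)loop$/\1# loop/' /tmp/Corefile

cat /tmp/Corefile
```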
Preferred solution: remove the loop in the DNS configuration
First, check whether you are using systemd-resolved. If you are running Ubuntu 18.04, you probably are.
systemctl list-unit-files | grep enabled | grep systemd-resolved
If so, check which resolv.conf file your cluster is using as reference:
ps auxww | grep kubelet
You might see a line like:
/usr/bin/kubelet ... --resolv-conf=/run/systemd/resolve/resolv.conf
The important part is --resolv-conf. It tells us whether systemd's resolv.conf is being used or not.
If it is systemd's resolv.conf, do the following:
Check the content of /run/systemd/resolve/resolv.conf to see whether there is a record like:
nameserver 127.0.0.1
If there is 127.0.0.1, it is what causes the loop.
To get rid of it, do not edit that file; instead, check the other places that generate it, so that it is generated properly.
Check all files under /etc/systemd/network, and if you find a record like
DNS=127.0.0.1
delete that record. Also check /etc/systemd/resolved.conf and do the same there if needed. Make sure at least one or two DNS servers are configured, such as:
DNS=1.1.1.1 1.0.0.1
After doing all of that, restart the systemd services to put your changes into effect:
systemctl restart systemd-networkd systemd-resolved
Then verify that nameserver 127.0.0.1 is no longer in the resolv.conf file:
cat /run/systemd/resolve/resolv.conf
Finally, trigger re-creation of the DNS pods:
kubectl -n kube-system delete pod -l k8s-app=kube-dns
Summary: the solution involves getting rid of what looks like a DNS lookup loop from the host DNS configuration. The steps vary between resolv.conf managers/implementations.
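The loopback check described in those steps can also be automated. A minimal sketch, using a throwaway copy instead of the real /run/systemd/resolve/resolv.conf (the path and messages below are made up for illustration):

```shell
# Throwaway copy of a systemd-resolved resolv.conf that would trigger the loop.
cat > /tmp/resolv.conf <<'EOF'
nameserver 127.0.0.1
EOF

# Flag any loopback nameserver entry, per the steps above.
if grep -q '^nameserver 127\.0\.0\.1' /tmp/resolv.conf; then
  echo 'loop risk: loopback nameserver found'
else
  echo 'resolv.conf looks fine'
fi
```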
Thanks. This is also covered in the CoreDNS loop plugin readme...
I have the same problem, and one more issue.
1. It seems the dns cannot be reached. The errors are
[ERROR] plugin/errors: 2 2115717704248378980.1120568170924441806. HINFO: unreachable backend: read udp 10.224.0.3:57088->8.8.8.8:53: i/o timeout
[ERROR] plugin/errors: 2 2115717704248378980.1120568170924441806. HINFO: unreachable backend: read udp 10.224.0.3:38819->172.16.254.1:53: i/o timeout
........
My /etc/resolv.conf has
nameserver 172.16.254.1 # this is my DNS
nameserver 8.8.8.8 # another dns on the net
Then I run
kubectl -n kube-system get deployment coredns -o yaml | \
sed 's/allowPrivilegeEscalation: false/allowPrivilegeEscalation: true/g' | \
kubectl apply -f -
Then after the pods are rebuilt there is only one error:
[ERROR] plugin/errors: 2 10594135170717325.8545646296733374240. HINFO: unreachable backend: no upstream host
I don't know whether that is normal. Maybe.
2. coredns cannot reach my API service. The error is
Failed to list *v1.Endpoints: ... 10.96.0.1:6443 ... getsockopt: connection refused
coredns keeps restarting and finally goes to CrashLoopBackOff.
So I have to make coredns run on the master node:
kubectl edit deployment/coredns --namespace=kube-system
and under spec.template.spec add a node selector:
nodeSelector:
  node-role.kubernetes.io/master: ""
I don't know whether that is normal.
Finally, my env:
Linux 4.20.10-1.el7.elrepo.x86_64 /// CentOS 7
Docker version: 18.09.3
[root@k8smaster00 ~]# docker image ls -a
REPOSITORY TAG IMAGE ID CREATED SIZE
k8s.gcr.io/kube-controller-manager v1.13.3 0482f6400933 6 weeks ago 146MB
k8s.gcr.io/kube-proxy v1.13.3 98db19758ad4 6 weeks ago 80.3MB
k8s.gcr.io/kube-apiserver v1.13.3 fe242e556a99 6 weeks ago 181MB
k8s.gcr.io/kube-scheduler v1.13.3 3a6f709e97a0 6 weeks ago 79.6MB
quay.io/coreos/flannel v0.11.0-amd64 ff281650a721 7 weeks ago 52.6MB
k8s.gcr.io/coredns 1.2.6 f59dcacceff4 4 months ago 40MB
k8s.gcr.io/etcd 3.2.24 3cab8e1b9802 6 months ago 220MB
k8s.gcr.io/pause 3.1 da86e6ba6ca1 15 months ago 742kB
Kubernetes is 1.13.3.
I think this is a bug. Looking forward to an official update or solution.
I have the same problem ...
@mengxifl, those errors are substantially different from the errors reported and discussed in this issue.
[ERROR] plugin/errors: 2 2115717704248378980.1120568170924441806. HINFO: unreachable backend: read udp 10.224.0.3:57088->8.8.8.8:53: i/o timeout
[ERROR] plugin/errors: 2 2115717704248378980.1120568170924441806. HINFO: unreachable backend: read udp 10.224.0.3:38819->172.16.254.1:53: i/o timeout
These errors mean that the CoreDNS pods (and probably all other pods) cannot reach your nameservers, which suggests a networking problem from the cluster to the outside world. Perhaps a flannel misconfiguration, or a firewall.
coredns cannot reach my API service...
So I have to make coredns run on the master node.
That is not normal either. If I understand you correctly, you are saying that CoreDNS can connect to the API from the master node but not from other nodes. That points to a networking problem between nodes within the cluster, probably a flannel configuration or firewall issue.
Thank you for your reply.
Maybe I should post my yaml files.
I use
kubeadm init --config=config.yaml
My config.yaml content is:
apiVersion: kubeadm.k8s.io/v1alpha3
kind: InitConfiguration
apiEndpoint:
advertiseAddress: "172.16.254.74"
bindPort: 6443
---
apiVersion: kubeadm.k8s.io/v1alpha3
kind: ClusterConfiguration
kubernetesVersion: "v1.13.3"
etcd:
external:
endpoints:
- "https://172.16.254.86:2379"
- "https://172.16.254.87:2379"
- "https://172.16.254.88:2379"
caFile: /etc/kubernetes/pki/etcd/ca.pem
certFile: /etc/kubernetes/pki/etcd/client.pem
keyFile: /etc/kubernetes/pki/etcd/client-key.pem
networking:
podSubnet: "10.224.0.0/16"
serviceSubnet: "10.96.0.0/12"
apiServerCertSANs:
- k8smaster00
- k8smaster01
- k8snode00
- k8snode01
- 172.16.254.74
- 172.16.254.79
- 172.16.254.80
- 172.16.254.81
- 172.16.254.85 #Vip
- 127.0.0.1
clusterName: "cluster"
controlPlaneEndpoint: "172.16.254.85:6443"
apiServerExtraArgs:
service-node-port-range: 20-65535
My flannel yaml is the default:
https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
systemctl status firewalld
on all nodes says:
Unit firewalld.service could not be found.
cat /etc/sysconfig/iptables
on all nodes says:
*filter
:INPUT ACCEPT [0:0]
:FORWARD ACCEPT [0:0]
:OUTPUT ACCEPT [0:0]
-A INPUT -p tcp -m tcp --dport 1:65535 -j ACCEPT
-A INPUT -m state --state RELATED,ESTABLISHED -j ACCEPT
-A INPUT -p icmp -j ACCEPT
-A INPUT -i lo -j ACCEPT
-A INPUT -p tcp -m state --state NEW -m tcp --dport 22 -j ACCEPT
-A OUTPUT -p tcp -m tcp --sport 1:65535 -j ACCEPT
-A FORWARD -p tcp -m tcp --dport 1:65535 -j ACCEPT
-A FORWARD -p tcp -m tcp --sport 1:65535 -j ACCEPT
COMMIT
cat /etc/resolv.conf & ping bing.com
on all nodes says:
[1] 6330
nameserver 172.16.254.1
nameserver 8.8.8.8
PING bing.com (13.107.21.200) 56(84) bytes of data.
64 bytes from 13.107.21.200 (13.107.21.200): icmp_seq=2 ttl=111 time=149 ms
uname -rs
on the master node says:
Linux 4.20.10-1.el7.elrepo.x86_64
uname -rs
on the slave nodes says:
Linux 4.4.176-1.el7.elrepo.x86_64
So I don't think the firewall (or maybe flannel?) is the problem, since I use the default configuration. Maybe the linux version? I don't know.
OK, I ran
/sbin/iptables -t nat -I POSTROUTING -s 10.224.0.0/16 -j MASQUERADE
on every node, and that works for me. Thanks.