Kubeadm: CoreDNS does not start on Ubuntu 18.04 Bionic Beaver

Created on 09 Jul 2018  ·  18 comments  ·  Source: kubernetes/kubeadm

What keywords did you search for in the kubeadm issues before filing this one?

dns, resolv.conf, coredns

BUG REPORT

Versions

kubeadm version (kubeadm version):
kubeadm version: &version.Info{Major:"1", Minor:"11", GitVersion:"v1.11.0", GitCommit:"91e7b4fd31fcd3d5f436da26c980becec37ceefe", GitTreeState:"clean", BuildDate:"2018-06-27T20:14:41Z", GoVersion:"go1.10.2", Compiler:"gc", Platform:"linux/amd64"}

Environment:

  • Kubernetes version (kubectl version):
  Client Version: version.Info{Major:"1", Minor:"11", GitVersion:"v1.11.0", GitCommit:"91e7b4fd31fcd3d5f436da26c980becec37ceefe", GitTreeState:"clean", BuildDate:"2018-06-27T20:17:28Z", GoVersion:"go1.10.2", Compiler:"gc", Platform:"linux/amd64"}
  Server Version: version.Info{Major:"1", Minor:"11", GitVersion:"v1.11.0", GitCommit:"91e7b4fd31fcd3d5f436da26c980becec37ceefe", GitTreeState:"clean", BuildDate:"2018-06-27T20:08:34Z", GoVersion:"go1.10.2", Compiler:"gc", Platform:"linux/amd64"}
  • ํด๋ผ์šฐ๋“œ ์ œ๊ณต ์—…์ฒด ๋˜๋Š” ํ•˜๋“œ์›จ์–ด ๊ตฌ์„ฑ :
    ๋ฒ ์–ด ๋ฉ”ํƒˆ (Intel Xeon, 2x2TB HDD, 32GB RAM)
  • OS (์˜ˆ : / etc / os-release) :
    Ubuntu 18.04 LTS (Bionic Beaver)
  • ์ปค๋„ (์˜ˆ : uname -a ) :
    4.15.0-24- ์ผ๋ฐ˜

What happened?

kubeadm์„ ํ†ตํ•ด kubernetes๋ฅผ ์„ค์น˜ ํ•œ ํ›„ coredns ํฌ๋“œ๊ฐ€ ํ‘œ์‹œ๋˜์ง€ ์•Š์Šต๋‹ˆ๋‹ค. kubectl get pods --all-namespaces ๋Š” ๋‹ค์Œ์„ ์ธ์‡„ํ•ฉ๋‹ˆ๋‹ค.

NAMESPACE     NAME                                  READY     STATUS    RESTARTS   AGE
kube-system   coredns-78fcdf6894-kgg8d              0/1       Pending   0          2h
kube-system   coredns-78fcdf6894-vl9jf              0/1       Pending   0          2h
kube-system   etcd-beetlejuice                      1/1       Running   0          2h
kube-system   kube-apiserver-beetlejuice            1/1       Running   0          2h
kube-system   kube-controller-manager-beetlejuice   1/1       Running   0          2h
kube-system   kube-proxy-bjdqd                      1/1       Running   0          2h
kube-system   kube-scheduler-beetlejuice            1/1       Running   0          2h

๋ฌด์Šจ ์ผ์ด ์ผ์–ด๋‚˜๊ธฐ๋ฅผ ๊ธฐ๋Œ€ ํ–ˆ์Šต๋‹ˆ๊นŒ?

The coredns pods change to the Running state and kubernetes runs without problems.

๊ทธ๊ฒƒ์„ ์žฌํ˜„ํ•˜๋Š” ๋ฐฉ๋ฒ• (๊ฐ€๋Šฅํ•œ ํ•œ ์ตœ์†Œํ•œ์œผ๋กœ ์ •ํ™•ํ•˜๊ฒŒ)?

๋‹ค์Œ์€ ์„ค์น˜์— ์‚ฌ์šฉํ•œ ์Šคํฌ๋ฆฝํŠธ์ž…๋‹ˆ๋‹ค.

curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | apt-key add -
cat <<EOF >/etc/apt/sources.list.d/kubernetes.list
deb http://apt.kubernetes.io/ kubernetes-xenial main
EOF

apt-get update
apt-get install -y docker.io
apt-get install -y kubeadm

kubeadm init --pod-network-cidr=10.27.0.0/16

Anything else we need to know?

I think so, but I don't know what ... If you need any kind of logs, just tell me.

kind/bug priority/awaiting-more-evidence priority/important-soon

Most useful comment

์Šค์ผ€์ค„๋Ÿฌ์˜ ๋กœ๊ทธ๊ฐ€ ์—†๋‹ค๋Š” ๊ฒƒ์ด ๋„ˆ๋ฌด ๋‚˜์ฉ๋‹ˆ๋‹ค.

/var/lib/kubelet/kubeadm-flags.env์˜ ๋‚ด์šฉ์€ ๋‹ค์Œ๊ณผ ๊ฐ™์Šต๋‹ˆ๋‹ค.

Looks like the --resolv-conf flag was added, so that's fine.

์ด์ œ๋Š” ๋‹จ์ผ ๋…ธ๋“œ kube๊ฐ€ ๋  ๊ฒƒ์ž…๋‹ˆ๋‹ค.

Please call kubeadm reset and then kubeadm init ... again.
Then copy the config to your user directory and try installing the pod network plugin (weave):

kubectl apply -f "https://cloud.weave.works/k8s/net?k8s-version=$(kubectl version | base64 | tr -d '\n')"

DNS ํฌ๋“œ๊ฐ€ ์ค€๋น„๋˜์—ˆ๋Š”์ง€ ํ™•์ธํ•ฉ๋‹ˆ๋‹ค. ๋ง์ด ์•ˆ ๋  ์ˆ˜๋„ ์žˆ์ง€๋งŒ ์‹œ๋„ํ•ด๋ณด์„ธ์š”.

All 18 comments

ping
Please post the output of:
kubectl describe pod <coredns-pod-ids>

๋ฐ ๊ด€๋ จ ์˜ค๋ฅ˜ :

'systemctl status kubelet'
'journalctl -xeu kubelet'

๋” ๋งŽ์€ ๋„์›€์ด ๋  ๊ฒƒ์ž…๋‹ˆ๋‹ค.
๊ฐ์‚ฌ.

kubectl describe pod coredns-78fcdf6894-kgg8d -n kube-system :

Name:           coredns-78fcdf6894-kgg8d
Namespace:      kube-system
Node:           <none>
Labels:         k8s-app=kube-dns
                pod-template-hash=3497892450
Annotations:    <none>
Status:         Pending
IP:             
Controlled By:  ReplicaSet/coredns-78fcdf6894
Containers:
coredns:
    Image:       k8s.gcr.io/coredns:1.1.3
    Ports:       53/UDP, 53/TCP, 9153/TCP
    Host Ports:  0/UDP, 0/TCP, 0/TCP
    Args:
    -conf
    /etc/coredns/Corefile
    Limits:
    memory:  170Mi
    Requests:
    cpu:        100m
    memory:     70Mi
    Liveness:     http-get http://:8080/health delay=60s timeout=5s period=10s #success=1 #failure=5
    Environment:  <none>
    Mounts:
    /etc/coredns from config-volume (ro)
    /var/run/secrets/kubernetes.io/serviceaccount from coredns-token-4fqm7 (ro)
Conditions:
Type           Status
PodScheduled   False 
Volumes:
config-volume:
    Type:      ConfigMap (a volume populated by a ConfigMap)
    Name:      coredns
    Optional:  false
coredns-token-4fqm7:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  coredns-token-4fqm7
    Optional:    false
QoS Class:       Burstable
Node-Selectors:  <none>
Tolerations:     CriticalAddonsOnly
                node-role.kubernetes.io/master:NoSchedule
                node.kubernetes.io/not-ready:NoExecute for 300s
                node.kubernetes.io/unreachable:NoExecute for 300s
Events:
Type     Reason            Age                 From               Message
----     ------            ----                ----               -------
Warning  FailedScheduling  4m (x1436 over 4h)  default-scheduler  0/1 nodes are available: 1 node(s) were not ready.

kubectl describe pod coredns-78fcdf6894-vl9jf -n kube-system :

Name:           coredns-78fcdf6894-vl9jf
Namespace:      kube-system
Node:           <none>
Labels:         k8s-app=kube-dns
                pod-template-hash=3497892450
Annotations:    <none>
Status:         Pending
IP:             
Controlled By:  ReplicaSet/coredns-78fcdf6894
Containers:
coredns:
    Image:       k8s.gcr.io/coredns:1.1.3
    Ports:       53/UDP, 53/TCP, 9153/TCP
    Host Ports:  0/UDP, 0/TCP, 0/TCP
    Args:
    -conf
    /etc/coredns/Corefile
    Limits:
    memory:  170Mi
    Requests:
    cpu:        100m
    memory:     70Mi
    Liveness:     http-get http://:8080/health delay=60s timeout=5s period=10s #success=1 #failure=5
    Environment:  <none>
    Mounts:
    /etc/coredns from config-volume (ro)
    /var/run/secrets/kubernetes.io/serviceaccount from coredns-token-4fqm7 (ro)
Conditions:
Type           Status
PodScheduled   False 
Volumes:
config-volume:
    Type:      ConfigMap (a volume populated by a ConfigMap)
    Name:      coredns
    Optional:  false
coredns-token-4fqm7:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  coredns-token-4fqm7
    Optional:    false
QoS Class:       Burstable
Node-Selectors:  <none>
Tolerations:     CriticalAddonsOnly
                node-role.kubernetes.io/master:NoSchedule
                node.kubernetes.io/not-ready:NoExecute for 300s
                node.kubernetes.io/unreachable:NoExecute for 300s
Events:
Type     Reason            Age                 From               Message
----     ------            ----                ----               -------
Warning  FailedScheduling  1m (x1467 over 4h)  default-scheduler  0/1 nodes are available: 1 node(s) were not ready.

systemctl status kubelet shows no errors. Here is the full output:

โ— kubelet.service - kubelet: The Kubernetes Node Agent
Loaded: loaded (/lib/systemd/system/kubelet.service; enabled; vendor preset: enabled)
Drop-In: /etc/systemd/system/kubelet.service.d
        โ””โ”€10-kubeadm.conf, override.conf
Active: active (running) since Mon 2018-07-09 17:43:53 CEST; 4h 7min ago
    Docs: http://kubernetes.io/docs/
Main PID: 26710 (kubelet)
    Tasks: 32 (limit: 4915)
CGroup: /system.slice/kubelet.service
        โ””โ”€26710 /usr/bin/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --kubeconfig=/etc/kubernetes/kubelet.conf --config=/var/lib/kubelet/config.yaml --cgroup-driver=cgroupfs --cni-bin-dir=/opt/cni/bin --cni-co

journalctl -xeu kubelet ๋‹ค์Œ ์ค„์„ ์—ฌ๋Ÿฌ ๋ฒˆ ๋ด…๋‹ˆ๋‹ค.

Jul 09 21:54:48 beetlejuice kubelet[26710]: E0709 21:54:48.883071   26710 kubelet.go:2112] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninit
Jul 09 21:54:49 beetlejuice kubelet[26710]: E0709 21:54:49.566069   26710 dns.go:131] Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 213.133.99.99 213.133.98.98 213.133.100.100
Jul 09 21:54:53 beetlejuice kubelet[26710]: W0709 21:54:53.884846   26710 cni.go:172] Unable to update cni config: No networks found in /etc/cni/net.d

For reference, /etc/resolv.conf:

### Hetzner Online GmbH installimage
# nameserver config
nameserver 213.133.99.99
nameserver 213.133.98.98
nameserver 213.133.100.100
nameserver 2a01:4f8:0:1::add:1010
nameserver 2a01:4f8:0:1::add:9999
nameserver 2a01:4f8:0:1::add:9898

Hetzner here is the name of the data center operator.

ping
The scheduler is failing, but the reason isn't clear.

kube-scheduler ํฌ๋“œ์˜ ID๋ฅผ ์ฐพ์„ ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค.
kubectl get pods --all-namespace

๊ทธ๋Ÿฐ ๋‹ค์Œ ํ•ด๋‹น ํฌ๋“œ์—์„œ ํ„ฐ๋ฏธ๋„์„ ์‹œ์ž‘ํ•˜๊ธฐ ์œ„ํ•ด ์ด๊ฒƒ์„ ์‹œ๋„ ํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค.
kubectl exec -ti <POD-ID-HERE> bash -n kube-system

๊ฑฐ๊ธฐ์—์„œ ๋กœ๊ทธ๋ฅผ ์‚ดํŽด๋ณผ ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค.
cat /var/log

๊ด€๋ จ์„ฑ์ด ์žˆ๋‹ค๋Š” ๋ณด์žฅ์€ ์—†์Šต๋‹ˆ๋‹ค.

A couple of other things:

  • init ์ดํ›„์— pod-network๋ฅผ ์„ค์น˜ํ•˜๊ณ  ์žˆ์Šต๋‹ˆ๊นŒ (๊ฒฝ์šฐ์— ๋”ฐ๋ผ ์งˆ๋ฌธ)?
  • ํ›„ init ๋‹น์‹ ์ด ์–ป๋Š” ๋ฌด์Šจ ๋‚ด์šฉ /var/lib/kubelet/kubeadm-flags.env ?

kubectl exec -ti kube-scheduler-beetlejuice bash -n kube-system says:

OCI runtime exec failed: exec failed: container_linux.go:348: starting container process caused "exec: \"bash\": executable file not found in $PATH": unknown command terminated with exit code 126

So I tried kubectl exec -ti kube-scheduler-beetlejuice sh -n kube-system with sh instead of bash ... but there are no files in /var/log

ls -lAh /var :

drwxr-xr-x    3 root     root        4.0K May 22 17:00 spool
drwxr-xr-x    2 root     root        4.0K May 22 17:00 www

์ด์ œ๋Š” ๋‹จ์ผ ๋…ธ๋“œ kube๊ฐ€ ๋  ๊ฒƒ์ž…๋‹ˆ๋‹ค.

/var/lib/kubelet/kubeadm-flags.env ์˜ ๋‚ด์šฉ์€ ๋‹ค์Œ๊ณผ ๊ฐ™์Šต๋‹ˆ๋‹ค.

KUBELET_KUBEADM_ARGS=--cgroup-driver=cgroupfs --cni-bin-dir=/opt/cni/bin --cni-conf-dir=/etc/cni/net.d --network-plugin=cni --resolv-conf=/run/systemd/resolve/resolv.conf

์Šค์ผ€์ค„๋Ÿฌ์˜ ๋กœ๊ทธ๊ฐ€ ์—†๋‹ค๋Š” ๊ฒƒ์ด ๋„ˆ๋ฌด ๋‚˜์ฉ๋‹ˆ๋‹ค.

/var/lib/kubelet/kubeadm-flags.env์˜ ๋‚ด์šฉ์€ ๋‹ค์Œ๊ณผ ๊ฐ™์Šต๋‹ˆ๋‹ค.

Looks like the --resolv-conf flag was added, so that's fine.

์ด์ œ๋Š” ๋‹จ์ผ ๋…ธ๋“œ kube๊ฐ€ ๋  ๊ฒƒ์ž…๋‹ˆ๋‹ค.

Please call kubeadm reset and then kubeadm init ... again.
Then copy the config to your user directory and try installing the pod network plugin (weave):

kubectl apply -f "https://cloud.weave.works/k8s/net?k8s-version=$(kubectl version | base64 | tr -d '\n')"

DNS ํฌ๋“œ๊ฐ€ ์ค€๋น„๋˜์—ˆ๋Š”์ง€ ํ™•์ธํ•ฉ๋‹ˆ๋‹ค. ๋ง์ด ์•ˆ ๋  ์ˆ˜๋„ ์žˆ์ง€๋งŒ ์‹œ๋„ํ•ด๋ณด์„ธ์š”.

The pods of the fresh init look like this (again):

NAMESPACE     NAME                                  READY     STATUS    RESTARTS   AGE
kube-system   coredns-78fcdf6894-lcmg6              0/1       Pending   0          1m
kube-system   coredns-78fcdf6894-wd9nt              0/1       Pending   0          1m
kube-system   etcd-beetlejuice                      1/1       Running   0          18s
kube-system   kube-apiserver-beetlejuice            1/1       Running   0          36s
kube-system   kube-controller-manager-beetlejuice   1/1       Running   0          12s
kube-system   kube-proxy-zrhgj                      1/1       Running   0          1m
kube-system   kube-scheduler-beetlejuice            1/1       Running   0          24s

weave ํ”Œ๋Ÿฌ๊ทธ์ธ์„ ์„ค์น˜ํ•˜๋ฉด ๋‹ค์Œ๊ณผ ๊ฐ™์Šต๋‹ˆ๋‹ค.

NAMESPACE     NAME                                  READY     STATUS    RESTARTS   AGE
kube-system   coredns-78fcdf6894-lcmg6              1/1       Running   0          2m
kube-system   coredns-78fcdf6894-wd9nt              1/1       Running   0          2m
kube-system   etcd-beetlejuice                      1/1       Running   0          1m
kube-system   kube-apiserver-beetlejuice            1/1       Running   0          1m
kube-system   kube-controller-manager-beetlejuice   1/1       Running   0          58s
kube-system   kube-proxy-zrhgj                      1/1       Running   0          2m
kube-system   kube-scheduler-beetlejuice            1/1       Running   0          1m
kube-system   weave-net-ldxg5                       2/2       Running   0          24s

์ด์ œ ์ž‘๋™ํ•ฉ๋‹ˆ๋‹ค ๐Ÿ‘

์ง€๊ธˆ์€ ๋ฌธ์ œ์—†์ด kubernetes-dashboard ์„ค์น˜ํ–ˆ๋Š”๋ฐ ์ด์ „์—๋Š” ์ž‘๋™ํ•˜์ง€ ์•Š์•˜์Šต๋‹ˆ๋‹ค. ๊ทธ๋Ÿฌ๋‚˜ ์ด๊ฒƒ์ด์ด ๋ฌธ์ œ์— ๋Œ€ํ•œ ํ•ด๊ฒฐ์ฑ…์ž…๋‹ˆ๊นŒ?

Thanks for testing.

๊ทธ๋Ÿฌ๋‚˜ ์ด๊ฒƒ์ด์ด ๋ฌธ์ œ์— ๋Œ€ํ•œ ํ•ด๊ฒฐ์ฑ…์ž…๋‹ˆ๊นŒ?

๋‚˜๋Š” ๋งํ•  ๊ฒƒ์ด๋‹ค-์˜ˆ, ์ง€๊ธˆ์€.

Both the CLI and the docs instruct the user to install a pod-network plugin right after init.
We haven't documented exactly what happens if this step is skipped, but the expectation is that the cluster will not work properly.

๋ˆ„๊ตฐ๊ฐ€์ด ๋ฌธ์ œ๋ฅผ ์ข…๊ฒฐํ•ด์„œ๋Š” ์•ˆ๋œ๋‹ค๊ณ  ์ƒ๊ฐํ•˜๋ฉด ๋‹ค์‹œ ์—ด์–ด์ฃผ์„ธ์š”.

์ง์กฐ ํ”Œ๋Ÿฌ๊ทธ์ธ์ด ํŠธ๋ฆญ์„ ์ˆ˜ํ–‰ํ•˜์ง€ ์•Š์•˜์ง€๋งŒ ์ •ํ™•ํ•œ ๋ฌธ์ œ๊ฐ€ ์žˆ์Šต๋‹ˆ๋‹ค. ContainerCreating ์ƒํƒœ์—์„œ coredns ํฌ๋“œ๊ฐ€ ๊ณ„์† ํ‘œ์‹œ๋ฉ๋‹ˆ๋‹ค. ์ด์ œ ๊ฑฐ์˜ ํ•œ ์‹œ๊ฐ„์ด ์ง€๋‚ฌ์œผ๋ฏ€๋กœ ...

linux-uwkw:~ # kubectl cluster-info
Kubernetes master is running at https://192.168.178.163:6443
KubeDNS is running at https://192.168.178.163:6443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy

To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.
linux-uwkw:~ # cat /var/lib/kubelet/kubeadm-flags.env
KUBELET_KUBEADM_ARGS=--cgroup-driver=cgroupfs --cni-bin-dir=/opt/cni/bin --cni-conf-dir=/etc/cni/net.d --network-plugin=cni
  • The --resolv-conf flag is missing, so I think that is the problem. How do I configure kubeadm to add it?
  • --cni-bin-dir=/opt/cni/bin is wrong; that directory doesn't exist on my system.
linux-uwkw:~ # rpm -ql cni
/etc/cni
/etc/cni/net.d
/etc/cni/net.d/99-loopback.conf.sample
/usr/lib/cni
/usr/lib/cni/noop
/usr/sbin/cnitool
/usr/share/doc/packages/cni
/usr/share/doc/packages/cni/CONTRIBUTING.md
/usr/share/doc/packages/cni/DCO
/usr/share/doc/packages/cni/README.md
/usr/share/licenses/cni
/usr/share/licenses/cni/LICENSE

I think I need to put /usr/sbin in there, right?

๋˜ํ•œ ์Šค์ผ€์ค„๋Ÿฌ์˜ ๋กœ๊ทธ๋Š” ๋‹ค์Œ๊ณผ ๊ฐ™์Šต๋‹ˆ๋‹ค.

linux-uwkw:~ # docker logs k8s_kube-scheduler_kube-scheduler-linux-uwkw_kube-system_a00c35e56ebd0bdfcd77d53674a5d2a1_0
I0813 21:18:19.816990       1 server.go:126] Version: v1.11.2
W0813 21:18:19.821719       1 authorization.go:47] Authorization is disabled
W0813 21:18:19.821744       1 authentication.go:55] Authentication is disabled
I0813 21:18:19.821787       1 insecure_serving.go:47] Serving healthz insecurely on 127.0.0.1:10251
E0813 21:18:25.603025       1 reflector.go:205] k8s.io/kubernetes/vendor/k8s.io/client-go/informers/factory.go:130: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list persistentvolumes at the cluster scope
E0813 21:18:25.603122       1 reflector.go:205] k8s.io/kubernetes/vendor/k8s.io/client-go/informers/factory.go:130: Failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list replicationcontrollers at the cluster scope
E0813 21:18:25.603161       1 reflector.go:205] k8s.io/kubernetes/vendor/k8s.io/client-go/informers/factory.go:130: Failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list services at the cluster scope
E0813 21:18:25.603253       1 reflector.go:205] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:176: Failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list pods at the cluster scope
E0813 21:18:25.603286       1 reflector.go:205] k8s.io/kubernetes/vendor/k8s.io/client-go/informers/factory.go:130: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list storageclasses.storage.k8s.io at the cluster scope
E0813 21:18:25.603335       1 reflector.go:205] k8s.io/kubernetes/vendor/k8s.io/client-go/informers/factory.go:130: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list persistentvolumeclaims at the cluster scope
E0813 21:18:25.603364       1 reflector.go:205] k8s.io/kubernetes/vendor/k8s.io/client-go/informers/factory.go:130: Failed to list *v1beta1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list statefulsets.apps at the cluster scope
E0813 21:18:25.603437       1 reflector.go:205] k8s.io/kubernetes/vendor/k8s.io/client-go/informers/factory.go:130: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list poddisruptionbudgets.policy at the cluster scope
E0813 21:18:25.603491       1 reflector.go:205] k8s.io/kubernetes/vendor/k8s.io/client-go/informers/factory.go:130: Failed to list *v1beta1.ReplicaSet: replicasets.extensions is forbidden: User "system:kube-scheduler" cannot list replicasets.extensions at the cluster scope
E0813 21:18:25.605642       1 reflector.go:205] k8s.io/kubernetes/vendor/k8s.io/client-go/informers/factory.go:130: Failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list nodes at the cluster scope
E0813 21:18:26.603723       1 reflector.go:205] k8s.io/kubernetes/vendor/k8s.io/client-go/informers/factory.go:130: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list persistentvolumes at the cluster scope
E0813 21:18:26.606225       1 reflector.go:205] k8s.io/kubernetes/vendor/k8s.io/client-go/informers/factory.go:130: Failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list services at the cluster scope
E0813 21:18:26.606295       1 reflector.go:205] k8s.io/kubernetes/vendor/k8s.io/client-go/informers/factory.go:130: Failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list replicationcontrollers at the cluster scope
E0813 21:18:26.607860       1 reflector.go:205] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:176: Failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list pods at the cluster scope
E0813 21:18:26.611457       1 reflector.go:205] k8s.io/kubernetes/vendor/k8s.io/client-go/informers/factory.go:130: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list storageclasses.storage.k8s.io at the cluster scope
E0813 21:18:26.612777       1 reflector.go:205] k8s.io/kubernetes/vendor/k8s.io/client-go/informers/factory.go:130: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list persistentvolumeclaims at the cluster scope
E0813 21:18:26.616076       1 reflector.go:205] k8s.io/kubernetes/vendor/k8s.io/client-go/informers/factory.go:130: Failed to list *v1beta1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list statefulsets.apps at the cluster scope
E0813 21:18:26.616779       1 reflector.go:205] k8s.io/kubernetes/vendor/k8s.io/client-go/informers/factory.go:130: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list poddisruptionbudgets.policy at the cluster scope
E0813 21:18:26.619308       1 reflector.go:205] k8s.io/kubernetes/vendor/k8s.io/client-go/informers/factory.go:130: Failed to list *v1beta1.ReplicaSet: replicasets.extensions is forbidden: User "system:kube-scheduler" cannot list replicasets.extensions at the cluster scope
E0813 21:18:26.620048       1 reflector.go:205] k8s.io/kubernetes/vendor/k8s.io/client-go/informers/factory.go:130: Failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list nodes at the cluster scope
I0813 21:18:28.429769       1 controller_utils.go:1025] Waiting for caches to sync for scheduler controller
I0813 21:18:28.533687       1 controller_utils.go:1032] Caches are synced for scheduler controller
I0813 21:18:28.533868       1 leaderelection.go:185] attempting to acquire leader lease  kube-system/kube-scheduler...
I0813 21:18:28.539621       1 leaderelection.go:194] successfully acquired lease kube-system/kube-scheduler

I have exactly this problem

๋‚˜๋Š” ๊ทธ๊ฒƒ์ด ๊ฐ™์€ ๋ฌธ์ œ๋ผ๊ณ  ์ƒ๊ฐํ•˜์ง€ ์•Š๋Š”๋‹ค.

KubeDNS is running at

CoreDNS is the default DNS server in 1.11.x. Did you enable it intentionally?

๊ณ ์–‘์ด /var/lib/kubelet/kubeadm-flags.env

/var/lib/kubelet/kubeadm-flags.env ์€ kubeadm ๋Ÿฐํƒ€์ž„์—์„œ ์ž๋™ ์ƒ์„ฑ๋˜๋ฉฐ ํŽธ์ง‘ ํ•  ์ˆ˜ ์—†์Šต๋‹ˆ๋‹ค.
/etc/default/kubelet ์— ํ”Œ๋ž˜๊ทธ๋ฅผ ์ถ”๊ฐ€ ํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค.

์—ฌ๊ธฐ์—์„œ ์ •๋ณด๋ฅผ ์ฐธ์กฐํ•˜์‹ญ์‹œ์˜ค.
https://github.com/kubernetes/kubernetes/blob/master/build/debs/10-kubeadm.conf
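
For illustration, a minimal sketch of that (the flag value is only an example, not a recommendation for this setup): on deb-based installs 10-kubeadm.conf sources /etc/default/kubelet, and anything in KUBELET_EXTRA_ARGS is appended to the kubelet command line (on rpm-based installs the file is /etc/sysconfig/kubelet):

# add an extra kubelet flag and restart the kubelet so it picks the file up
echo 'KUBELET_EXTRA_ARGS="--resolv-conf=/run/systemd/resolve/resolv.conf"' > /etc/default/kubelet
systemctl restart kubelet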

The --resolv-conf flag is missing.

๋ฐฐํฌํŒ์ด ๋‹ค์Œ์„ ์‚ฌ์šฉํ•˜๋Š” ๊ฒฝ์šฐ์—๋งŒ ์ถ”๊ฐ€๋ฉ๋‹ˆ๋‹ค.
https://www.freedesktop.org/software/systemd/man/systemd-resolved.service.html
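
A quick way to check whether that applies on a given host (an assumption about typical systemd setups, not from the thread):

# if systemd-resolved is active, kubeadm points the kubelet at its real resolv.conf
systemctl is-active systemd-resolved
ls -l /run/systemd/resolve/resolv.conf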

--cni-bin-dir=/opt/cni/bin --cni-conf-dir=/etc/cni/net.d

์ด๊ฒƒ์ด ๊ธฐ๋ณธ๊ฐ’์ด์ง€๋งŒ AFAIK๋Š” ๋Ÿฐํƒ€์ž„์— ์ž๋™ ์ถ”๊ฐ€๋˜์ง€ ์•Š์•„์•ผํ•ฉ๋‹ˆ๋‹ค.

๊ท€ํ•˜์˜ ๊ฒฝ์šฐ์— ๋ฌธ์ œ๊ฐ€ ๋ฌด์—‡์ธ์ง€ ๋งํ•˜๊ธฐ๊ฐ€ ์–ด๋ ต์Šต๋‹ˆ๋‹ค.
๋” ๋‚˜์€ ์ƒˆ๋กœ์šด ๋ฌธ์ œ๋ฅผ ์—ด๊ณ  ๋ฌธ์ œ ๋ณด๊ณ ์„œ ํ…œํ”Œ๋ฆฟ์„ ๋”ฐ๋ฅด์‹ญ์‹œ์˜ค.

In the end I found my problem: the CNI loopback binary was missing from /opt/cni/bin.

cd /opt/cni/bin
curl -L -O https://github.com/containernetworking/cni/releases/download/v0.4.0/cni-amd64-v0.4.0.tgz
tar -xf cni-amd64-v0.4.0.tgz
systemctl restart kubelet
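
A quick sanity check after unpacking (not from the original comment): the loopback plugin should now be in place, and the CNI warnings in the kubelet log should stop repeating:

ls -l /opt/cni/bin/loopback
journalctl -u kubelet -n 50 | grep -i cni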

๋‚˜๋Š” ๊ฐ™์€ ๋ฌธ์ œ๋ฅผ ๋งŒ๋‚ฌ๊ณ  ํ”Œ๋ž€๋„ฌ์„ ์„ค์น˜ํ•˜์—ฌ ํ•ด๊ฒฐํ–ˆ์Šต๋‹ˆ๋‹ค. ํ”Œ๋ž€๋„ฌ์„ ์„ค์น˜ ํ•œ ํ›„ coredns ๋ฐ ๊ธฐํƒ€ ํฌ๋“œ๋ฅผ ์˜ฌ๋ฐ”๋ฅด๊ฒŒ ์ƒ์„ฑ ํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค.

ํ”Œ๋ž€๋„ฌ์„ ์„ค์น˜ํ•˜๋ฉด ๋ฌธ์ œ๊ฐ€ ํ•ด๊ฒฐ๋˜๋Š” ์ด์œ ๋Š” ๋ฌด์—‡์ž…๋‹ˆ๊นŒ?

๋˜ํ•œ kubeadm์„ ์‚ฌ์šฉํ•˜์—ฌ K8s ํด๋Ÿฌ์Šคํ„ฐ๋ฅผ ์ดˆ๊ธฐํ™”ํ•˜๊ณ  --pod-network-cidr ์ธ์ˆ˜๋ฅผ ์ถ”๊ฐ€ํ•ฉ๋‹ˆ๋‹ค. ๊ทธ๋ž˜์„œ ํ”Œ๋ž€๋„ฌ์ด๋‚˜ ๋‹ค๋ฅธ ํฌ๋“œ ๋„คํŠธ์›Œํฌ ์• ๋“œ์˜จ์ด ์„ค์น˜๋˜์–ด ์žˆ์ง€ ์•Š์œผ๋ฉด kubelet์ด ํฌ๋“œ์— ip๋ฅผ ํ• ๋‹นํ•˜๋Š” ๋ฐฉ๋ฒ•์„ ์•Œ ์ˆ˜์—†๊ณ  ์˜ค๋ฅ˜๊ฐ€ ๋ฐœ์ƒํ•œ๋‹ค๊ณ  ์ƒ๊ฐํ•ฉ๋‹ˆ๋‹ค.

You said that with journalctl -xeu kubelet you see the following messages:

26710 kubelet.go:2112] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninit
Jul 09 21:54:49 beetlejuice kubelet[26710]: E0709 21:54:49.566069   26710 dns.go:131] Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 213.133.99.99 213.133.98.98 213.133.100.100
Jul 09 21:54:53 beetlejuice kubelet[26710]: W0709 21:54:53.884846   26710 cni.go:172] Unable to update cni config: No networks found in /etc/cni/net.d

๋‚˜๋Š” ๋˜ํ•œ ๋‚ด ์ปดํ“จํ„ฐ์—์„œ ๋น„์Šทํ•œ ๋กœ๊ทธ๋ฅผ ๋ณด์•˜์œผ๋ฏ€๋กœ ์ด๊ฒƒ์ด ๋ฌธ์ œ์˜ ์›์ธ์ด๋ผ๊ณ  ์ƒ๊ฐํ•ฉ๋‹ˆ๋‹ค.

ํ”Œ๋ž€๋„ฌ์„ ์„ค์น˜ํ•˜๋ฉด ๋ฌธ์ œ๊ฐ€ ํ•ด๊ฒฐ๋˜๋Š” ์ด์œ ๋Š” ๋ฌด์—‡์ž…๋‹ˆ๊นŒ?

CNI ํ”Œ๋Ÿฌ๊ทธ์ธ์„ ์„ค์น˜ํ•ด์•ผํ•ฉ๋‹ˆ๋‹ค.
https://kubernetes.io/docs/setup/independent/create-cluster-kubeadm
Installing a pod network add-on
You must install a pod network add-on so that your pods can communicate with each other.

CNI ํ”Œ๋Ÿฌ๊ทธ์ธ์„ ์„ค์น˜ํ•ด์•ผํ•ฉ๋‹ˆ๋‹ค.
https://kubernetes.io/docs/setup/independent/create-cluster-kubeadm

I could not start coredns because no pod network add-on was installed after initializing kubernetes. Now I understand. Thank you.

์Šค์ผ€์ค„๋Ÿฌ์˜ ๋กœ๊ทธ๊ฐ€ ์—†๋‹ค๋Š” ๊ฒƒ์ด ๋„ˆ๋ฌด ๋‚˜์ฉ๋‹ˆ๋‹ค.

/var/lib/kubelet/kubeadm-flags.env์˜ ๋‚ด์šฉ์€ ๋‹ค์Œ๊ณผ ๊ฐ™์Šต๋‹ˆ๋‹ค.

Looks like the --resolv-conf flag was added, so that's fine.

์ด์ œ๋Š” ๋‹จ์ผ ๋…ธ๋“œ kube๊ฐ€ ๋  ๊ฒƒ์ž…๋‹ˆ๋‹ค.

Please call kubeadm reset and then kubeadm init ... again.
Then copy the config to your user directory and try installing the pod network plugin (weave):

kubectl apply -f "https://cloud.weave.works/k8s/net?k8s-version=$(kubectl version | base64 | tr -d '\n')"

DNS ํฌ๋“œ๊ฐ€ ์ค€๋น„๋˜์—ˆ๋Š”์ง€ ํ™•์ธํ•ฉ๋‹ˆ๋‹ค. ๋ง์ด ์•ˆ ๋  ์ˆ˜๋„ ์žˆ์ง€๋งŒ ์‹œ๋„ํ•ด๋ณด์„ธ์š”.

๊ทธ๊ฒƒ์€ ๋‚˜๋ฅผ ์œ„ํ•ด ์ž‘๋™ํ•ฉ๋‹ˆ๋‹ค! ๊ณ ๋งˆ์›Œ

์Šค์ผ€์ค„๋Ÿฌ์˜ ๋กœ๊ทธ๊ฐ€ ์—†๋‹ค๋Š” ๊ฒƒ์ด ๋„ˆ๋ฌด ๋‚˜์ฉ๋‹ˆ๋‹ค.

/var/lib/kubelet/kubeadm-flags.env์˜ ๋‚ด์šฉ์€ ๋‹ค์Œ๊ณผ ๊ฐ™์Šต๋‹ˆ๋‹ค.

Looks like the --resolv-conf flag was added, so that's fine.

์ด์ œ๋Š” ๋‹จ์ผ ๋…ธ๋“œ kube๊ฐ€ ๋  ๊ฒƒ์ž…๋‹ˆ๋‹ค.

Please call kubeadm reset and then kubeadm init ... again.
Then copy the config to your user directory and try installing the pod network plugin (weave):

kubectl apply -f "https://cloud.weave.works/k8s/net?k8s-version=$(kubectl version | base64 | tr -d '\n')"

DNS ํฌ๋“œ๊ฐ€ ์ค€๋น„๋˜์—ˆ๋Š”์ง€ ํ™•์ธํ•ฉ๋‹ˆ๋‹ค. ๋ง์ด ์•ˆ ๋  ์ˆ˜๋„ ์žˆ์ง€๋งŒ ์‹œ๋„ํ•ด๋ณด์„ธ์š”.

serviceaccount/weave-net configured
clusterrole.rbac.authorization.k8s.io/weave-net configured
clusterrolebinding.rbac.authorization.k8s.io/weave-net configured
role.rbac.authorization.k8s.io/weave-net configured
rolebinding.rbac.authorization.k8s.io/weave-net configured
unable to recognize: no matches for kind "DaemonSet" in version "extensions/v1beta1"

It cannot be configured.

"extensions / v1beta1"๋ฒ„์ „์—์„œ "DaemonSet"์ข…๋ฅ˜์™€ ์ผ์น˜ํ•˜๋Š” ํ•ญ๋ชฉ์ด ์—†์Œ์„ ์ธ์‹ ํ•  ์ˆ˜ ์—†์Šต๋‹ˆ๋‹ค.

๊ทธ๊ฒƒ์€ CNI ํ”Œ๋Ÿฌ๊ทธ์ธ ์ธก์˜ ๋ฒ„๊ทธ์ž…๋‹ˆ๋‹ค.
๋Œ€์‹  Callico CNI ํ”Œ๋Ÿฌ๊ทธ์ธ์„ ์‚ฌ์šฉํ•ด๋ณด์‹ญ์‹œ์˜ค.

์ด ํŽ˜์ด์ง€๊ฐ€ ๋„์›€์ด ๋˜์—ˆ๋‚˜์š”?
0 / 5 - 0 ๋“ฑ๊ธ‰