Kubeadm: kubernetes 1.12.0 init with kubeadm fails: node "xxx" not found

Created on 03 Oct 2018  Β·  45 comments  Β·  Source: kubernetes/kubeadm

My environment:

CentOS 7 Linux

/etc/hosts:

192.168.0.106 master01

192.168.0.107 node02

192.168.0.108 node01

master01 μ‹œμŠ€ν…œμ—μ„œ:

/etc/hostname:

master01

master01 μ»΄ν“¨ν„°μ—μ„œ λ‹€μŒκ³Ό 같이 λͺ…령을 μ‹€ν–‰ν•©λ‹ˆλ‹€.

1) yum install docker-ce kubelet kubeadm kubectl

2) systemctl start docker.service

3) vim /etc/sysconfig/kubelet

Edit the file:

KUBELET_EXTRA_ARGS="--fail-swap-on=false"

4) systemctl enable docker kubelet

5) kubeadm init --kubernetes-version=v1.12.0 --pod-network-cidr=10.244.0.0/16 --service-cidr=10.96.0.0/12 --ignore-preflight-errors=all

κ·Έ λ‹€μŒμ—

E1002 23:32:36.072441 49157 kubelet.go:2236] node "master01" not found
E1002 23:32:36.172630 49157 kubelet.go:2236] node "master01" not found
E1002 23:32:36.273892 49157 kubelet.go:2236] node "master01" not found
time="2018-10-02T23:32:36+08:00" level=info msg="shim docker-containerd-shim started" address="/containerd-shim/moby/52fbcdb7864cdf8039ded99b501447f197581a38" pid=49212
E1002 23:32:36.359984 49157 reflector.go:134] k8s.io/kubernetes/pkg/kubelet/kubelet.go:451: Failed to list *v1.Node: Get https://192.168.0.106:6443/api/v1/nodes?fieldSelector=metadata.name%3Dmaster01&limit=500&resourceVersion=0: dial tcp 192.168.0.106:6443: connect: connection refused
I1002 23:32:36.377368 49157 kubelet_node_status.go:276] Setting node annotation to enable volume controller attach/detach
E1002 23:32:36.380290 49157 kubelet.go:2236] node "master01" not found
E1002 23:32:36.380369 49157 reflector.go:134] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: Get https://192.168.0.106:6443/api/v1/pods?fieldSelector=spec.nodeName%3Dmaster01&limit=500&resourceVersion=0: dial tcp 192.168.0.106:6443: connect: connection refused
E1002 23:32:36.380409 49157 reflector.go:134] k8s.io/kubernetes/pkg/kubelet/kubelet.go:442: Failed to list *v1.Service: Get https://192.168.0.106:6443/api/v1/services?limit=500&resourceVersion=0: dial tcp 192.168.0.106:6443: connect: connection refused
time="2018-10-02T23:32:36+08:00" level=info msg="shim docker-containerd-shim started" address="/containerd-shim/moby/f621eca36ce85e815172c37195ae7ac94172c37195ae7ac929112" pid=49243
I1002 23:32:36.414930 49157 kubelet_node_status.go:70] Attempting to register node master01
E1002 23:32:36.416627 49157 kubelet_node_status.go:92] Unable to register node "master01" with API server: Post https://192.168.0.106:6443/api/v1/nodes: dial tcp 192.168.0.106:6443: connect: connection refused
time="2018-10-02T23:32:36+08:00" level=info msg="shim docker-containerd-shim started" address="/containerd-shim/moby/db3f5acb415581d85aef199bea3f85430d199bea3f854302437c7" pid=49259
E1002 23:32:36.488013 49157 kubelet.go:2236] node "master01" not found
time="2018-10-02T23:32:36+08:00" level=info msg="shim docker-containerd-shim started" address="/containerd-shim/moby/505110c39ed4cd5b3fd4fb8630120174fb8630120174371fa" pid=49275
E1002 23:32:36.588919 49157 kubelet.go:2236] node "master01" not found
E1002 23:32:36.691338 49157 kubelet.go:2236] node "master01" not found

λ‚˜λŠ” λ§Žμ€ μ‹œκ°„μ„ μ‹œλ„ν–ˆμŠ΅λ‹ˆλ‹€!

Most useful comment

Same problem with Kubernetes v1.13.0
CentOS 7
docker-ce 18.06 (latest validated version)
dockerd: active, running
kubelet: active, running
selinux: disabled
firewall: disabled

Error:
kubelet[98023]: E1212 21:10:01.708004 98023 kubelet.go:2266] node "node1" not found
/etc/hosts contains the node; it is pingable and reachable. I am effectively running a single master as the single worker (i.e. a tainted node).

Where does K8S look this value up? /etc/hosts?
I can troubleshoot and provide additional evidence if needed.
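For what it's worth, the kubelet does not take the node name from /etc/hosts: it defaults to the machine's hostname unless --hostname-override (or a cloud provider) changes it. A quick way to compare the two, as a sketch:

# The name the kubelet will register under, absent an override:
hostnamectl --static

# Check whether an override is in effect on the running kubelet:
ps -ef | grep kubelet | grep -o 'hostname-override[^ ]*'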

--> Does kubeadm init finish and print the bootstrap token?
It ends with the long error:

[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [node1 localhost] and IPs [10.10.128.186 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [node1 localhost] and IPs [10.10.128.186 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[kubelet-check] Initial timeout of 40s passed.

Unfortunately, an error has occurred:
        timed out waiting for the condition

This error is likely caused by:
        - The kubelet is not running
        - The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)

If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
        - 'systemctl status kubelet'
        - 'journalctl -xeu kubelet'

Additionally, a control plane component may have crashed or exited when started by the container runtime.
To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
Here is one example how you may list all Kubernetes containers running in docker:
        - 'docker ps -a | grep kube | grep -v pause'
        Once you have found the failing container, you can inspect its logs with:
        - 'docker logs CONTAINERID'
error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster

Note: after the timeout, none of the suggested commands reported anything worth mentioning here.

kubelet and kubeadm versions?
---> 1.13.0
kubeadm version: &version.Info{Major:"1", Minor:"13", GitVersion:"v1.13.0", GitCommit:"ddf47ac13c1a9483ea035a79cd7c10005ff21a6d", GitTreeState:"clean", BuildDate:"2018-12-03T21:02:01Z", GoVersion:"go1.11.2", Compiler:"gc", Platform:"linux/amd64"}

λ˜ν•œ kube λ‘œκ·Έμ—μ„œ μ’€ 더 λͺ…ν™•ν•˜κ³  μƒμ„Έν•˜κ²Œ "λ…Έλ“œλ₯Ό 찾을 수 μ—†μŒ"보닀 더 λ‚˜μ€ 였λ₯˜ λ©”μ‹œμ§€λ₯Ό μ„€μ •ν•˜λ©΄ μ•ˆ λ˜λ‚˜μš”?

감사 ν•΄μš”

All 45 comments

First error message: unable to load client CA file /etc/kubernetes/pki/ca.crt: open /etc/kubernetes/pki/ca.crt: no such file or directory

Hi, some questions here:
1) Does kubeadm init finish and print the bootstrap token?
2) Container runtime version?
3) Are the kubelet and kubeadm versions 1.12?

/priority needs-more-evidence

You need to run systemctl start kubelet before kubeadm init.

I had the same problem because my CPU had fewer than 2 cores.

Same problem

@javacppc how did you solve it? When I run systemctl start kubelet I get an error code.

Same problem with kubernetes 1.12.2.
@Javacppc how did you solve it?

Same problem

Same problem

Hi guys,

Facing the same problem here. When the cluster came up I got the message with the token, but I cannot install Weave Net:
kubectl apply -f "https://cloud.weave.works/k8s/net?k8s-version=$(kubectl version | base64 | tr -d '\n')" The connection to the server 192.168.56.104:6443 was refused - did you specify the right host or port?

Going into the logs, I get the message about the node name:

Dec 02 22:27:55 kubemaster5 kubelet[2838]: E1202 22:27:55.128645 2838 kubelet.go:2236] node "kubemaster5" not found

Can anybody shed some light on this?

Thanks!

My problem is solved; it's actually not a bug. It was because the apiserver failed to start for some reason.

"apiserverκ°€ μ–΄λ–€ 이유둜 μ‹œμž‘ν•˜μ§€ λͺ»ν–ˆμŠ΅λ‹ˆλ‹€"? μžμ„Ένžˆ μ•Œλ €μ£Όμ‹€ 수 μžˆλ‚˜μš”??

I solved my problem a few days ago, updating from 1.11.4 -> 1.12.3. I have:

  1. api-server - runs on a dedicated virtual interface with its own network (bare metal).
     The kubeadm init/join flag apiserver-advertise-address brought it up on that specific interface, but the packets for the setup/health checks went out via the standard route in the routing table (the default interface). Binding the bind-address parameter in /etc/kubernetes/manifests/kube-apiserver.yaml to the IP of the virtual interface helped.
  2. flannel - same situation with the network after the controller and scheduler pods were created. The DNS deployment failed with connection refused against the api server clusterIP 10.96.0.1:443 (default routing table). I fixed it with the --node-ip flag in /etc/systemd/system/kubelet.service.d/10-kubeadm.conf, setting the cluster node's IP to the virtual interface's IP.

After this I have a Ready node on version 1.12.3. The most useful information came from docker logs + kubectl logs.
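A minimal sketch of those two changes (10.0.0.10 stands in for the virtual interface's IP; it is not from the original report):

# /etc/kubernetes/manifests/kube-apiserver.yaml -- add under the kube-apiserver command:
#     - --bind-address=10.0.0.10
# /etc/systemd/system/kubelet.service.d/10-kubeadm.conf -- register with the same IP:
#     Environment="KUBELET_EXTRA_ARGS=--node-ip=10.0.0.10"
sudo systemctl daemon-reload
sudo systemctl restart kubelet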

Same problem with v1.13.0


Same problem...

$ systemctl status kubelet
● kubelet.service - kubelet: The Kubernetes Node Agent
   Loaded: loaded (/lib/systemd/system/kubelet.service; enabled; vendor preset: enabled)
  Drop-In: /etc/systemd/system/kubelet.service.d
           └─10-kubeadm.conf
   Active: active (running) since Fri 2018-12-14 19:05:47 UTC; 2min 2s ago
     Docs: https://kubernetes.io/docs/home/
 Main PID: 9114 (kubelet)
    Tasks: 23 (limit: 4915)
   CGroup: /system.slice/kubelet.service
           └─9114 /usr/bin/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --kubeconfig=/etc/kubernetes/kubelet.conf --config=/var/lib/kubelet/config.yaml --cgroup-d

Dec 14 19:07:49 pineview kubelet[9114]: E1214 19:07:49.862262    9114 kuberuntime_manager.go:657] createPodSandbox for pod "kube-scheduler-pineview_kube-system(7f99b6875de942b000954351c4a
Dec 14 19:07:49 pineview kubelet[9114]: E1214 19:07:49.862381    9114 pod_workers.go:186] Error syncing pod 7f99b6875de942b000954351c4ac09b5 ("kube-scheduler-pineview_kube-system(7f99b687
Dec 14 19:07:49 pineview kubelet[9114]: E1214 19:07:49.906855    9114 remote_runtime.go:96] RunPodSandbox from runtime service failed: rpc error: code = Unknown desc = failed to start san
Dec 14 19:07:49 pineview kubelet[9114]: E1214 19:07:49.906944    9114 kuberuntime_sandbox.go:65] CreatePodSandbox for pod "etcd-pineview_kube-system(b7841e48f3e7b81c3cda6872104ba3de)" fai
Dec 14 19:07:49 pineview kubelet[9114]: E1214 19:07:49.906981    9114 kuberuntime_manager.go:657] createPodSandbox for pod "etcd-pineview_kube-system(b7841e48f3e7b81c3cda6872104ba3de)" fa
Dec 14 19:07:49 pineview kubelet[9114]: E1214 19:07:49.907100    9114 pod_workers.go:186] Error syncing pod b7841e48f3e7b81c3cda6872104ba3de ("etcd-pineview_kube-system(b7841e48f3e7b81c3c
Dec 14 19:07:49 pineview kubelet[9114]: E1214 19:07:49.933627    9114 kubelet.go:2236] node "pineview" not found
Dec 14 19:07:50 pineview kubelet[9114]: E1214 19:07:50.033880    9114 kubelet.go:2236] node "pineview" not found
Dec 14 19:07:50 pineview kubelet[9114]: E1214 19:07:50.134064    9114 kubelet.go:2236] node "pineview" not found
Dec 14 19:07:50 pineview kubelet[9114]: E1214 19:07:50.184943    9114 event.go:212] Unable to write event: 'Post https://192.168.1.235:6443/api/v1/namespaces/default/events: dial tcp 192.

Same problem:

Ubuntu 18.04.1 LTS
Kubernetes v1.13.1 (with cri-o 1.11)

I followed the setup instructions from kubernetes.io:
https://kubernetes.io/docs/setup/independent/install-kubeadm/
https://kubernetes.io/docs/setup/cri/#cri-o

systemctl enable kubelet.service
kubeadm init --pod-network-cidr=192.168.0.0/16 --cri-socket=/var/run/crio/crio.sock

/etc/hosts

127.0.0.1       localhost
::1             localhost ip6-localhost ip6-loopback
ff02::1         ip6-allnodes
ff02::2         ip6-allrouters

127.0.1.1       master01.mydomain.tld master01
::1             master01.mydomain.tld master01

/etc/hostname


systemctl status kubelet

● kubelet.service - kubelet: The Kubernetes Node Agent
   Loaded: loaded (/lib/systemd/system/kubelet.service; enabled; vendor preset: enabled)
  Drop-In: /etc/systemd/system/kubelet.service.d
           └─10-kubeadm.conf
   Active: active (running) since Tue 2018-12-18 16:19:54 CET; 20min ago
     Docs: https://kubernetes.io/docs/home/
 Main PID: 10148 (kubelet)
    Tasks: 21 (limit: 2173)
   CGroup: /system.slice/kubelet.service
           └─10148 /usr/bin/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --kubeconfig=/etc/kubernetes/kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=/var/run/crio/crio.sock --resolv-conf=/run/systemd/resolve/resolv.conf

Dec 18 16:40:52 master01 kubelet[10148]: E1218 16:40:52.795313   10148 kubelet.go:2266] node "master01" not found
Dec 18 16:40:52 master01 kubelet[10148]: E1218 16:40:52.896277   10148 kubelet.go:2266] node "master01" not found
Dec 18 16:40:52 master01 kubelet[10148]: E1218 16:40:52.997864   10148 kubelet.go:2266] node "master01" not found
Dec 18 16:40:53 master01 kubelet[10148]: E1218 16:40:53.098927   10148 kubelet.go:2266] node "master01" not found
Dec 18 16:40:53 master01 kubelet[10148]: E1218 16:40:53.200355   10148 kubelet.go:2266] node "master01" not found
Dec 18 16:40:53 master01 kubelet[10148]: E1218 16:40:53.281586   10148 reflector.go:134] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: Get https://192.168.178.27:6443/api/v1/pods?fieldSelector=spec.nodeName%3Dmaster01&limit=500&resourceVersion=0: dial tcp 192.168.178.27:6443: connect: connection refused
Dec 18 16:40:53 master01 kubelet[10148]: E1218 16:40:53.282143   10148 reflector.go:134] k8s.io/kubernetes/pkg/kubelet/kubelet.go:444: Failed to list *v1.Service: Get https://192.168.178.27:6443/api/v1/services?limit=500&resourceVersion=0: dial tcp 192.168.178.27:6443: connect: connection refused
Dec 18 16:40:53 master01 kubelet[10148]: E1218 16:40:53.283945   10148 reflector.go:134] k8s.io/kubernetes/pkg/kubelet/kubelet.go:453: Failed to list *v1.Node: Get https://192.168.178.27:6443/api/v1/nodes?fieldSelector=metadata.name%3Dmaster01&limit=500&resourceVersion=0: dial tcp 192.168.178.27:6443: connect: connection refused
Dec 18 16:40:53 master01 kubelet[10148]: E1218 16:40:53.301468   10148 kubelet.go:2266] node "master01" not found
Dec 18 16:40:53 master01 kubelet[10148]: E1218 16:40:53.402256   10148 kubelet.go:2266] node "master01" not found

@fhemberger I figured out my problem. I had installed Docker using snap. After removing it and reinstalling with apt, kubeadm worked fine.

@cjbottaro I don't use Docker at all, only cri-o.

Same problem with v1.13.1

If you're using systemd and cri-o, you have to set it as the cgroup driver in /var/lib/kubelet/config.yaml (or pass the snippet below as part of kubeadm init --config=config.yaml):

apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
cgroupDriver: systemd
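As a sketch, a complete file that embeds this snippet, assuming the cri-o socket path used earlier in this thread (kubeadm fills in defaults for anything omitted):

cat > config.yaml <<'EOF'
apiVersion: kubeadm.k8s.io/v1beta1
kind: InitConfiguration
nodeRegistration:
  criSocket: /var/run/crio/crio.sock
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
cgroupDriver: systemd
EOF

kubeadm init --config=config.yaml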

kubelet λ‘œκ·Έμ—μ„œ 이것을 λ°œκ²¬ν•˜λ©΄:

remote_runtime.go:96] RunPodSandbox from runtime service failed: rpc error: code = Unknown desc = cri-o configured with systemd cgroup manager, but did not receive slice as parent: /kubepods/besteffort/…

I ran into the same problem today.

I fixed it by removing /var/lib/kubelet/ (rm -rf /var/lib/kubelet/) and reinstalling.

@JishanXing thank you! This also solved my problem running on Raspbian Stretch lite.

I fixed it by removing /etc/systemd/system/kubelet.service.d/20-etcd-service-manager.conf.

It's better to use the kubeadm reset command.
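A sketch of that cleaner teardown path (the rm lines are optional cleanup, not something kubeadm requires):

sudo kubeadm reset -f        # tears down what kubeadm init set up, including kubelet state
sudo rm -rf /etc/cni/net.d   # optional: leftover CNI config
rm -rf $HOME/.kube/config    # optional: stale admin kubeconfig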

@fhemberger how did you solve it? Same question here, thanks.

I had the same problem when upgrading k8s from 1.13.3 to 1.13.4...
Solved it by editing /etc/kubernetes/manifests/kube-scheduler.yaml and fixing the image version:
image: k8s.gcr.io/kube-scheduler:v1.13.3 ==> image: k8s.gcr.io/kube-scheduler:v1.13.4
Same for kube-controller-manager.yaml and kube-apiserver.yaml.
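The same edit across all three manifests in one pass, as a sketch (the kubelet restarts the static pods as soon as the files change):

sudo sed -i 's/v1\.13\.3/v1.13.4/' \
    /etc/kubernetes/manifests/kube-apiserver.yaml \
    /etc/kubernetes/manifests/kube-controller-manager.yaml \
    /etc/kubernetes/manifests/kube-scheduler.yaml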

The latest approach is to add the --image-repository registry.aliyuncs.com/google_containers option. My k8s version is 1.14.0, docker version: 18.09.2.

kubeadm init --image-repository registry.aliyuncs.com/google_containers --kubernetes-version v1.14.0 --pod-network-cidr=192.168.0.0/16
[init] Using Kubernetes version: v1.14.0
[preflight] Running pre-flight checks
[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Activating the kubelet service
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [jin-virtual-machine kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 192.168.232.130]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [jin-virtual-machine localhost] and IPs [192.168.232.130 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [jin-virtual-machine localhost] and IPs [192.168.232.130 127.0.0.1 ::1]
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[apiclient] All control plane components are healthy after 17.004356 seconds
[upload-config] storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.14" in namespace kube-system with the configuration for the kubelets in the cluster
[upload-certs] Skipping phase. Please see --experimental-upload-certs
[mark-control-plane] Marking the node jin-virtual-machine as control-plane by adding the label "node-role.kubernetes.io/master=''"
[mark-control-plane] Marking the node jin-virtual-machine as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[bootstrap-token] Using token: xucir0.o4kzo3qqjyjnzphl
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] creating the "cluster-info" ConfigMap in the "kube-public" namespace
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 192.168.232.130:6443 --token xucir0.o4kzo3qqjyjnzphl \
    --discovery-token-ca-cert-hash sha256:022048b22926a2cb2f8295ce2e3f1f6fa7ffe1098bc116f7d304a26bcb78656

I had the same problem with kubernetes v1.14.1 and cri-o v1.14.0 on a GCP Ubuntu 18.04 VM. Switching to docker resolved it. See: https://github.com/cri-o/cri-o/issues/2357

My problem was mismatched cgroup drivers. CRIO uses systemd by default, while the kubelet uses cgroupfs by default.

cat /etc/crio/crio.conf | grep cgroup
# cgroup_manager is the cgroup management implementation to be used
cgroup_manager = "systemd"

In that case, see https://kubernetes.io/docs/setup/independent/install-kubeadm/#configure-cgroup-driver-used-by-kubelet-on-master-node

Just write the file:

echo "KUBELET_EXTRA_ARGS=--cgroup-driver=systemd" > /etc/default/kubelet

그런 λ‹€μŒ kubeadm initλ₯Ό μ‹€ν–‰ν•˜μ‹­μ‹œμ˜€. λ˜λŠ” cgroup_managerλ₯Ό cgroupfs

Unlike docker, cri-o and containerd are a bit trickier to manage in terms of cgroup driver detection, but there are plans to support this in kubeadm.

λ„μ»€λŠ” 이미 μ²˜λ¦¬λ˜μ—ˆμŠ΅λ‹ˆλ‹€.

λ”°λΌμ„œ λΆ„λͺ…νžˆ 해결책은 μ—†μ§€λ§Œ ν΄λŸ¬μŠ€ν„° $(yes | kubeadm reset)λ₯Ό μž¬μ„€μ •ν•˜λŠ” 것 μ™Έμ—λŠ” 제 μƒκ°μ—λŠ” 해결책이 μ•„λ‹™λ‹ˆλ‹€!

Changing the image repository works for me, but it's not a good solution:
--image-repository registry.aliyuncs.com/google_containers

In my case this worked:

sed -i 's/cgroup-driver=systemd/cgroup-driver=cgroupfs/g' /etc/systemd/system/kubelet.service.d/10-kubeadm.conf

I had the same problem. I ran kubeadm init --config=init-config.yaml and it failed; this file was generated by kubeadm. The advertiseAddress field defaults to 1.2.3.4 in the file, which makes the etcd container fail to start. After changing it to 127.0.0.1, the etcd container started successfully and kubeadm init succeeded.

To troubleshoot this, use docker ps -a to list all containers, and if some of them have exited, use docker logs CONTAINER_ID to see what happened. Hope it helps.
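A sketch of that workflow (on recent kubeadm versions, kubeadm config print init-defaults is what generates the file with the 1.2.3.4 placeholder; the sed line swaps in a reachable address):

kubeadm config print init-defaults > init-config.yaml
# Replace the placeholder with an address this host can actually bind.
sed -i 's/advertiseAddress: 1.2.3.4/advertiseAddress: 127.0.0.1/' init-config.yaml
sudo kubeadm init --config=init-config.yaml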

Guys, does anyone have a workaround? Same problem here, but with k3s.

@MateusMac you should open a bug report against k3s too.

I've been working for a week trying to get kubeadm up.
Ubuntu 18.04
docker 18.06-2-ce
k8s 1.15.1
sudo kubeadm init --pod-network-cidr=10.244.0.0/16

Fails with:

[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[kubelet-check] Initial timeout of 40s passed.

Unfortunately, an error has occurred:
        timed out waiting for the condition

This error is likely caused by:
        - The kubelet is not running
        - The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)

If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
        - 'systemctl status kubelet'
        - 'journalctl -xeu kubelet'

Additionally, a control plane component may have crashed or exited when started by the container runtime.
To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
Here is one example how you may list all Kubernetes containers running in docker:
        - 'docker ps -a | grep kube | grep -v pause'
        Once you have found the failing container, you can inspect its logs with:
        - 'docker logs CONTAINERID'
error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster

The kubelet logs show it cannot find the node, so it never gets to first base:

warproot@warp02:~$ systemctl status kubelet
● kubelet.service - kubelet: The Kubernetes Node Agent
   Loaded: loaded (/lib/systemd/system/kubelet.service; enabled; vendor preset: enabled)
  Drop-In: /etc/systemd/system/kubelet.service.d
           └─10-kubeadm.conf
   Active: active (running) since Sun 2019-08-04 18:22:26 AEST; 5min ago
     Docs: https://kubernetes.io/docs/home/
 Main PID: 12569 (kubelet)
    Tasks: 27 (limit: 9830)
   CGroup: /system.slice/kubelet.service
           └─12569 /usr/bin/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --kubeconfig=/etc/kubernetes/kubelet.conf --config=/var/lib/kubelet/config.yaml --cgroup-dri

Aug 04 18:28:03 warp02 kubelet[12569]: E0804 18:28:03.322762   12569 kuberuntime_sandbox.go:68] CreatePodSandbox for pod "kube-scheduler-warp02_kube-system(ecae9d12d3610192347be3d1aa5aa552)"
Aug 04 18:28:03 warp02 kubelet[12569]: E0804 18:28:03.322806   12569 kuberuntime_manager.go:692] createPodSandbox for pod "kube-scheduler-warp02_kube-system(ecae9d12d3610192347be3d1aa5aa552)
Aug 04 18:28:03 warp02 kubelet[12569]: E0804 18:28:03.322872   12569 pod_workers.go:190] Error syncing pod ecae9d12d3610192347be3d1aa5aa552 ("kube-scheduler-warp02_kube-system(ecae9d12d36101
Aug 04 18:28:03 warp02 kubelet[12569]: E0804 18:28:03.373094   12569 kubelet.go:2248] node "warp02" not found
Aug 04 18:28:03 warp02 kubelet[12569]: E0804 18:28:03.375587   12569 reflector.go:125] k8s.io/client-go/informers/factory.go:133: Failed to list *v1beta1.CSIDriver: Get https://10.1.1.4:6443
Aug 04 18:28:03 warp02 kubelet[12569]: E0804 18:28:03.473295   12569 kubelet.go:2248] node "warp02" not found
Aug 04 18:28:03 warp02 kubelet[12569]: E0804 18:28:03.573567   12569 kubelet.go:2248] node "warp02" not found
Aug 04 18:28:03 warp02 kubelet[12569]: E0804 18:28:03.575495   12569 reflector.go:125] k8s.io/client-go/informers/factory.go:133: Failed to list *v1beta1.RuntimeClass: Get https://10.1.1.4:6
Aug 04 18:28:03 warp02 kubelet[12569]: E0804 18:28:03.590886   12569 event.go:249] Unable to write event: 'Post https://10.1.1.4:6443/api/v1/namespaces/default/events: dial tcp 10.1.1.4:6443
Aug 04 18:28:03 warp02 kubelet[12569]: E0804 18:28:03.673767   12569 kubelet.go:2248] node "warp02" not found




μ΄λŸ¬ν•œ λ² μ–΄ λ©”νƒˆ μ‹œμŠ€ν…œμ— μ—¬λŸ¬ NICκ°€ μžˆλ‹€λŠ” 점에 μœ μ˜ν•΄μ•Ό ν•©λ‹ˆλ‹€.

warproot@warp02:~$ ifconfig
docker0: flags=4099<UP,BROADCAST,MULTICAST>  mtu 1500
        inet 172.17.0.1  netmask 255.255.0.0  broadcast 172.17.255.255
        inet6 fe80::42:feff:fe65:37f  prefixlen 64  scopeid 0x20<link>
        ether 02:42:fe:65:03:7f  txqueuelen 0  (Ethernet)
        RX packets 0  bytes 0 (0.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 6  bytes 516 (516.0 B)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

enp35s0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 10.0.0.2  netmask 255.255.255.0  broadcast 10.0.0.255
        inet6 fe80::32b5:c2ff:fe02:410b  prefixlen 64  scopeid 0x20<link>
        ether 30:b5:c2:02:41:0b  txqueuelen 1000  (Ethernet)
        RX packets 46  bytes 5821 (5.8 KB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 70  bytes 7946 (7.9 KB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

enp6s0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 10.1.1.4  netmask 255.255.255.0  broadcast 10.1.1.255
        inet6 fd42:59ff:1166:0:25a7:3617:fee6:424e  prefixlen 64  scopeid 0x0<global>
        inet6 fe80::1a03:73ff:fe44:5694  prefixlen 64  scopeid 0x20<link>
        inet6 fd9e:fdd6:9e01:0:1a03:73ff:fe44:5694  prefixlen 64  scopeid 0x0<global>
        ether 18:03:73:44:56:94  txqueuelen 1000  (Ethernet)
        RX packets 911294  bytes 1361047672 (1.3 GB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 428759  bytes 29198065 (29.1 MB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0
        device interrupt 17  

ib0: flags=4099<UP,BROADCAST,MULTICAST>  mtu 4092
        unspec A0-00-02-10-FE-80-00-00-00-00-00-00-00-00-00-00  txqueuelen 256  (UNSPEC)
        RX packets 0  bytes 0 (0.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 0  bytes 0 (0.0 B)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

ib1: flags=4099<UP,BROADCAST,MULTICAST>  mtu 4092
        unspec A0-00-02-20-FE-80-00-00-00-00-00-00-00-00-00-00  txqueuelen 256  (UNSPEC)
        RX packets 0  bytes 0 (0.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 0  bytes 0 (0.0 B)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

lo: flags=73<UP,LOOPBACK,RUNNING>  mtu 65536
        inet 127.0.0.1  netmask 255.0.0.0
        inet6 ::1  prefixlen 128  scopeid 0x10<host>
        loop  txqueuelen 1000  (Local Loopback)
        RX packets 25473  bytes 1334779 (1.3 MB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 25473  bytes 1334779 (1.3 MB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0


I don't know if that is the problem, but I have set up the /etc/hosts file as follows:

warproot@warp02:~$ cat /etc/hosts
127.0.0.1       localhost.localdomain   localhost
::1             localhost6.localdomain6 localhost6
# add our host name
10.1.1.4 warp02 warp02.ad.xxx.com
# The following lines are desirable for IPv6 capable hosts
::1     localhost ip6-localhost ip6-loopback
fe00::0 ip6-localnet
ff02::1 ip6-allnodes
ff02::2 ip6-allrouters
ff02::3 ip6-allhosts
# add our ipv6 host name
fd42:59ff:1166:0:25a7:3617:fee6:424e warp02 warp02.ad.xxx.com

warproot@warp02:~$

λ”°λΌμ„œ NIC 10.1.1.4λ₯Ό k8s의 "λ„€νŠΈμ›Œν¬"둜 λ³΄λŠ” 것이 μ„€μ •(제 μƒκ°μ—λŠ”)μž…λ‹ˆλ‹€.

nslookup on the node name seems to work fine:

warproot@warp02:~$ nslookup warp02
Server:         127.0.0.53
Address:        127.0.0.53#53

Non-authoritative answer:
Name:   warp02.ad.xxx.com
Address: 10.1.1.4
Name:   warp02.ad.xxx.com
Address: fd42:59ff:1166:0:25a7:3617:fee6:424e

warproot@warp02:~$

I have been over the kubeadm install docs several times.

Bizarre. It just can't find the network.

Baffled.
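On a multi-homed host like this it may be worth telling kubeadm explicitly which address to advertise, as discussed earlier in this thread for the bare-metal case (10.1.1.4 is taken from the ifconfig output above; this is a suggestion, not a confirmed fix):

sudo kubeadm reset -f
sudo kubeadm init --apiserver-advertise-address=10.1.1.4 --pod-network-cidr=10.244.0.0/16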

On version 1.15.3 I was able to fix this on Ubuntu 18.04 by adding:

kind: InitConfiguration
nodeRegistration:
  kubeletExtraArgs:
    cgroup-driver: "systemd"

to my kubeadm config, and then running kubeadm init.

Ubuntu 18.04μ—μ„œ 버전 1.15.3κ³Ό λ™μΌν•œ λ¬Έμ œκ°€ μžˆμŠ΅λ‹ˆλ‹€.
@kris-nova 이 ꡬ성 파일의 μœ„μΉ˜λ₯Ό ​​지정해 μ£Όμ‹œλ©΄ 정말 κ°μ‚¬ν•˜κ² μŠ΅λ‹ˆλ‹€ :-)

μ—…λ°μ΄νŠΈ: μ΄μœ λŠ” μ•Œ 수 μ—†μ§€λ§Œ μ§€κΈˆμ€ ꡬ성을 λ³€κ²½ν•˜μ§€ μ•Šκ³ λ„ μž‘λ™ν•©λ‹ˆλ‹€!
(μ°Έκ³ : 관련이 μžˆλŠ”μ§€λŠ” λͺ¨λ₯΄κ² μ§€λ§Œ kubeadm init λ₯Ό λ‹€μ‹œ μ‹œλ„ν•˜κΈ° 전에 v.19.03.1μ—μ„œ v.19.03.2둜 dockerλ₯Ό μ—…λ°μ΄νŠΈν–ˆμŠ΅λ‹ˆλ‹€.)

I got the error below while running kubeadm init, i.e. node "nodexx" not found:

[root@node01 ~]# journalctl -xeu kubelet
Nov 07 10:34:02 node01 kubelet[2968]: E1107 10:34:02.682095 2968 kubelet.go:2267] node "node01" not found
Nov 07 10:34:02 node01 kubelet[2968]: E1107 10:34:02.782554 2968 kubelet.go:2267] node "node01" not found
Nov 07 10:34:02 node01 kubelet[2968]: E1107 10:34:02.829142 2968 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1beta1.CSID
Nov 07 10:34:02 node01 kubelet[2968]: E1107 10:34:02.884058 2968 kubelet.go:2267] node "node01" not found
Nov 07 10:34:02 node01 kubelet[2968]: E1107 10:34:02.984510 2968 kubelet.go:2267] node "node01" not found
Nov 07 10:34:03 node01 kubelet[2968]: E1107 10:34:03.030884 2968 reflector.go:123]

Solution:

setenforce 0

sed -i --follow-symlinks 's/SELINUX=enforcing/SELINUX=disabled/g' /etc/sysconfig/selinux

Same problem

제 κ²½μš°μ—λŠ” λ§ˆμŠ€ν„° λ…Έλ“œμ˜ μ‹œκ°„ λ“œλ¦¬ν”„νŠΈλ‘œ 인해 _전원이 μ°¨λ‹¨λœ ν›„ λ°œμƒν–ˆμŠ΅λ‹ˆλ‹€.
λ‚˜λŠ” 그것을 μ‹€ν–‰ν•˜μ—¬ κ³ μ³€λ‹€.

# Correcting the time as mentioned here https://askubuntu.com/a/254846/861548
sudo service ntp stop
sudo ntpdate -s time.nist.gov
sudo service ntp start
# Then restarting the kubelet
sudo systemctl restart kubelet.service
# I also had to run daemon-reload as I got the following warning
# Warning: The unit file, source configuration file or drop-ins of kubelet.service changed on disk. Run 'systemctl daemon-reload' to reload units.
sudo systemctl daemon-reload
# I also made another restart, which I don't know whether needed or not
sudo systemctl restart kubelet.service

λ™μΌν•œ 문제λ₯Ό node "xxxx" not found ν–ˆμŠ΅λ‹ˆλ‹€. μ»¨ν…Œμ΄λ„ˆ λ‘œκ·Έκ°€ docker logs container_id λ₯Ό μ‚¬μš©ν•˜λŠ”μ§€ ν™•μΈν•œ λ‹€μŒ apiserverκ°€ 127.0.0.1:2379 연결을 μ‹œλ„ν•˜κ³  파일 νŽΈμ§‘ Β· /etc/kubernetes/manifests/etcd.yaml , λ‹€μ‹œ μ‹œμž‘, 문제 ν•΄κ²° 。

이 νŽ˜μ΄μ§€κ°€ 도움이 λ˜μ—ˆλ‚˜μš”?
0 / 5 - 0 λ“±κΈ‰