My environment:
CentOS 7 Linux
/etc/hosts:
192.168.0.106 master01
192.168.0.107 node02
192.168.0.108 node01
On the master01 system:
/etc/hostname:
master01
On the master01 machine I ran the following commands:
1) yum install docker-ce kubelet kubeadm kubectl
2) systemctl start docker.service
3) vim /etc/sysconfig/kubelet
Edit the file:
KUBELET_EXTRA_ARGS="--fail-swap-on=false"
4) systemctl enable docker kubelet
5) kubeadm init --kubernetes-version=v1.12.0 --pod-network-cidr=10.244.0.0/16 --service-cidr=10.96.0.0/12 --ignore-preflight-errors=all
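Since the init above suppresses everything with --ignore-preflight-errors=all, it may be worth checking the most common pre-flight failure (active swap) explicitly instead of hiding it. A minimal sketch; the helper name is mine, and it reads a /proc/swaps-style file:

```shell
# check_swap FILE: report whether swap is active, reading a /proc/swaps-style
# file (the real file has one header line; every further line is a swap device).
check_swap() {
    if [ "$(wc -l < "$1")" -gt 1 ]; then
        echo "swap is ON"
    else
        echo "swap is OFF"
    fi
}

# On a live host: check_swap /proc/swaps
# If it reports ON, either run 'swapoff -a' (and drop the /etc/fstab entry),
# or keep the --fail-swap-on=false workaround shown above.
```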
Then I get:
E1002 23:32:36.072441 49157 kubelet.go:2236] node "master01" not found
E1002 23:32:36.172630 49157 kubelet.go:2236] node "master01" not found
E1002 23:32:36.273892 49157 kubelet.go:2236] node "master01" not found
time="2018-10-02T23:32:36+08:00" level=info msg="shim docker-containerd-shim started" address="/containerd-shim/moby/52fbcdb7864cdf8039ded99b501447f19=7581a38" pid=49212
E1002 23:32:36.359984 49157 reflector.go:134] k8s.io/kubernetes/pkg/kubelet/kubelet.go:451: Failed to list *v1.Node: Get https://192.168.0.106:6443/api/v1/nodes?fieldSelector=metadata.name%3Dmaster01&limit=500&resourceVersion=0: dial tcp 192.168.0.106:6443: connect: connection refused
I1002 23:32:36.377368 49157 kubelet_node_status.go:276] Setting node annotation to enable volume controller attach/detach
E1002 23:32:36.380290 49157 kubelet.go:2236] node "master01" not found
E1002 23:32:36.380369 49157 reflector.go:134] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: Get https://192.168.0.106:6443/api/v1/pods?fieldSelector=spec.nodeName%3Dmaster01&limit=500&resourceVersion=0: dial tcp 192.168.0.106:6443: connect: connection refused
E1002 23:32:36.380409 49157 reflector.go:134] k8s.io/kubernetes/pkg/kubelet/kubelet.go:442: Failed to list *v1.Service: Get https://192.168.0.106:6443/api/v1/services?limit=500&resourceVersion=0: dial tcp 192.168.0.106:6443: connect: connection refused
time="2018-10-02T23:32:36+08:00" level=info msg="shim docker-containerd-shim started" address="/containerd-shim/moby/f621eca36ce85e815172c37195ae7ac94172c37195ae7ac929112" pid=49243
I1002 23:32:36.414930 49157 kubelet_node_status.go:70] Attempting to register node master01
E1002 23:32:36.416627 49157 kubelet_node_status.go:92] Unable to register node "master01" with API server: Post https://192.168.0.106:6443/api/v1/nodes: dial tcp 192.168.0.106:6443: connect: connection refused
time="2018-10-02T23:32:36+08:00" level=info msg="shim docker-containerd-shim started" address="/containerd-shim/moby/db3f5acb415581d85aef199bea3f85430d199bea3f854302437c7" pid=49259
E1002 23:32:36.488013 49157 kubelet.go:2236] node "master01" not found
time="2018-10-02T23:32:36+08:00" level=info msg="shim docker-containerd-shim started" address="/containerd-shim/moby/505110c39ed4cd5b3fd4fb8630120174fb8630120174371fa" pid=49275
E1002 23:32:36.588919 49157 kubelet.go:2236] node "master01" not found
E1002 23:32:36.691338 49157 kubelet.go:2236] node "master01" not found
I've tried many times!
The first error message: unable to load client CA file /etc/kubernetes/pki/ca.crt: open /etc/kubernetes/pki/ca.crt: no such file or directory
Hello, a few questions here:
1) does kubeadm init complete and print a bootstrap token?
2) container runtime version?
3) are kubelet and kubeadm at version 1.12?
/priority needs-more-evidence
You have to run systemctl start kubelet before kubeadm init.
I'm hitting the same problem because my machine has fewer than 2 CPU cores.
Same issue.
@javacppc how did you solve it? When I run systemctl start kubelet I get an error code.
Same issue with kubernetes 1.12.2.
@Javacppc how did you solve it?
Same issue.
Same issue.
Hello everyone,
I'm facing the same issue here. When I start the cluster I get the message with the token, but I cannot install Weave Net:
kubectl apply -f "https://cloud.weave.works/k8s/net?k8s-version=$(kubectl version | base64 | tr -d '\n')"
The connection to the server 192.168.56.104:6443 was refused - did you specify the right host or port?
When I go to the logs, I get the message about the node name:
Dec 02 22:27:55 kubemaster5 kubelet[2838]: E1202 22:27:55.128645 2838 kubelet.go:2236] node "kubemaster5" not found
Can anyone shed some light on this?
Thanks!
My issue is solved, and it's actually not a bug: the apiserver failed to start for some reason.
"apiserverκ° μ΄λ€ μ΄μ λ‘ μμνμ§ λͺ»νμ΅λλ€"? μμΈν μλ €μ£Όμ€ μ μλμ??
I solved my problem a few days ago. Upgrade from 1.11.4 -> 1.12.3. My setup:
- api-server: running on a specific virtual interface with its own network (bare metal). kubeadm init/join with the apiserver-advertise-address flag started it on that specific interface, but the sanity/health checks went through the default route of the routing table (the default interface). Editing /etc/kubernetes/manifests/kube-apiserver.yaml so that the bind-address parameter binds to the IP of the virtual interface helped.
- flannel, controller, scheduler: same situation with the network after the pods were created. The DNS deployment failed with connection refused against the api server clusterIP 10.96.0.1:443 (default routing table). I fixed it with the --node-ip flag, which I set on the cluster nodes in /etc/systemd/system/kubelet.service.d/10-kubeadm.conf to the IP of the virtual interface. After this I have ready nodes on 1.12.3. The most useful information came from docker logs + kubectl logs.
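The --node-ip part of the fix above can be sketched as a systemd drop-in. The helper name, drop-in file name, and IP below are mine; substitute the IP of the interface your kubelet should register with:

```shell
# write_node_ip_dropin FILE IP: write a kubelet systemd drop-in that pins the
# node IP, as described above for multi-homed hosts.
write_node_ip_dropin() {
    cat > "$1" <<EOF
[Service]
Environment="KUBELET_EXTRA_ARGS=--node-ip=$2"
EOF
}

# e.g. (hypothetical path and IP):
#   write_node_ip_dropin /etc/systemd/system/kubelet.service.d/20-node-ip.conf 10.10.0.2
#   systemctl daemon-reload && systemctl restart kubelet
```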
Same issue with v1.13.0
Same issue with Kubernetes v1.13.0
CentOS 7
docker-ce 18.06 (latest validated version)
dockerd: active, running
kubelet: active, running
selinux: disabled
firewall: disabled
Error:
kubelet[98023]: E1212 21:10:01.708004 98023 kubelet.go:2266] node "node1" not found
The node is in /etc/hosts, pingable and connectable. This is effectively a single master that also does work (i.e. an untainted node).
Where does K8S look this value up? /etc/hosts?
Happy to troubleshoot and provide additional evidence if needed.
--> does kubeadm init complete and print a bootstrap token?
It ends with a long error:
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [node1 localhost] and IPs [10.10.128.186 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [node1 localhost] and IPs [10.10.128.186 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[kubelet-check] Initial timeout of 40s passed.
Unfortunately, an error has occurred:
timed out waiting for the condition
This error is likely caused by:
- The kubelet is not running
- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
- 'systemctl status kubelet'
- 'journalctl -xeu kubelet'
Additionally, a control plane component may have crashed or exited when started by the container runtime.
To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
Here is one example how you may list all Kubernetes containers running in docker:
- 'docker ps -a | grep kube | grep -v pause'
Once you have found the failing container, you can inspect its logs with:
- 'docker logs CONTAINERID'
error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
Note: none of the commands suggested after the timeout reported anything worth mentioning here.
kubelet and kubeadm version?
---> 1.13.0
kubeadm version: &version.Info{Major:"1", Minor:"13", GitVersion:"v1.13.0", GitCommit:"ddf47ac13c1a9483ea035a79cd7c10005ff21a6d", GitTreeState:"clean", BuildDate:"…01Z", GoVersion:"go1.11.2", Compiler:"gc", Platform:"linux/amd64"}
Also, would it make sense to set a better, clearer and more detailed error message in the kube logs than "node not found"?
Thanks
Same issue...
$ systemctl status kubelet
β kubelet.service - kubelet: The Kubernetes Node Agent
Loaded: loaded (/lib/systemd/system/kubelet.service; enabled; vendor preset: enabled)
Drop-In: /etc/systemd/system/kubelet.service.d
ββ10-kubeadm.conf
Active: active (running) since Fri 2018-12-14 19:05:47 UTC; 2min 2s ago
Docs: https://kubernetes.io/docs/home/
Main PID: 9114 (kubelet)
Tasks: 23 (limit: 4915)
CGroup: /system.slice/kubelet.service
ββ9114 /usr/bin/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --kubeconfig=/etc/kubernetes/kubelet.conf --config=/var/lib/kubelet/config.yaml --cgroup-d
Dec 14 19:07:49 pineview kubelet[9114]: E1214 19:07:49.862262 9114 kuberuntime_manager.go:657] createPodSandbox for pod "kube-scheduler-pineview_kube-system(7f99b6875de942b000954351c4a
Dec 14 19:07:49 pineview kubelet[9114]: E1214 19:07:49.862381 9114 pod_workers.go:186] Error syncing pod 7f99b6875de942b000954351c4ac09b5 ("kube-scheduler-pineview_kube-system(7f99b687
Dec 14 19:07:49 pineview kubelet[9114]: E1214 19:07:49.906855 9114 remote_runtime.go:96] RunPodSandbox from runtime service failed: rpc error: code = Unknown desc = failed to start san
Dec 14 19:07:49 pineview kubelet[9114]: E1214 19:07:49.906944 9114 kuberuntime_sandbox.go:65] CreatePodSandbox for pod "etcd-pineview_kube-system(b7841e48f3e7b81c3cda6872104ba3de)" fai
Dec 14 19:07:49 pineview kubelet[9114]: E1214 19:07:49.906981 9114 kuberuntime_manager.go:657] createPodSandbox for pod "etcd-pineview_kube-system(b7841e48f3e7b81c3cda6872104ba3de)" fa
Dec 14 19:07:49 pineview kubelet[9114]: E1214 19:07:49.907100 9114 pod_workers.go:186] Error syncing pod b7841e48f3e7b81c3cda6872104ba3de ("etcd-pineview_kube-system(b7841e48f3e7b81c3c
Dec 14 19:07:49 pineview kubelet[9114]: E1214 19:07:49.933627 9114 kubelet.go:2236] node "pineview" not found
Dec 14 19:07:50 pineview kubelet[9114]: E1214 19:07:50.033880 9114 kubelet.go:2236] node "pineview" not found
Dec 14 19:07:50 pineview kubelet[9114]: E1214 19:07:50.134064 9114 kubelet.go:2236] node "pineview" not found
Dec 14 19:07:50 pineview kubelet[9114]: E1214 19:07:50.184943 9114 event.go:212] Unable to write event: 'Post https://192.168.1.235:6443/api/v1/namespaces/default/events: dial tcp 192.
Same issue:
Ubuntu 18.04.1 LTS
Kubernetes v1.13.1 (with cri-o 1.11)
Followed the installation instructions on kubernetes.io:
https://kubernetes.io/docs/setup/independent/install-kubeadm/
https://kubernetes.io/docs/setup/cri/#cri-o
systemctl enable kubelet.service
kubeadm init --pod-network-cidr=192.168.0.0/16 --cri-socket=/var/run/crio/crio.sock
/etc/hosts
127.0.0.1 localhost
::1 localhost ip6-localhost ip6-loopback
ff02::1 ip6-allnodes
ff02::2 ip6-allrouters
127.0.1.1 master01.mydomain.tld master01
::1 master01.mydomain.tld master01
/etc/hostname
systemctl status kubelet
β kubelet.service - kubelet: The Kubernetes Node Agent
Loaded: loaded (/lib/systemd/system/kubelet.service; enabled; vendor preset: enabled)
Drop-In: /etc/systemd/system/kubelet.service.d
ββ10-kubeadm.conf
Active: active (running) since Tue 2018-12-18 16:19:54 CET; 20min ago
Docs: https://kubernetes.io/docs/home/
Main PID: 10148 (kubelet)
Tasks: 21 (limit: 2173)
CGroup: /system.slice/kubelet.service
ββ10148 /usr/bin/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --kubeconfig=/etc/kubernetes/kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=/var/run/crio/crio.sock --resolv-conf=/run/systemd/resolve/resolv.conf
Dec 18 16:40:52 master01 kubelet[10148]: E1218 16:40:52.795313 10148 kubelet.go:2266] node "master01" not found
Dec 18 16:40:52 master01 kubelet[10148]: E1218 16:40:52.896277 10148 kubelet.go:2266] node "master01" not found
Dec 18 16:40:52 master01 kubelet[10148]: E1218 16:40:52.997864 10148 kubelet.go:2266] node "master01" not found
Dec 18 16:40:53 master01 kubelet[10148]: E1218 16:40:53.098927 10148 kubelet.go:2266] node "master01" not found
Dec 18 16:40:53 master01 kubelet[10148]: E1218 16:40:53.200355 10148 kubelet.go:2266] node "master01" not found
Dec 18 16:40:53 master01 kubelet[10148]: E1218 16:40:53.281586 10148 reflector.go:134] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: Get https://192.168.178.27:6443/api/v1/pods?fieldSelector=spec.nodeName%3Dmaster01limit=500&resourceVersion=0: dial tcp 192.168.178.27:6443: connect: connection refused
Dec 18 16:40:53 master01 kubelet[10148]: E1218 16:40:53.282143 10148 reflector.go:134] k8s.io/kubernetes/pkg/kubelet/kubelet.go:444: Failed to list *v1.Service: Get https://192.168.178.27:6443/api/v1/services?limit=500&resourceVersion=0: dial tcp 192.168.178.27:6443: connect: connection refused
Dec 18 16:40:53 master01 kubelet[10148]: E1218 16:40:53.283945 10148 reflector.go:134] k8s.io/kubernetes/pkg/kubelet/kubelet.go:453: Failed to list *v1.Node: Get https://192.168.178.27:6443/api/v1/nodes?fieldSelector=metadata.name%3Dmaster01limit=500&resourceVersion=0: dial tcp 192.168.178.27:6443: connect: connection refused
Dec 18 16:40:53 master01 kubelet[10148]: E1218 16:40:53.301468 10148 kubelet.go:2266] node "master01" not found
Dec 18 16:40:53 master01 kubelet[10148]: E1218 16:40:53.402256 10148 kubelet.go:2266] node "master01" not found
@fhemberger That fixed my problem. I had installed Docker using snap. After removing it and reinstalling with apt, kubeadm worked properly.
@cjbottaro I'm not using Docker at all, only cri-o.
Same issue with v1.13.1
If you're using systemd and cri-o, you have to set it as the cgroup driver in /var/lib/kubelet/config.yaml (or pass the snippet below as part of kubeadm init --config=config.yaml):
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
cgroupDriver: systemd
You'll see this in the kubelet log if that's the problem:
remote_runtime.go:96] RunPodSandbox from runtime service failed: rpc error: code = Unknown desc = cri-o configured with systemd cgroup manager, but did not receive slice as parent: /kubepods/besteffort/…
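A quick way to spot this mismatch is to compare what the runtime and the kubelet each think the cgroup driver is. A sketch: the helper name is mine, and the two paths in the usage note are the cri-o and kubelet defaults:

```shell
# cgroup_drivers CRIO_CONF KUBELET_CONF: print the runtime's cgroup_manager
# and the kubelet's cgroupDriver, one per line; the two must match.
cgroup_drivers() {
    sed -n 's/^cgroup_manager *= *"\(.*\)"/\1/p' "$1"
    sed -n 's/^cgroupDriver: *//p' "$2"
}

# On a live host:
#   cgroup_drivers /etc/crio/crio.conf /var/lib/kubelet/config.yaml
# Two different lines in the output means the mismatch described above.
```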
I ran into the same problem today.
I fixed it by removing /var/lib/kubelet/ (rm -rf) and reinstalling.
@JishanXing thank you! That also solved my problem running on Raspbian Stretch lite.
I fixed it by removing /etc/systemd/system/kubelet.service.d/20-etcd-service-manager.conf.
It's better to use the kubeadm reset command.
@fhemberger That's the way to solve it. Same question here - thanks.
I ran into the same issue when upgrading k8s from 1.13.3 to 1.13.4...
Solved it after editing /etc/kubernetes/manifests/kube-scheduler.yaml. In the image version:
image: k8s.gcr.io/kube-scheduler:v1.13.3 ==> image: k8s.gcr.io/kube-scheduler:v1.13.4
Same for kube-controller-manager.yaml and kube-apiserver.yaml.
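That manifest edit can be scripted. A sketch (the helper name is mine, the versions are the ones from this comment) that rewrites the image tag in a static Pod manifest; the kubelet watches the manifests directory and restarts the Pod when the file changes:

```shell
# bump_image_tag FILE OLD_TAG NEW_TAG: replace the tag on 'image:' lines.
bump_image_tag() {
    sed -i "s|\(image: .*\):$2|\1:$3|" "$1"
}

# e.g. for the upgrade described above:
#   for f in kube-scheduler kube-controller-manager kube-apiserver; do
#       bump_image_tag /etc/kubernetes/manifests/$f.yaml v1.13.3 v1.13.4
#   done
```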
The best approach is to add the --image-repository registry.aliyuncs.com/google_containers option. My k8s version is 1.14.0, docker version: 18.09.2.
` kubeadm init --image-repository registry.aliyuncs.com/google_containers --kubernetes-version v1.14.0 --pod-network-cidr=192.168.0.0/16
[init] Using Kubernetes version: v1.14.0
[preflight] Running pre-flight checks
[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Activating the kubelet service
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [jin-virtual-machine kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 192.168.232.130]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [jin-virtual-machine localhost] and IPs [192.168.232.130 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [jin-virtual-machine localhost] and IPs [192.168.232.130 127.0.0.1 ::1]
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[apiclient] All control plane components are healthy after 17.004356 seconds
[upload-config] storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.14" in namespace kube-system with the configuration for the kubelets in the cluster
[upload-certs] Skipping phase. Please see --experimental-upload-certs
[mark-control-plane] Marking the node jin-virtual-machine as control-plane by adding the label "node-role.kubernetes.io/master=''"
[mark-control-plane] Marking the node jin-virtual-machine as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[bootstrap-token] Using token: xucir0.o4kzo3qqjyjnzphl
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] creating the "cluster-info" ConfigMap in the "kube-public" namespace
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy
Your Kubernetes control-plane has initialized successfully!
To start using your cluster, you need to run the following as a regular user:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
https://kubernetes.io/docs/concepts/cluster-administration/addons/
Then you can join any number of worker nodes by running the following on each as root:
kubeadm join 192.168.232.130:6443 --token xucir0.o4kzo3qqjyjnzphl
--discovery-token-ca-cert-hash sha256:022048b22926a2cb2f8295ce2e3f1f6fa7ffe1098bc116f7d304a26bcb78656
`
I had the same issue with kubernetes v1.14.1 and cri-o v1.14.0 on a GCP Ubuntu 18.04 VM. Switching to docker solved it for me. Reference: https://github.com/cri-o/cri-o/issues/2357
My problem was different cgroup drivers. CRIO uses systemd by default, while kubelet uses cgroupfs by default.
cat /etc/crio/crio.conf | grep cgroup
# cgroup_manager is the cgroup management implementation to be used
cgroup_manager = "systemd"
If that's your case, see https://kubernetes.io/docs/setup/independent/install-kubeadm/#configure-cgroup-driver-used-by-kubelet-on-master-node
Just write the file:
echo "KUBELET_EXTRA_ARGS=--cgroup-driver=systemd" > /etc/default/kubelet
Then run kubeadm init. Or change cgroup_manager to cgroupfs.
Unlike docker, cri-o and containerd are a bit trickier to manage in terms of cgroup driver detection, but there are plans to support this in kubeadm.
docker is already handled.
So clearly there are workarounds, but other than resetting the cluster with $(yes | kubeadm reset) there is no real solution in my opinion!
Changing the image repository works for me, but it's not a good solution:
--image-repository registry.aliyuncs.com/google_containers
In my case this worked:
sed -i 's/cgroup-driver=systemd/cgroup-driver=cgroupfs/g' /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
I had the same problem. I ran kubeadm init --config=init-config.yaml and it failed; this file had been generated by kubeadm. The advertiseAddress default in the file is 1.2.3.4, which makes the etcd container fail to start. After changing it to 127.0.0.1, the etcd container started successfully and kubeadm init succeeded.
To troubleshoot this kind of problem, use docker ps -a to list all containers and check whether some of them have exited, then use docker logs CONTAINER_ID to see what happened. Hope this helps.
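The triage steps described above can be wrapped in a small filter. A sketch (the helper name is mine) that picks the exited Kubernetes containers out of docker ps output, so docker logs can then be run on each ID:

```shell
# exited_kube_ids FILE: given output of
#   docker ps -a --format '{{.ID}}|{{.Status}}|{{.Names}}'
# print the IDs of exited containers whose name starts with k8s_.
exited_kube_ids() {
    awk -F'|' '$2 ~ /^Exited/ && $3 ~ /^k8s_/ { print $1 }' "$1"
}

# On a live host:
#   docker ps -a --format '{{.ID}}|{{.Status}}|{{.Names}}' > /tmp/ps.txt
#   for id in $(exited_kube_ids /tmp/ps.txt); do docker logs --tail 20 "$id"; done
```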
Guys, any solutions? Same problem here, but with k3s.
@MateusMac you should open a bug report against k3s too.
I've been working for a week trying to get kubeadm running.
Ubuntu 18.04
docker 18.06-2-ce
k8s 1.15.1
sudo kubeadm init --pod-network-cidr=10.244.0.0/16
Fails with:
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[kubelet-check] Initial timeout of 40s passed.
Unfortunately, an error has occurred:
timed out waiting for the condition
This error is likely caused by:
- The kubelet is not running
- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
- 'systemctl status kubelet'
- 'journalctl -xeu kubelet'
Additionally, a control plane component may have crashed or exited when started by the container runtime.
To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
Here is one example how you may list all Kubernetes containers running in docker:
- 'docker ps -a | grep kube | grep -v pause'
Once you have found the failing container, you can inspect its logs with:
- 'docker logs CONTAINERID'
error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
The kubelet log likewise shows over and over that it cannot find the node:
warproot@warp02:~$ systemctl status kubelet
β kubelet.service - kubelet: The Kubernetes Node Agent
Loaded: loaded (/lib/systemd/system/kubelet.service; enabled; vendor preset: enabled)
Drop-In: /etc/systemd/system/kubelet.service.d
ββ10-kubeadm.conf
Active: active (running) since Sun 2019-08-04 18:22:26 AEST; 5min ago
Docs: https://kubernetes.io/docs/home/
Main PID: 12569 (kubelet)
Tasks: 27 (limit: 9830)
CGroup: /system.slice/kubelet.service
ββ12569 /usr/bin/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --kubeconfig=/etc/kubernetes/kubelet.conf --config=/var/lib/kubelet/config.yaml --cgroup-dri
Aug 04 18:28:03 warp02 kubelet[12569]: E0804 18:28:03.322762 12569 kuberuntime_sandbox.go:68] CreatePodSandbox for pod "kube-scheduler-warp02_kube-system(ecae9d12d3610192347be3d1aa5aa552)"
Aug 04 18:28:03 warp02 kubelet[12569]: E0804 18:28:03.322806 12569 kuberuntime_manager.go:692] createPodSandbox for pod "kube-scheduler-warp02_kube-system(ecae9d12d3610192347be3d1aa5aa552)
Aug 04 18:28:03 warp02 kubelet[12569]: E0804 18:28:03.322872 12569 pod_workers.go:190] Error syncing pod ecae9d12d3610192347be3d1aa5aa552 ("kube-scheduler-warp02_kube-system(ecae9d12d36101
Aug 04 18:28:03 warp02 kubelet[12569]: E0804 18:28:03.373094 12569 kubelet.go:2248] node "warp02" not found
Aug 04 18:28:03 warp02 kubelet[12569]: E0804 18:28:03.375587 12569 reflector.go:125] k8s.io/client-go/informers/factory.go:133: Failed to list *v1beta1.CSIDriver: Get https://10.1.1.4:6443
Aug 04 18:28:03 warp02 kubelet[12569]: E0804 18:28:03.473295 12569 kubelet.go:2248] node "warp02" not found
Aug 04 18:28:03 warp02 kubelet[12569]: E0804 18:28:03.573567 12569 kubelet.go:2248] node "warp02" not found
Aug 04 18:28:03 warp02 kubelet[12569]: E0804 18:28:03.575495 12569 reflector.go:125] k8s.io/client-go/informers/factory.go:133: Failed to list *v1beta1.RuntimeClass: Get https://10.1.1.4:6
Aug 04 18:28:03 warp02 kubelet[12569]: E0804 18:28:03.590886 12569 event.go:249] Unable to write event: 'Post https://10.1.1.4:6443/api/v1/namespaces/default/events: dial tcp 10.1.1.4:6443
Aug 04 18:28:03 warp02 kubelet[12569]: E0804 18:28:03.673767 12569 kubelet.go:2248] node "warp02" not found
I should note that these bare-metal systems have several NICs:
warproot@warp02:~$ ifconfig
docker0: flags=4099<UP,BROADCAST,MULTICAST> mtu 1500
inet 172.17.0.1 netmask 255.255.0.0 broadcast 172.17.255.255
inet6 fe80::42:feff:fe65:37f prefixlen 64 scopeid 0x20<link>
ether 02:42:fe:65:03:7f txqueuelen 0 (Ethernet)
RX packets 0 bytes 0 (0.0 B)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 6 bytes 516 (516.0 B)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
enp35s0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
inet 10.0.0.2 netmask 255.255.255.0 broadcast 10.0.0.255
inet6 fe80::32b5:c2ff:fe02:410b prefixlen 64 scopeid 0x20<link>
ether 30:b5:c2:02:41:0b txqueuelen 1000 (Ethernet)
RX packets 46 bytes 5821 (5.8 KB)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 70 bytes 7946 (7.9 KB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
enp6s0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
inet 10.1.1.4 netmask 255.255.255.0 broadcast 10.1.1.255
inet6 fd42:59ff:1166:0:25a7:3617:fee6:424e prefixlen 64 scopeid 0x0<global>
inet6 fe80::1a03:73ff:fe44:5694 prefixlen 64 scopeid 0x20<link>
inet6 fd9e:fdd6:9e01:0:1a03:73ff:fe44:5694 prefixlen 64 scopeid 0x0<global>
ether 18:03:73:44:56:94 txqueuelen 1000 (Ethernet)
RX packets 911294 bytes 1361047672 (1.3 GB)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 428759 bytes 29198065 (29.1 MB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
device interrupt 17
ib0: flags=4099<UP,BROADCAST,MULTICAST> mtu 4092
unspec A0-00-02-10-FE-80-00-00-00-00-00-00-00-00-00-00 txqueuelen 256 (UNSPEC)
RX packets 0 bytes 0 (0.0 B)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 0 bytes 0 (0.0 B)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
ib1: flags=4099<UP,BROADCAST,MULTICAST> mtu 4092
unspec A0-00-02-20-FE-80-00-00-00-00-00-00-00-00-00-00 txqueuelen 256 (UNSPEC)
RX packets 0 bytes 0 (0.0 B)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 0 bytes 0 (0.0 B)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
lo: flags=73<UP,LOOPBACK,RUNNING> mtu 65536
inet 127.0.0.1 netmask 255.0.0.0
inet6 ::1 prefixlen 128 scopeid 0x10<host>
loop txqueuelen 1000 (Local Loopback)
RX packets 25473 bytes 1334779 (1.3 MB)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 25473 bytes 1334779 (1.3 MB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
I don't know if it's the problem, but I have the /etc/hosts file set up as follows:
warproot@warp02:~$ cat /etc/hosts
127.0.0.1 localhost.localdomain localhost
::1 localhost6.localdomain6 localhost6
# add our host name
10.1.1.4 warp02 warp02.ad.xxx.com
# The following lines are desirable for IPv6 capable hosts
::1 localhost ip6-localhost ip6-loopback
fe00::0 ip6-localnet
ff02::1 ip6-allnodes
ff02::2 ip6-allrouters
ff02::3 ip6-allhosts
# add our ipv6 host name
fd42:59ff:1166:0:25a7:3617:fee6:424e warp02 warp02.ad.xxx.com
warproot@warp02:~$
So seeing NIC 10.1.1.4 as "the network" for k8s is the intended setup (in my opinion).
nslookup of the node name seems to work fine:
warproot@warp02:~$ nslookup warp02
Server: 127.0.0.53
Address: 127.0.0.53#53
Non-authoritative answer:
Name: warp02.ad.xxx.com
Address: 10.1.1.4
Name: warp02.ad.xxx.com
Address: fd42:59ff:1166:0:25a7:3617:fee6:424e
warproot@warp02:~$
I've been through the kubeadm install docs several times.
Bottom line: it just can't find the network.
Baffled.
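On a multi-homed host like this, kubeadm advertises the API server on the interface of the default route, which may not be the NIC you intend. One way to pin it down explicitly; this is a sketch, the helper name is mine, and the interface and addresses are the ones from this comment:

```shell
# iface_ip FILE IFACE: pull the IPv4 address of IFACE from saved
# 'ip -o -4 addr show' output.
iface_ip() {
    awk -v ifc="$2" '$3 == "inet" && $2 == ifc { sub(/\/.*/, "", $4); print $4 }' "$1"
}

# On the host above, something like:
#   ip -o -4 addr show > /tmp/addrs.txt
#   ADDR=$(iface_ip /tmp/addrs.txt enp6s0)
#   kubeadm init --apiserver-advertise-address="$ADDR" ...
```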
For version 1.15.3 I was able to fix this on Ubuntu 18.04 by adding the following to my kubeadm config and then running kubeadm init:
kind: InitConfiguration
nodeRegistration:
  kubeletExtraArgs:
    cgroup-driver: "systemd"
I have the same issue with version 1.15.3 on Ubuntu 18.04.
@kris-nova I'd really appreciate it if you could point out the location of this config file :-)
Update: I don't know why, but it now works without changing the config!
(Note: not sure whether it's related, but I updated docker from v.19.03.1 to v.19.03.2 before retrying kubeadm init.)
I got the following error while running kubeadm init, i.e. node "nodexx" not found:
[root@node01 ~]# journalctl -xeu kubelet
Nov 07 10:34:02 node01 kubelet[2968]: E1107 10:34:02.682095 2968 kubelet.go:2267] node "node01" not found
Nov 07 10:34:02 node01 kubelet[2968]: E1107 10:34:02.782554 2968 kubelet.go:2267] node "node01" not found
Nov 07 10:34:02 node01 kubelet[2968]: E1107 10:34:02.829142 2968 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1beta1.CSIDriver
Nov 07 10:34:02 node01 kubelet[2968]: E1107 10:34:02.884058 2968 kubelet.go:2267] node "node01" not found
Nov 07 10:34:02 node01 kubelet[2968]: E1107 10:34:02.984510 2968 kubelet.go:2267] node "node01" not found
Nov 07 10:34:03 node01 kubelet[2968]: E1107 10:34:03.030884 2968 reflector.go:123]
Solution:
Same issue.
In my case it happened because the master node's clock had drifted after a power outage.
I fixed it by running:
# Correcting the time as mentioned here https://askubuntu.com/a/254846/861548
sudo service ntp stop
sudo ntpdate -s time.nist.gov
sudo service ntp start
# Then restarting the kubelet
sudo systemctl restart kubelet.service
# I also had to run daemon-reload as I got the following warning
# Warning: The unit file, source configuration file or drop-ins of kubelet.service changed on disk. Run 'systemctl daemon-reload' to reload units.
sudo systemctl daemon-reload
# I also made another restart, which I don't know whether needed or not
sudo systemctl restart kubelet.service
I had the same node "xxxx" not found problem. I checked the container logs with docker logs container_id, found that the apiserver was trying to connect to 127.0.0.1:2379, edited /etc/kubernetes/manifests/etcd.yaml, restarted, and the problem was solved.