Kubeadm: runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized

Created on 02 Aug 2018  Β·  65 comments  Β·  Source: kubernetes/kubeadm

Is this a BUG REPORT or FEATURE REQUEST?

BUG REPORT

  • 이 κ°€μ΄λ“œλ₯Ό λ”°λžμŠ΅λ‹ˆλ‹€.
  • 96 CPU ARM64 μ„œλ²„μ— λ§ˆμŠ€ν„° λ…Έλ“œλ₯Ό μ„€μΉ˜ν–ˆμŠ΅λ‹ˆλ‹€.
  • OSλŠ” Ubuntu 18.04 LTSμž…λ‹ˆλ‹€. apt-get update/upgrade λ°”λ‘œ 뒀에 .
  • 쀑고 kubeadm init --pod-network-cidr=10.244.0.0/16 . 그런 λ‹€μŒ μ œμ•ˆλœ λͺ…령을 μ‹€ν–‰ν–ˆμŠ΅λ‹ˆλ‹€.
  • μ„ νƒλœ ν”Œλž€λ„¬ ν¬λ“œ λ„€νŠΈμ›Œν¬:

    • sysctl net.bridge.bridge-nf-call-iptables=1 .

    • wget https://raw.githubusercontent.com/coreos/flannel/v0.10.0/Documentation/kube-flannel.yml .

    • vim kube-flannel.yml , amd64 λ₯Ό arm64

    • kubectl apply -f kube-flannel.yml .

    • kubectl get pods --all-namespaces :

NAMESPACE     NAME                                       READY     STATUS    RESTARTS   AGE
kube-system   coredns-78fcdf6894-ls44z                   1/1       Running   0          20m
kube-system   coredns-78fcdf6894-njnnt                   1/1       Running   0          20m
kube-system   etcd-devstats.team.io                      1/1       Running   0          20m
kube-system   kube-apiserver-devstats.team.io            1/1       Running   0          20m
kube-system   kube-controller-manager-devstats.team.io   1/1       Running   0          20m
kube-system   kube-flannel-ds-v4t8s                      1/1       Running   0          13m
kube-system   kube-proxy-5825g                           1/1       Running   0          20m
kube-system   kube-scheduler-devstats.team.io            1/1       Running   0          20m
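
For reference, a consolidated sketch of the setup steps above on the ARM64 master, assuming the v0.10.0 manifest from the guide and using sed in place of the manual vim edit (illustrative only, not a verbatim transcript):

# Prepare the bridge sysctl required by flannel.
sysctl net.bridge.bridge-nf-call-iptables=1

# Initialize the control plane with the flannel pod CIDR.
kubeadm init --pod-network-cidr=10.244.0.0/16

# The "suggested commands" printed by kubeadm init.
mkdir -p $HOME/.kube
sudo cp /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

# Fetch the flannel manifest and switch the image arch from amd64 to arm64.
wget https://raw.githubusercontent.com/coreos/flannel/v0.10.0/Documentation/kube-flannel.yml
sed -i 's/amd64/arm64/g' kube-flannel.yml
kubectl apply -f kube-flannel.yml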

그런 λ‹€μŒ kubeadm init 좜λ ₯을 μ‚¬μš©ν•˜μ—¬ 두 개의 AMD64 λ…Έλ“œλ₯Ό κ²°ν•©ν–ˆμŠ΅λ‹ˆλ‹€.
첫 번째 λ…Έλ“œ:

[preflight] running pre-flight checks
    [WARNING RequiredIPVSKernelModulesAvailable]: the IPVS proxier will not be used, because the following required kernel modules are not loaded: [ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh] or no builtin kernel ipvs support: map[ip_vs:{} ip_vs_rr:{} ip_vs_wrr:{} ip_vs_sh:{} nf_conntrack_ipv4:{}]
you can solve this problem with following methods:
 1. Run 'modprobe -- ' to load missing kernel modules;
2. Provide the missing builtin kernel ipvs support

I0802 10:26:49.987467   16652 kernel_validator.go:81] Validating kernel version
I0802 10:26:49.987709   16652 kernel_validator.go:96] Validating kernel config
    [WARNING SystemVerification]: docker version is greater than the most recently validated version. Docker version: 17.12.1-ce. Max validated version: 17.03
[discovery] Trying to connect to API Server "147.75.97.234:6443"
[discovery] Created cluster-info discovery client, requesting info from "https://147.75.97.234:6443"
[discovery] Requesting info from "https://147.75.97.234:6443" again to validate TLS against the pinned public key
[discovery] Cluster info signature and contents are valid and TLS certificate validates against pinned roots, will use API Server "147.75.97.234:6443"
[discovery] Successfully established connection with API Server "147.75.97.234:6443"
[kubelet] Downloading configuration for the kubelet from the "kubelet-config-1.11" ConfigMap in the kube-system namespace
[kubelet] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[preflight] Activating the kubelet service
[tlsbootstrap] Waiting for the kubelet to perform the TLS Bootstrap...
[patchnode] Uploading the CRI Socket information "/var/run/dockershim.sock" to the Node API object "cncftest.io" as an annotation

This node has joined the cluster:
* Certificate signing request was sent to master and a response
  was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the master to see this node join the cluster.

2nd node:

[preflight] running pre-flight checks
    [WARNING RequiredIPVSKernelModulesAvailable]: the IPVS proxier will not be used, because the following required kernel modules are not loaded: [ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh] or no builtin kernel ipvs support: map[ip_vs_rr:{} ip_vs_wrr:{} ip_vs_sh:{} nf_conntrack_ipv4:{} ip_vs:{}]
you can solve this problem with following methods:
 1. Run 'modprobe -- ' to load missing kernel modules;
2. Provide the missing builtin kernel ipvs support

I0802 10:26:58.913060   38617 kernel_validator.go:81] Validating kernel version
I0802 10:26:58.913222   38617 kernel_validator.go:96] Validating kernel config
    [WARNING SystemVerification]: docker version is greater than the most recently validated version. Docker version: 17.12.1-ce. Max validated version: 17.03
[discovery] Trying to connect to API Server "147.75.97.234:6443"
[discovery] Created cluster-info discovery client, requesting info from "https://147.75.97.234:6443"
[discovery] Requesting info from "https://147.75.97.234:6443" again to validate TLS against the pinned public key
[discovery] Cluster info signature and contents are valid and TLS certificate validates against pinned roots, will use API Server "147.75.97.234:6443"
[discovery] Successfully established connection with API Server "147.75.97.234:6443"
[kubelet] Downloading configuration for the kubelet from the "kubelet-config-1.11" ConfigMap in the kube-system namespace
[kubelet] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[preflight] Activating the kubelet service
[tlsbootstrap] Waiting for the kubelet to perform the TLS Bootstrap...
[patchnode] Uploading the CRI Socket information "/var/run/dockershim.sock" to the Node API object "devstats.cncf.io" as an annotation

This node has joined the cluster:
* Certificate signing request was sent to master and a response
  was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the master to see this node join the cluster.

But kubectl get nodes on the master:

NAME               STATUS     ROLES     AGE       VERSION
cncftest.io        NotReady   <none>    7m        v1.11.1
devstats.cncf.io   NotReady   <none>    7m        v1.11.1
devstats.team.io   Ready      master    21m       v1.11.1

And kubectl describe nodes (the master is devstats.team.io , the nodes are cncftest.io and devstats.cncf.io ):

Name:               cncftest.io
Roles:              <none>
Labels:             beta.kubernetes.io/arch=amd64
                    beta.kubernetes.io/os=linux
                    kubernetes.io/hostname=cncftest.io
Annotations:        kubeadm.alpha.kubernetes.io/cri-socket=/var/run/dockershim.sock
                    node.alpha.kubernetes.io/ttl=0
                    volumes.kubernetes.io/controller-managed-attach-detach=true
CreationTimestamp:  Thu, 02 Aug 2018 10:26:53 +0000
Taints:             <none>
Unschedulable:      false
Conditions:
  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
  ----             ------  -----------------                 ------------------                ------                       -------
  OutOfDisk        False   Thu, 02 Aug 2018 10:34:51 +0000   Thu, 02 Aug 2018 10:26:52 +0000   KubeletHasSufficientDisk     kubelet has sufficient disk space available
  MemoryPressure   False   Thu, 02 Aug 2018 10:34:51 +0000   Thu, 02 Aug 2018 10:26:52 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
  DiskPressure     False   Thu, 02 Aug 2018 10:34:51 +0000   Thu, 02 Aug 2018 10:26:52 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
  PIDPressure      False   Thu, 02 Aug 2018 10:34:51 +0000   Thu, 02 Aug 2018 10:26:52 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
  Ready            False   Thu, 02 Aug 2018 10:34:51 +0000   Thu, 02 Aug 2018 10:26:52 +0000   KubeletNotReady              runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized
Addresses:
  InternalIP:  147.75.205.79
  Hostname:    cncftest.io
Capacity:
 cpu:                48
 ephemeral-storage:  459266000Ki
 hugepages-1Gi:      0
 hugepages-2Mi:      0
 memory:             264047752Ki
 pods:               110
Allocatable:
 cpu:                48
 ephemeral-storage:  423259544900
 hugepages-1Gi:      0
 hugepages-2Mi:      0
 memory:             263945352Ki
 pods:               110
System Info:
 Machine ID:                 d1c2fc94ee6d41ca967c4d43504af50c
 System UUID:                4C4C4544-0052-3310-804A-B7C04F4E4432
 Boot ID:                    d87670d9-251e-42a5-90c5-5d63059f03ab
 Kernel Version:             4.15.0-22-generic
 OS Image:                   Ubuntu 18.04.1 LTS
 Operating System:           linux
 Architecture:               amd64
 Container Runtime Version:  docker://17.12.1-ce
 Kubelet Version:            v1.11.1
 Kube-Proxy Version:         v1.11.1
PodCIDR:                     10.244.1.0/24
Non-terminated Pods:         (0 in total)
  Namespace                  Name    CPU Requests  CPU Limits  Memory Requests  Memory Limits
  ---------                  ----    ------------  ----------  ---------------  -------------
Allocated resources:
  (Total limits may be over 100 percent, i.e., overcommitted.)
  Resource  Requests  Limits
  --------  --------  ------
  cpu       0 (0%)    0 (0%)
  memory    0 (0%)    0 (0%)
Events:
  Type    Reason                   Age              From                  Message
  ----    ------                   ----             ----                  -------
  Normal  Starting                 8m               kubelet, cncftest.io  Starting kubelet.
  Normal  NodeHasSufficientDisk    8m (x2 over 8m)  kubelet, cncftest.io  Node cncftest.io status is now: NodeHasSufficientDisk
  Normal  NodeHasSufficientMemory  8m (x2 over 8m)  kubelet, cncftest.io  Node cncftest.io status is now: NodeHasSufficientMemory
  Normal  NodeHasNoDiskPressure    8m (x2 over 8m)  kubelet, cncftest.io  Node cncftest.io status is now: NodeHasNoDiskPressure
  Normal  NodeHasSufficientPID     8m (x2 over 8m)  kubelet, cncftest.io  Node cncftest.io status is now: NodeHasSufficientPID
  Normal  NodeAllocatableEnforced  8m               kubelet, cncftest.io  Updated Node Allocatable limit across pods


Name:               devstats.cncf.io
Roles:              <none>
Labels:             beta.kubernetes.io/arch=amd64
                    beta.kubernetes.io/os=linux
                    kubernetes.io/hostname=devstats.cncf.io
Annotations:        kubeadm.alpha.kubernetes.io/cri-socket=/var/run/dockershim.sock
                    node.alpha.kubernetes.io/ttl=0
                    volumes.kubernetes.io/controller-managed-attach-detach=true
CreationTimestamp:  Thu, 02 Aug 2018 10:27:00 +0000
Taints:             <none>
Unschedulable:      false
Conditions:
  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
  ----             ------  -----------------                 ------------------                ------                       -------
  OutOfDisk        False   Thu, 02 Aug 2018 10:34:51 +0000   Thu, 02 Aug 2018 10:27:00 +0000   KubeletHasSufficientDisk     kubelet has sufficient disk space available
  MemoryPressure   False   Thu, 02 Aug 2018 10:34:51 +0000   Thu, 02 Aug 2018 10:27:00 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
  DiskPressure     False   Thu, 02 Aug 2018 10:34:51 +0000   Thu, 02 Aug 2018 10:27:00 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
  PIDPressure      False   Thu, 02 Aug 2018 10:34:51 +0000   Thu, 02 Aug 2018 10:27:00 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
  Ready            False   Thu, 02 Aug 2018 10:34:51 +0000   Thu, 02 Aug 2018 10:27:00 +0000   KubeletNotReady              runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized
Addresses:
  InternalIP:  147.75.78.47
  Hostname:    devstats.cncf.io
Capacity:
 cpu:                48
 ephemeral-storage:  142124052Ki
 hugepages-1Gi:      0
 hugepages-2Mi:      0
 memory:             264027220Ki
 pods:               110
Allocatable:
 cpu:                48
 ephemeral-storage:  130981526107
 hugepages-1Gi:      0
 hugepages-2Mi:      0
 memory:             263924820Ki
 pods:               110
System Info:
 Machine ID:                 d1c2fc94ee6d41ca967c4d43504af50c
 System UUID:                00000000-0000-0000-0000-0CC47AF37CF2
 Boot ID:                    f257b606-5da2-43fd-8782-0aa4484037f4
 Kernel Version:             4.15.0-20-generic
 OS Image:                   Ubuntu 18.04.1 LTS
 Operating System:           linux
 Architecture:               amd64
 Container Runtime Version:  docker://17.12.1-ce
 Kubelet Version:            v1.11.1
 Kube-Proxy Version:         v1.11.1
PodCIDR:                     10.244.2.0/24
Non-terminated Pods:         (0 in total)
  Namespace                  Name    CPU Requests  CPU Limits  Memory Requests  Memory Limits
  ---------                  ----    ------------  ----------  ---------------  -------------
Allocated resources:
  (Total limits may be over 100 percent, i.e., overcommitted.)
  Resource  Requests  Limits
  --------  --------  ------
  cpu       0 (0%)    0 (0%)
  memory    0 (0%)    0 (0%)
Events:
  Type    Reason                   Age   From                       Message
  ----    ------                   ----  ----                       -------
  Normal  Starting                 7m    kubelet, devstats.cncf.io  Starting kubelet.
  Normal  NodeHasSufficientDisk    7m    kubelet, devstats.cncf.io  Node devstats.cncf.io status is now: NodeHasSufficientDisk
  Normal  NodeHasSufficientMemory  7m    kubelet, devstats.cncf.io  Node devstats.cncf.io status is now: NodeHasSufficientMemory
  Normal  NodeHasNoDiskPressure    7m    kubelet, devstats.cncf.io  Node devstats.cncf.io status is now: NodeHasNoDiskPressure
  Normal  NodeHasSufficientPID     7m    kubelet, devstats.cncf.io  Node devstats.cncf.io status is now: NodeHasSufficientPID
  Normal  NodeAllocatableEnforced  7m    kubelet, devstats.cncf.io  Updated Node Allocatable limit across pods


Name:               devstats.team.io
Roles:              master
Labels:             beta.kubernetes.io/arch=arm64
                    beta.kubernetes.io/os=linux
                    kubernetes.io/hostname=devstats.team.io
                    node-role.kubernetes.io/master=
Annotations:        flannel.alpha.coreos.com/backend-data={"VtepMAC":"9a:7f:81:2c:4e:16"}
                    flannel.alpha.coreos.com/backend-type=vxlan
                    flannel.alpha.coreos.com/kube-subnet-manager=true
                    flannel.alpha.coreos.com/public-ip=147.75.97.234
                    kubeadm.alpha.kubernetes.io/cri-socket=/var/run/dockershim.sock
                    node.alpha.kubernetes.io/ttl=0
                    volumes.kubernetes.io/controller-managed-attach-detach=true
CreationTimestamp:  Thu, 02 Aug 2018 10:12:56 +0000
Taints:             node-role.kubernetes.io/master:NoSchedule
Unschedulable:      false
Conditions:
  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
  ----             ------  -----------------                 ------------------                ------                       -------
  OutOfDisk        False   Thu, 02 Aug 2018 10:34:49 +0000   Thu, 02 Aug 2018 10:12:56 +0000   KubeletHasSufficientDisk     kubelet has sufficient disk space available
  MemoryPressure   False   Thu, 02 Aug 2018 10:34:49 +0000   Thu, 02 Aug 2018 10:12:56 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
  DiskPressure     False   Thu, 02 Aug 2018 10:34:49 +0000   Thu, 02 Aug 2018 10:12:56 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
  PIDPressure      False   Thu, 02 Aug 2018 10:34:49 +0000   Thu, 02 Aug 2018 10:12:56 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
  Ready            True    Thu, 02 Aug 2018 10:34:49 +0000   Thu, 02 Aug 2018 10:21:07 +0000   KubeletReady                 kubelet is posting ready status. AppArmor enabled
Addresses:
  InternalIP:  147.75.97.234
  Hostname:    devstats.team.io
Capacity:
 cpu:                96
 ephemeral-storage:  322988584Ki
 hugepages-2Mi:      0
 memory:             131731468Ki
 pods:               110
Allocatable:
 cpu:                96
 ephemeral-storage:  297666278522
 hugepages-2Mi:      0
 memory:             131629068Ki
 pods:               110
System Info:
 Machine ID:                 5eaa89a81ff348399284bb4cb016ffd7
 System UUID:                10000000-FAC5-FFFF-A81D-FC15B4970493
 Boot ID:                    43b920e3-34e7-4de3-aa6c-8b5c525363ff
 Kernel Version:             4.15.0-20-generic
 OS Image:                   Ubuntu 18.04.1 LTS
 Operating System:           linux
 Architecture:               arm64
 Container Runtime Version:  docker://17.12.1-ce
 Kubelet Version:            v1.11.1
 Kube-Proxy Version:         v1.11.1
PodCIDR:                     10.244.0.0/24
Non-terminated Pods:         (8 in total)
  Namespace                  Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits
  ---------                  ----                                        ------------  ----------  ---------------  -------------
  kube-system                coredns-78fcdf6894-ls44z                    100m (0%)     0 (0%)      70Mi (0%)        170Mi (0%)
  kube-system                coredns-78fcdf6894-njnnt                    100m (0%)     0 (0%)      70Mi (0%)        170Mi (0%)
  kube-system                etcd-devstats.team.io                       0 (0%)        0 (0%)      0 (0%)           0 (0%)
  kube-system                kube-apiserver-devstats.team.io             250m (0%)     0 (0%)      0 (0%)           0 (0%)
  kube-system                kube-controller-manager-devstats.team.io    200m (0%)     0 (0%)      0 (0%)           0 (0%)
  kube-system                kube-flannel-ds-v4t8s                       100m (0%)     100m (0%)   50Mi (0%)        50Mi (0%)
  kube-system                kube-proxy-5825g                            0 (0%)        0 (0%)      0 (0%)           0 (0%)
  kube-system                kube-scheduler-devstats.team.io             100m (0%)     0 (0%)      0 (0%)           0 (0%)
Allocated resources:
  (Total limits may be over 100 percent, i.e., overcommitted.)
  Resource  Requests    Limits
  --------  --------    ------
  cpu       850m (0%)   100m (0%)
  memory    190Mi (0%)  390Mi (0%)
Events:
  Type    Reason                   Age                From                          Message
  ----    ------                   ----               ----                          -------
  Normal  Starting                 23m                kubelet, devstats.team.io     Starting kubelet.
  Normal  NodeAllocatableEnforced  23m                kubelet, devstats.team.io     Updated Node Allocatable limit across pods
  Normal  NodeHasSufficientPID     23m (x5 over 23m)  kubelet, devstats.team.io     Node devstats.team.io status is now: NodeHasSufficientPID
  Normal  NodeHasSufficientDisk    23m (x6 over 23m)  kubelet, devstats.team.io     Node devstats.team.io status is now: NodeHasSufficientDisk
  Normal  NodeHasSufficientMemory  23m (x6 over 23m)  kubelet, devstats.team.io     Node devstats.team.io status is now: NodeHasSufficientMemory
  Normal  NodeHasNoDiskPressure    23m (x6 over 23m)  kubelet, devstats.team.io     Node devstats.team.io status is now: NodeHasNoDiskPressure
  Normal  Starting                 21m                kube-proxy, devstats.team.io  Starting kube-proxy.
  Normal  NodeReady                13m                kubelet, devstats.team.io     Node devstats.team.io status is now: NodeReady

Versions

kubeadm version ( kubeadm version ):

kubeadm version: &version.Info{Major:"1", Minor:"11", GitVersion:"v1.11.1", GitCommit:"b1b29978270dc22fecc592ac55d903350454310a", GitTreeState:"clean", BuildDate:"2018-07-17T18:50:16Z", GoVersion:"go1.10.3", Compiler:"gc", Platform:"linux/arm64"}

Environment:

  • Kubernetes 버전 ( kubectl version ):
Client Version: version.Info{Major:"1", Minor:"11", GitVersion:"v1.11.1", GitCommit:"b1b29978270dc22fecc592ac55d903350454310a", GitTreeState:"clean", BuildDate:"2018-07-17T18:53:20Z", GoVersion:"go1.10.3", Compiler:"gc", Platform:"linux/arm64"}
Server Version: version.Info{Major:"1", Minor:"11", GitVersion:"v1.11.1", GitCommit:"b1b29978270dc22fecc592ac55d903350454310a", GitTreeState:"clean", BuildDate:"2018-07-17T18:43:26Z", GoVersion:"go1.10.3", Compiler:"gc", Platform:"linux/arm64"}
  • ν΄λΌμš°λ“œ 제곡자 λ˜λŠ” ν•˜λ“œμ›¨μ–΄ ꡬ성 :
  • λ§ˆμŠ€ν„°: λ² μ–΄λ©”νƒˆ μ„œλ²„ 96μ½”μ–΄, ARM64, 128G RAM, μŠ€μ™‘μ΄ κΊΌμ Έ μžˆμŠ΅λ‹ˆλ‹€.
  • λ…Έλ“œ(2): λ² μ–΄λ©”νƒˆ μ„œλ²„ 48μ½”μ–΄, AMD64, 256G RAM, μŠ€μ™‘μ΄ 꺼진 x 2.
  • uname -a : Linux devstats.team.io 4.15.0-20-generic #21-Ubuntu SMP Tue Apr 24 06:16:20 UTC 2018 aarch64 aarch64 aarch64 GNU/Linux
  • OS (예: /etc/os-release):
NAME="Ubuntu"
VERSION="18.04.1 LTS (Bionic Beaver)"
ID=ubuntu
ID_LIKE=debian
PRETTY_NAME="Ubuntu 18.04.1 LTS"
VERSION_ID="18.04"
HOME_URL="https://www.ubuntu.com/"
SUPPORT_URL="https://help.ubuntu.com/"
BUG_REPORT_URL="https://bugs.launchpad.net/ubuntu/"
PRIVACY_POLICY_URL="https://www.ubuntu.com/legal/terms-and-policies/privacy-policy"
VERSION_CODENAME=bionic
UBUNTU_CODENAME=bionic
  • lsb_release -a :
No LSB modules are available.
Distributor ID: Ubuntu
Description:    Ubuntu 18.04.1 LTS
Release:    18.04
Codename:   bionic
  • 컀널 (예: uname -a ): Linux devstats.team.io 4.15.0-20-generic #21-Ubuntu SMP Tue Apr 24 06:16:20 UTC 2018 aarch64 aarch64 aarch64 GNU/Linux
  • 기타 : docker version :
docker version
Client:
 Version:   17.12.1-ce
 API version:   1.35
 Go version:    go1.10.1
 Git commit:    7390fc6
 Built: Wed Apr 18 01:26:37 2018
 OS/Arch:   linux/arm64

Server:
 Engine:
  Version:  17.12.1-ce
  API version:  1.35 (minimum version 1.12)
  Go version:   go1.10.1
  Git commit:   7390fc6
  Built:    Wed Feb 28 17:46:05 2018
  OS/Arch:  linux/arm64
  Experimental: false

무슨 μΌμ΄μ—μš”?

The exact error is:

runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized

λ…Έλ“œμ—μ„œ: cat /etc/systemd/system/kubelet.service.d/10-kubeadm.conf

# Note: This dropin only works with kubeadm and kubelet v1.11+
[Service]
Environment="KUBELET_KUBECONFIG_ARGS=--bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --kubeconfig=/etc/kubernetes/kubelet.conf"
Environment="KUBELET_CONFIG_ARGS=--config=/var/lib/kubelet/config.yaml"
# This is a file that "kubeadm init" and "kubeadm join" generates at runtime, populating the KUBELET_KUBEADM_ARGS variable dynamically
EnvironmentFile=-/var/lib/kubelet/kubeadm-flags.env
# This is a file that the user can use for overrides of the kubelet args as a last resort. Preferably, the user should use
# the .NodeRegistration.KubeletExtraArgs object in the configuration files instead. KUBELET_EXTRA_ARGS should be sourced from this file.
EnvironmentFile=-/etc/default/kubelet
ExecStart=
ExecStart=/usr/bin/kubelet $KUBELET_KUBECONFIG_ARGS $KUBELET_CONFIG_ARGS $KUBELET_KUBEADM_ARGS $KUBELET_EXTRA_ARGS

So there is no KUBELET_NETWORK_ARGS here (as mentioned in this thread).
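
For context, on v1.11 the CNI-related kubelet flags no longer live in the drop-in itself but in the generated kubeadm-flags.env file referenced above; on a dockershim-based node it typically looks roughly like the following (a sketch, exact flags vary per node):

cat /var/lib/kubelet/kubeadm-flags.env
# Typical content on a 1.11 node (illustrative):
# KUBELET_KUBEADM_ARGS=--cgroup-driver=cgroupfs --cni-bin-dir=/opt/cni/bin --cni-conf-dir=/etc/cni/net.d --network-plugin=cni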

  • λ…Έλ“œμ˜ journalctl -xe :
Aug 02 10:44:51 devstats.cncf.io kubelet[38796]: W0802 10:44:51.040663   38796 cni.go:172] Unable to update cni config: No networks found in /etc/cni/net.d
Aug 02 10:44:51 devstats.cncf.io kubelet[38796]: E0802 10:44:51.040876   38796 kubelet.go:2110] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized

The /etc/cni/net.d directory exists, but it is empty.
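
For comparison, once the flannel DaemonSet comes up on a node, its install step drops a CNI config file into that directory. Roughly (file name and exact contents depend on the flannel version, shown only as a sketch):

ls /etc/cni/net.d
# 10-flannel.conf        <- written by flannel's install-cni container
cat /etc/cni/net.d/10-flannel.conf
# {
#   "name": "cbr0",
#   "type": "flannel",
#   "delegate": { "isDefaultGateway": true }
# }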

What you expected to happen?

Ready μƒνƒœμ˜ λͺ¨λ“  λ…Έλ“œ.

How to reproduce it (as minimally and precisely as possible)?

Follow the steps from the tutorial. I have tried 3 times and it happens every time.

Anything else we need to know?

The master is ARM64, the 2 nodes are AMD64.
The master and one node are in Amsterdam; the second node is in the USA.

I can run pods on the master by using kubectl taint nodes --all node-role.kubernetes.io/master- , but that is not a solution. I want a real multi-node cluster to work with.
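
For completeness, a sketch of removing the master taint and restoring it later (node name taken from this cluster):

# Workaround only: allow pods to schedule on the master.
kubectl taint nodes --all node-role.kubernetes.io/master-
# Put the taint back once the worker nodes are usable.
kubectl taint nodes devstats.team.io node-role.kubernetes.io/master=:NoSchedule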

area/ecosystem priority/awaiting-more-evidence

Most helpful comment

@lukasredynk

Yes, so this is an arch issue after all; thanks for confirming.
Since the weave problem looks tangential, let's keep the focus on flannel here.

If you haven't seen it already, take a look at this from @luxas for context:
https://github.com/luxas/kubeadm-workshop

Should the master handle creating the correct per-arch deployments for itself and for the nodes?

It _should_, but the manifest you are downloading is not the "fat" manifest:
https://raw.githubusercontent.com/coreos/flannel/v0.10.0/Documentation/kube-flannel.yml

λ‚΄κ°€ μ΄ν•΄ν•˜λŠ” ν•œ μ•„μΉ˜ μ˜€μ—Όμ΄ μ „νŒŒλ˜κ³  각 λ…Έλ“œ(?)μ—μ„œ kubectl 둜 μˆ˜μ •ν•΄μ•Ό ν•©λ‹ˆλ‹€.

"λš±λš±ν•œ" λ§€λ‹ˆνŽ˜μŠ€νŠΈκ°€ λ§ˆμŠ€ν„°μ— 있고 여기에 μΆ”κ°€λœ 것 κ°™μŠ΅λ‹ˆλ‹€.
https://github.com/coreos/flannel/commit/c5d10c8b16d7c43bfa6e8408673c6d95404bd807#diff -7891b552b026259e99d479b5e30d31ca

Related issues/PRs:
https://github.com/coreos/flannel/issues/663
https://github.com/coreos/flannel/pull/989

My assumption is that this is bleeding edge and you should use:

kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml

ν΄λŸ¬μŠ€ν„°λ₯Ό μ€‘λ‹¨ν•˜κ³  μ‹œλ„ν•΄ 보고 νš¨κ³Όκ°€ 있기λ₯Ό λ°”λžλ‹ˆλ‹€.
CNI λ¬Έμ„œμ—λŠ” 좩돌이 ν•„μš”ν•˜μ§€λ§Œ flannel-next κ°€ 릴리슀될 λ•Œ λ°œμƒν•΄μ•Ό ν•©λ‹ˆλ‹€.
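
A sketch of what that retry could look like from the master, assuming the old single-arch manifest was applied from the local kube-flannel.yml and that the multi-arch DaemonSets carry the app=flannel label:

# Remove the previously applied single-arch flannel resources.
kubectl delete -f kube-flannel.yml
# Apply the multi-arch ("fat") manifest from flannel master.
kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
# Expect one DaemonSet per architecture (e.g. kube-flannel-ds-amd64, kube-flannel-ds-arm64).
kubectl -n kube-system get daemonset -l app=flannel -o wide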

All 65 comments

@lukaszgryglicki
It looks like flannel is not coming up on your nodes because they are on the amd64 architecture:

Name:               cncftest.io
Roles:              <none>
Labels:             beta.kubernetes.io/arch=amd64
                    beta.kubernetes.io/os=linux

and

Name:               devstats.team.io
Roles:              <none>
Labels:             beta.kubernetes.io/arch=amd64
                    beta.kubernetes.io/os=linux

I'm not a flannel expert, but I think you should check their docs on how to make it work in a mixed-platform environment.

쒋은 μ§€μ μ΄μ§€λ§Œ 였λ₯˜ λ©”μ‹œμ§€λŠ” μ–΄λ–»μŠ΅λ‹ˆκΉŒ? μ‹€μ œλ‘œ 관련이 μ—†λŠ” 것 κ°™μŠ΅λ‹ˆλ‹€.

runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized

It looks like some CNI config file is missing from /etc/cni/net.d . But why?
I'm now trying a different docker, 18.03ce, as suggested on the Slack channel (17.03 was actually suggested, but there is no 17.03 for Ubuntu 18.04).

The labels with the arch name indeed don't match. But the label beta.kubernetes.io/os=linux is the same on all 3 servers.
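
A quick way to compare those labels side by side across the three machines is kubectl's label-column flag:

# Show the arch and os labels as columns for every node.
kubectl get nodes -L beta.kubernetes.io/arch -L beta.kubernetes.io/os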

Same with Docker 18.03ce. I don't see any difference. This doesn't look like a docker issue; it looks like some CNI configuration issue.

@lukaszgryglicki
Hi,

Master: bare-metal server, 96 cores, ARM64, 128 GB RAM, swap off.
Nodes (2): bare-metal servers, 48 cores, AMD64, 256 GB RAM, swap off.

Those are some _nice_ specs.

The way I test things is: if something doesn't work with weavenet I try flannel, and vice versa.

λ”°λΌμ„œ weaveλ₯Ό μ‹œλ„ν•˜κ³  CNI 섀정이 μž‘λ™ν•˜λ©΄ CNI ν”ŒλŸ¬κ·ΈμΈκ³Ό κ΄€λ ¨λœ κ²ƒμž…λ‹ˆλ‹€.

kubeadm νŒ€μ€ ν”ŒλŸ¬κ·ΈμΈκ³Ό μ• λ“œμ˜¨μ„ μ§€μ›ν•˜μ§€λ§Œ λͺ¨λ“  것을 μ²˜λ¦¬ν•  λŒ€μ—­ν­μ΄ μ—†κΈ° λ•Œλ¬Έμ— 일반적으둜 문제λ₯Ό ν•΄λ‹Ή κ΄€λ¦¬μžμ—κ²Œ μœ„μž„ν•©λ‹ˆλ‹€.

Sure, I have already tried a couple of iterations; it ended up in a container restart loop.
I'll now try docker 17.03 to rule out a docker issue (17.03 is supposed to be very well supported).

λ”°λΌμ„œ 이것은 도컀 λ¬Έμ œκ°€ μ•„λ‹™λ‹ˆλ‹€. 17.03μ—μ„œ 동일:

Aug 02 14:21:51 devstats.cncf.io kubelet[21714]: W0802 14:21:51.406786   21714 cni.go:172] Unable to update cni config: No networks found in /etc/cni/net.d
Aug 02 14:21:51 devstats.cncf.io kubelet[21714]: E0802 14:21:51.407074   21714 kubelet.go:2110] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized
Now I will try weave net, as suggested in the issue.

I'm trying weave now and will post the results here.

So I tried weave net, but it doesn't work.
On the master: kubectl get nodes :

NAME               STATUS     ROLES     AGE       VERSION
cncftest.io        NotReady   <none>    5s        v1.11.1
devstats.cncf.io   NotReady   <none>    12s       v1.11.1
devstats.team.io   NotReady   master    7m        v1.11.1
  • kubectl describe nodes (λ™μΌν•œ cni κ΄€λ ¨ 였λ₯˜μ΄μ§€λ§Œ ν˜„μž¬ λ§ˆμŠ€ν„° λ…Έλ“œμ—λ„ 있음):
Name:               cncftest.io
Roles:              <none>
Labels:             beta.kubernetes.io/arch=amd64
                    beta.kubernetes.io/os=linux
                    kubernetes.io/hostname=cncftest.io
Annotations:        kubeadm.alpha.kubernetes.io/cri-socket=/var/run/dockershim.sock
                    node.alpha.kubernetes.io/ttl=0
                    volumes.kubernetes.io/controller-managed-attach-detach=true
CreationTimestamp:  Thu, 02 Aug 2018 14:39:56 +0000
Taints:             <none>
Unschedulable:      false
Conditions:
  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
  ----             ------  -----------------                 ------------------                ------                       -------
  OutOfDisk        False   Thu, 02 Aug 2018 14:40:58 +0000   Thu, 02 Aug 2018 14:39:56 +0000   KubeletHasSufficientDisk     kubelet has sufficient disk space available
  MemoryPressure   False   Thu, 02 Aug 2018 14:40:58 +0000   Thu, 02 Aug 2018 14:39:56 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
  DiskPressure     False   Thu, 02 Aug 2018 14:40:58 +0000   Thu, 02 Aug 2018 14:39:56 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
  PIDPressure      False   Thu, 02 Aug 2018 14:40:58 +0000   Thu, 02 Aug 2018 14:39:56 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
  Ready            False   Thu, 02 Aug 2018 14:40:58 +0000   Thu, 02 Aug 2018 14:39:56 +0000   KubeletNotReady              runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized
Addresses:
  InternalIP:  147.75.205.79
  Hostname:    cncftest.io
Capacity:
 cpu:                48
 ephemeral-storage:  459266000Ki
 hugepages-1Gi:      0
 hugepages-2Mi:      0
 memory:             264047752Ki
 pods:               110
Allocatable:
 cpu:                48
 ephemeral-storage:  423259544900
 hugepages-1Gi:      0
 hugepages-2Mi:      0
 memory:             263945352Ki
 pods:               110
System Info:
 Machine ID:                 d1c2fc94ee6d41ca967c4d43504af50c
 System UUID:                4C4C4544-0052-3310-804A-B7C04F4E4432
 Boot ID:                    d87670d9-251e-42a5-90c5-5d63059f03ab
 Kernel Version:             4.15.0-22-generic
 OS Image:                   Ubuntu 18.04.1 LTS
 Operating System:           linux
 Architecture:               amd64
 Container Runtime Version:  docker://17.3.2
 Kubelet Version:            v1.11.1
 Kube-Proxy Version:         v1.11.1
Non-terminated Pods:         (1 in total)
  Namespace                  Name               CPU Requests  CPU Limits  Memory Requests  Memory Limits
  ---------                  ----               ------------  ----------  ---------------  -------------
  kube-system                weave-net-wwjrr    20m (0%)      0 (0%)      0 (0%)           0 (0%)
Allocated resources:
  (Total limits may be over 100 percent, i.e., overcommitted.)
  Resource  Requests  Limits
  --------  --------  ------
  cpu       20m (0%)  0 (0%)
  memory    0 (0%)    0 (0%)
Events:
  Type    Reason                   Age              From                  Message
  ----    ------                   ----             ----                  -------
  Normal  Starting                 1m               kubelet, cncftest.io  Starting kubelet.
  Normal  NodeHasSufficientDisk    1m (x2 over 1m)  kubelet, cncftest.io  Node cncftest.io status is now: NodeHasSufficientDisk
  Normal  NodeHasSufficientMemory  1m (x2 over 1m)  kubelet, cncftest.io  Node cncftest.io status is now: NodeHasSufficientMemory
  Normal  NodeHasNoDiskPressure    1m (x2 over 1m)  kubelet, cncftest.io  Node cncftest.io status is now: NodeHasNoDiskPressure
  Normal  NodeHasSufficientPID     1m (x2 over 1m)  kubelet, cncftest.io  Node cncftest.io status is now: NodeHasSufficientPID
  Normal  NodeAllocatableEnforced  1m               kubelet, cncftest.io  Updated Node Allocatable limit across pods


Name:               devstats.cncf.io
Roles:              <none>
Labels:             beta.kubernetes.io/arch=amd64
                    beta.kubernetes.io/os=linux
                    kubernetes.io/hostname=devstats.cncf.io
Annotations:        kubeadm.alpha.kubernetes.io/cri-socket=/var/run/dockershim.sock
                    node.alpha.kubernetes.io/ttl=0
                    volumes.kubernetes.io/controller-managed-attach-detach=true
CreationTimestamp:  Thu, 02 Aug 2018 14:39:49 +0000
Taints:             <none>
Unschedulable:      false
Conditions:
  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
  ----             ------  -----------------                 ------------------                ------                       -------
  OutOfDisk        False   Thu, 02 Aug 2018 14:40:59 +0000   Thu, 02 Aug 2018 14:39:49 +0000   KubeletHasSufficientDisk     kubelet has sufficient disk space available
  MemoryPressure   False   Thu, 02 Aug 2018 14:40:59 +0000   Thu, 02 Aug 2018 14:39:49 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
  DiskPressure     False   Thu, 02 Aug 2018 14:40:59 +0000   Thu, 02 Aug 2018 14:39:49 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
  PIDPressure      False   Thu, 02 Aug 2018 14:40:59 +0000   Thu, 02 Aug 2018 14:39:49 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
  Ready            False   Thu, 02 Aug 2018 14:40:59 +0000   Thu, 02 Aug 2018 14:39:49 +0000   KubeletNotReady              runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized
Addresses:
  InternalIP:  147.75.78.47
  Hostname:    devstats.cncf.io
Capacity:
 cpu:                48
 ephemeral-storage:  142124052Ki
 hugepages-1Gi:      0
 hugepages-2Mi:      0
 memory:             264027220Ki
 pods:               110
Allocatable:
 cpu:                48
 ephemeral-storage:  130981526107
 hugepages-1Gi:      0
 hugepages-2Mi:      0
 memory:             263924820Ki
 pods:               110
System Info:
 Machine ID:                 d1c2fc94ee6d41ca967c4d43504af50c
 System UUID:                00000000-0000-0000-0000-0CC47AF37CF2
 Boot ID:                    f257b606-5da2-43fd-8782-0aa4484037f4
 Kernel Version:             4.15.0-20-generic
 OS Image:                   Ubuntu 18.04.1 LTS
 Operating System:           linux
 Architecture:               amd64
 Container Runtime Version:  docker://17.3.2
 Kubelet Version:            v1.11.1
 Kube-Proxy Version:         v1.11.1
Non-terminated Pods:         (1 in total)
  Namespace                  Name               CPU Requests  CPU Limits  Memory Requests  Memory Limits
  ---------                  ----               ------------  ----------  ---------------  -------------
  kube-system                weave-net-2fsrf    20m (0%)      0 (0%)      0 (0%)           0 (0%)
Allocated resources:
  (Total limits may be over 100 percent, i.e., overcommitted.)
  Resource  Requests  Limits
  --------  --------  ------
  cpu       20m (0%)  0 (0%)
  memory    0 (0%)    0 (0%)
Events:
  Type    Reason                   Age   From                       Message
  ----    ------                   ----  ----                       -------
  Normal  Starting                 1m    kubelet, devstats.cncf.io  Starting kubelet.
  Normal  NodeHasSufficientDisk    1m    kubelet, devstats.cncf.io  Node devstats.cncf.io status is now: NodeHasSufficientDisk
  Normal  NodeHasSufficientMemory  1m    kubelet, devstats.cncf.io  Node devstats.cncf.io status is now: NodeHasSufficientMemory
  Normal  NodeHasNoDiskPressure    1m    kubelet, devstats.cncf.io  Node devstats.cncf.io status is now: NodeHasNoDiskPressure
  Normal  NodeHasSufficientPID     1m    kubelet, devstats.cncf.io  Node devstats.cncf.io status is now: NodeHasSufficientPID
  Normal  NodeAllocatableEnforced  1m    kubelet, devstats.cncf.io  Updated Node Allocatable limit across pods


Name:               devstats.team.io
Roles:              master
Labels:             beta.kubernetes.io/arch=arm64
                    beta.kubernetes.io/os=linux
                    kubernetes.io/hostname=devstats.team.io
                    node-role.kubernetes.io/master=
Annotations:        kubeadm.alpha.kubernetes.io/cri-socket=/var/run/dockershim.sock
                    node.alpha.kubernetes.io/ttl=0
                    volumes.kubernetes.io/controller-managed-attach-detach=true
CreationTimestamp:  Thu, 02 Aug 2018 14:32:14 +0000
Taints:             node-role.kubernetes.io/master:NoSchedule
Unschedulable:      false
Conditions:
  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
  ----             ------  -----------------                 ------------------                ------                       -------
  OutOfDisk        False   Thu, 02 Aug 2018 14:40:56 +0000   Thu, 02 Aug 2018 14:32:07 +0000   KubeletHasSufficientDisk     kubelet has sufficient disk space available
  MemoryPressure   False   Thu, 02 Aug 2018 14:40:56 +0000   Thu, 02 Aug 2018 14:32:07 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
  DiskPressure     False   Thu, 02 Aug 2018 14:40:56 +0000   Thu, 02 Aug 2018 14:32:07 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
  PIDPressure      False   Thu, 02 Aug 2018 14:40:56 +0000   Thu, 02 Aug 2018 14:32:07 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
  Ready            False   Thu, 02 Aug 2018 14:40:56 +0000   Thu, 02 Aug 2018 14:32:07 +0000   KubeletNotReady              runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized
Addresses:
  InternalIP:  147.75.97.234
  Hostname:    devstats.team.io
Capacity:
 cpu:                96
 ephemeral-storage:  322988584Ki
 hugepages-2Mi:      0
 memory:             131731468Ki
 pods:               110
Allocatable:
 cpu:                96
 ephemeral-storage:  297666278522
 hugepages-2Mi:      0
 memory:             131629068Ki
 pods:               110
System Info:
 Machine ID:                 5eaa89a81ff348399284bb4cb016ffd7
 System UUID:                10000000-FAC5-FFFF-A81D-FC15B4970493
 Boot ID:                    43b920e3-34e7-4de3-aa6c-8b5c525363ff
 Kernel Version:             4.15.0-20-generic
 OS Image:                   Ubuntu 18.04.1 LTS
 Operating System:           linux
 Architecture:               arm64
 Container Runtime Version:  docker://17.9.0
 Kubelet Version:            v1.11.1
 Kube-Proxy Version:         v1.11.1
Non-terminated Pods:         (6 in total)
  Namespace                  Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits
  ---------                  ----                                        ------------  ----------  ---------------  -------------
  kube-system                etcd-devstats.team.io                       0 (0%)        0 (0%)      0 (0%)           0 (0%)
  kube-system                kube-apiserver-devstats.team.io             250m (0%)     0 (0%)      0 (0%)           0 (0%)
  kube-system                kube-controller-manager-devstats.team.io    200m (0%)     0 (0%)      0 (0%)           0 (0%)
  kube-system                kube-proxy-69qnb                            0 (0%)        0 (0%)      0 (0%)           0 (0%)
  kube-system                kube-scheduler-devstats.team.io             100m (0%)     0 (0%)      0 (0%)           0 (0%)
  kube-system                weave-net-j9f5m                             20m (0%)      0 (0%)      0 (0%)           0 (0%)
Allocated resources:
  (Total limits may be over 100 percent, i.e., overcommitted.)
  Resource  Requests   Limits
  --------  --------   ------
  cpu       570m (0%)  0 (0%)
  memory    0 (0%)     0 (0%)
Events:
  Type    Reason                   Age                From                          Message
  ----    ------                   ----               ----                          -------
  Normal  Starting                 10m                kubelet, devstats.team.io     Starting kubelet.
  Normal  NodeAllocatableEnforced  10m                kubelet, devstats.team.io     Updated Node Allocatable limit across pods
  Normal  NodeHasSufficientPID     10m (x5 over 10m)  kubelet, devstats.team.io     Node devstats.team.io status is now: NodeHasSufficientPID
  Normal  NodeHasSufficientDisk    10m (x6 over 10m)  kubelet, devstats.team.io     Node devstats.team.io status is now: NodeHasSufficientDisk
  Normal  NodeHasSufficientMemory  10m (x6 over 10m)  kubelet, devstats.team.io     Node devstats.team.io status is now: NodeHasSufficientMemory
  Normal  NodeHasNoDiskPressure    10m (x6 over 10m)  kubelet, devstats.team.io     Node devstats.team.io status is now: NodeHasNoDiskPressure
  Normal  Starting                 8m                 kube-proxy, devstats.team.io  Starting kube-proxy.
  • journalctl -xe λ§ˆμŠ€ν„°:
Aug 02 14:42:18 devstats.team.io dockerd[44020]: time="2018-08-02T14:42:18.330999189Z" level=warning msg="Unknown healthcheck type 'NONE' (expected 'CMD') in container 283a9d0e00a9b6f336dd4d2f7fc5bddaec67751726b18e353bcf3081787395cb"
Aug 02 14:42:19 devstats.team.io kubelet[56340]: I0802 14:42:19.079835   56340 kuberuntime_manager.go:513] Container {Name:weave Image:weaveworks/weave-kube:2.4.0 Command:[/home/weave/launch.sh] Args:[] WorkingDir: Ports:[] EnvFrom:[] Env:[{Name:HOSTNAME Value
Aug 02 14:42:19 devstats.team.io kubelet[56340]: I0802 14:42:19.080312   56340 kuberuntime_manager.go:757] checking backoff for container "weave" in pod "weave-net-j9f5m_kube-system(66ecc3ba-9661-11e8-8ca9-fc15b4970491)"
Aug 02 14:42:19 devstats.team.io kubelet[56340]: I0802 14:42:19.080677   56340 kuberuntime_manager.go:767] Back-off 5m0s restarting failed container=weave pod=weave-net-j9f5m_kube-system(66ecc3ba-9661-11e8-8ca9-fc15b4970491)
Aug 02 14:42:19 devstats.team.io kubelet[56340]: E0802 14:42:19.080815   56340 pod_workers.go:186] Error syncing pod 66ecc3ba-9661-11e8-8ca9-fc15b4970491 ("weave-net-j9f5m_kube-system(66ecc3ba-9661-11e8-8ca9-fc15b4970491)"), skipping: failed to "StartContainer
Aug 02 14:42:21 devstats.team.io kubelet[56340]: W0802 14:42:21.867690   56340 cni.go:172] Unable to update cni config: No networks found in /etc/cni/net.d
Aug 02 14:42:21 devstats.team.io kubelet[56340]: E0802 14:42:21.868005   56340 kubelet.go:2110] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized
Aug 02 14:42:26 devstats.team.io kubelet[56340]: I0802 14:42:26.259681   56340 kuberuntime_manager.go:513] Container {Name:weave Image:weaveworks/weave-kube:2.4.0 Command:[/home/weave/launch.sh] Args:[] WorkingDir: Ports:[] EnvFrom:[] Env:[{Name:HOSTNAME Value
Aug 02 14:42:26 devstats.team.io kubelet[56340]: I0802 14:42:26.260359   56340 kuberuntime_manager.go:757] checking backoff for container "weave" in pod "weave-net-j9f5m_kube-system(66ecc3ba-9661-11e8-8ca9-fc15b4970491)"
Aug 02 14:42:26 devstats.team.io kubelet[56340]: I0802 14:42:26.260833   56340 kuberuntime_manager.go:767] Back-off 5m0s restarting failed container=weave pod=weave-net-j9f5m_kube-system(66ecc3ba-9661-11e8-8ca9-fc15b4970491)
Aug 02 14:42:26 devstats.team.io kubelet[56340]: E0802 14:42:26.260984   56340 pod_workers.go:186] Error syncing pod 66ecc3ba-9661-11e8-8ca9-fc15b4970491 ("weave-net-j9f5m_kube-system(66ecc3ba-9661-11e8-8ca9-fc15b4970491)"), skipping: failed to "StartContainer
Aug 02 14:42:26 devstats.team.io kubelet[56340]: W0802 14:42:26.870675   56340 cni.go:172] Unable to update cni config: No networks found in /etc/cni/net.d
Aug 02 14:42:26 devstats.team.io kubelet[56340]: E0802 14:42:26.871316   56340 kubelet.go:2110] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized
  • kubectl get po --all-namespaces :
NAMESPACE     NAME                                       READY     STATUS             RESTARTS   AGE
kube-system   coredns-78fcdf6894-g8wzs                   0/1       Pending            0          12m
kube-system   coredns-78fcdf6894-tzs8n                   0/1       Pending            0          12m
kube-system   etcd-devstats.team.io                      1/1       Running            0          12m
kube-system   kube-apiserver-devstats.team.io            1/1       Running            0          12m
kube-system   kube-controller-manager-devstats.team.io   1/1       Running            0          12m
kube-system   kube-proxy-69qnb                           1/1       Running            0          12m
kube-system   kube-scheduler-devstats.team.io            1/1       Running            0          12m
kube-system   weave-net-2fsrf                            1/2       CrashLoopBackOff   5          5m
kube-system   weave-net-j9f5m                            1/2       CrashLoopBackOff   6          8m
kube-system   weave-net-wwjrr                            1/2       CrashLoopBackOff   5          4m
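
Given the weave-net pods are stuck in CrashLoopBackOff, the next diagnostic step would be the logs of the crashing weave container (pod name taken from the listing above):

# Inspect why the weave container keeps restarting on the master.
kubectl -n kube-system logs weave-net-j9f5m -c weave
# And the previous (crashed) instance, if the current one already restarted.
kubectl -n kube-system logs weave-net-j9f5m -c weave --previous
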
  • kubectl describe po --all-namespaces :
Name:               coredns-78fcdf6894-g8wzs
Namespace:          kube-system
Priority:           0
PriorityClassName:  <none>
Node:               <none>
Labels:             k8s-app=kube-dns
                    pod-template-hash=3497892450
Annotations:        <none>
Status:             Pending
IP:                 
Controlled By:      ReplicaSet/coredns-78fcdf6894
Containers:
  coredns:
    Image:       k8s.gcr.io/coredns:1.1.3
    Ports:       53/UDP, 53/TCP, 9153/TCP
    Host Ports:  0/UDP, 0/TCP, 0/TCP
    Args:
      -conf
      /etc/coredns/Corefile
    Limits:
      memory:  170Mi
    Requests:
      cpu:        100m
      memory:     70Mi
    Liveness:     http-get http://:8080/health delay=60s timeout=5s period=10s #success=1 #failure=5
    Environment:  <none>
    Mounts:
      /etc/coredns from config-volume (ro)
      /var/run/secrets/kubernetes.io/serviceaccount from coredns-token-jw4mv (ro)
Conditions:
  Type           Status
  PodScheduled   False 
Volumes:
  config-volume:
    Type:      ConfigMap (a volume populated by a ConfigMap)
    Name:      coredns
    Optional:  false
  coredns-token-jw4mv:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  coredns-token-jw4mv
    Optional:    false
QoS Class:       Burstable
Node-Selectors:  <none>
Tolerations:     CriticalAddonsOnly
                 node-role.kubernetes.io/master:NoSchedule
                 node.kubernetes.io/not-ready:NoExecute for 300s
                 node.kubernetes.io/unreachable:NoExecute for 300s
Events:
  Type     Reason            Age                From               Message
  ----     ------            ----               ----               -------
  Warning  FailedScheduling  8m (x32 over 13m)  default-scheduler  0/1 nodes are available: 1 node(s) were not ready.
  Warning  FailedScheduling  3m (x48 over 5m)   default-scheduler  0/3 nodes are available: 3 node(s) were not ready.


Name:               coredns-78fcdf6894-tzs8n
Namespace:          kube-system
Priority:           0
PriorityClassName:  <none>
Node:               <none>
Labels:             k8s-app=kube-dns
                    pod-template-hash=3497892450
Annotations:        <none>
Status:             Pending
IP:                 
Controlled By:      ReplicaSet/coredns-78fcdf6894
Containers:
  coredns:
    Image:       k8s.gcr.io/coredns:1.1.3
    Ports:       53/UDP, 53/TCP, 9153/TCP
    Host Ports:  0/UDP, 0/TCP, 0/TCP
    Args:
      -conf
      /etc/coredns/Corefile
    Limits:
      memory:  170Mi
    Requests:
      cpu:        100m
      memory:     70Mi
    Liveness:     http-get http://:8080/health delay=60s timeout=5s period=10s #success=1 #failure=5
    Environment:  <none>
    Mounts:
      /etc/coredns from config-volume (ro)
      /var/run/secrets/kubernetes.io/serviceaccount from coredns-token-jw4mv (ro)
Conditions:
  Type           Status
  PodScheduled   False 
Volumes:
  config-volume:
    Type:      ConfigMap (a volume populated by a ConfigMap)
    Name:      coredns
    Optional:  false
  coredns-token-jw4mv:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  coredns-token-jw4mv
    Optional:    false
QoS Class:       Burstable
Node-Selectors:  <none>
Tolerations:     CriticalAddonsOnly
                 node-role.kubernetes.io/master:NoSchedule
                 node.kubernetes.io/not-ready:NoExecute for 300s
                 node.kubernetes.io/unreachable:NoExecute for 300s
Events:
  Type     Reason            Age                From               Message
  ----     ------            ----               ----               -------
  Warning  FailedScheduling  8m (x32 over 13m)  default-scheduler  0/1 nodes are available: 1 node(s) were not ready.
  Warning  FailedScheduling  3m (x47 over 5m)   default-scheduler  0/3 nodes are available: 3 node(s) were not ready.


Name:               etcd-devstats.team.io
Namespace:          kube-system
Priority:           2000000000
PriorityClassName:  system-cluster-critical
Node:               devstats.team.io/147.75.97.234
Start Time:         Thu, 02 Aug 2018 14:31:13 +0000
Labels:             component=etcd
                    tier=control-plane
Annotations:        kubernetes.io/config.hash=cc73514fbc25558d566fe49661f006a0
                    kubernetes.io/config.mirror=cc73514fbc25558d566fe49661f006a0
                    kubernetes.io/config.seen=2018-08-02T14:31:13.654147902Z
                    kubernetes.io/config.source=file
                    scheduler.alpha.kubernetes.io/critical-pod=
Status:             Running
IP:                 147.75.97.234
Containers:
  etcd:
    Container ID:  docker://254c88b154393778ef7b1ead2aaaa0acb120ffb76d911f140172da3323f1f1e3
    Image:         k8s.gcr.io/etcd-arm64:3.2.18
    Image ID:      docker-pullable://k8s.gcr.io/etcd-arm64@sha256:f0b7368ebb28e6226ab3b4dbce4b5c6d77dab7b5f6579b08fd645c00f7b100ff
    Port:          <none>
    Host Port:     <none>
    Command:
      etcd
      --advertise-client-urls=https://127.0.0.1:2379
      --cert-file=/etc/kubernetes/pki/etcd/server.crt
      --client-cert-auth=true
      --data-dir=/var/lib/etcd
      --initial-advertise-peer-urls=https://127.0.0.1:2380
      --initial-cluster=devstats.team.io=https://127.0.0.1:2380
      --key-file=/etc/kubernetes/pki/etcd/server.key
      --listen-client-urls=https://127.0.0.1:2379
      --listen-peer-urls=https://127.0.0.1:2380
      --name=devstats.team.io
      --peer-cert-file=/etc/kubernetes/pki/etcd/peer.crt
      --peer-client-cert-auth=true
      --peer-key-file=/etc/kubernetes/pki/etcd/peer.key
      --peer-trusted-ca-file=/etc/kubernetes/pki/etcd/ca.crt
      --snapshot-count=10000
      --trusted-ca-file=/etc/kubernetes/pki/etcd/ca.crt
    State:          Running
      Started:      Thu, 02 Aug 2018 14:31:15 +0000
    Ready:          True
    Restart Count:  0
    Liveness:       exec [/bin/sh -ec ETCDCTL_API=3 etcdctl --endpoints=https://[127.0.0.1]:2379 --cacert=/etc/kubernetes/pki/etcd/ca.crt --cert=/etc/kubernetes/pki/etcd/healthcheck-client.crt --key=/etc/kubernetes/pki/etcd/healthcheck-client.key get foo] delay=15s timeout=15s period=10s #success=1 #failure=8
    Environment:    <none>
    Mounts:
      /etc/kubernetes/pki/etcd from etcd-certs (rw)
      /var/lib/etcd from etcd-data (rw)
Conditions:
  Type              Status
  Initialized       True 
  Ready             True 
  ContainersReady   True 
  PodScheduled      True 
Volumes:
  etcd-data:
    Type:          HostPath (bare host directory volume)
    Path:          /var/lib/etcd
    HostPathType:  DirectoryOrCreate
  etcd-certs:
    Type:          HostPath (bare host directory volume)
    Path:          /etc/kubernetes/pki/etcd
    HostPathType:  DirectoryOrCreate
QoS Class:         BestEffort
Node-Selectors:    <none>
Tolerations:       :NoExecute
Events:            <none>


Name:               kube-apiserver-devstats.team.io
Namespace:          kube-system
Priority:           2000000000
PriorityClassName:  system-cluster-critical
Node:               devstats.team.io/147.75.97.234
Start Time:         Thu, 02 Aug 2018 14:31:13 +0000
Labels:             component=kube-apiserver
                    tier=control-plane
Annotations:        kubernetes.io/config.hash=1f7835a47425009200d38bf94c337ab3
                    kubernetes.io/config.mirror=1f7835a47425009200d38bf94c337ab3
                    kubernetes.io/config.seen=2018-08-02T14:31:13.639443247Z
                    kubernetes.io/config.source=file
                    scheduler.alpha.kubernetes.io/critical-pod=
Status:             Running
IP:                 147.75.97.234
Containers:
  kube-apiserver:
    Container ID:  docker://22b73993b141faebe6b4aab727d2235abb3422a17b60bc1be6c749c260e39f67
    Image:         k8s.gcr.io/kube-apiserver-arm64:v1.11.1
    Image ID:      docker-pullable://k8s.gcr.io/kube-apiserver-arm64@sha256:bca1933fa25fc7f890700f6aebd572c6f8351f7bc89d2e4f2c44a63649e3fccf
    Port:          <none>
    Host Port:     <none>
    Command:
      kube-apiserver
      --authorization-mode=Node,RBAC
      --advertise-address=147.75.97.234
      --allow-privileged=true
      --client-ca-file=/etc/kubernetes/pki/ca.crt
      --disable-admission-plugins=PersistentVolumeLabel
      --enable-admission-plugins=NodeRestriction
      --enable-bootstrap-token-auth=true
      --etcd-cafile=/etc/kubernetes/pki/etcd/ca.crt
      --etcd-certfile=/etc/kubernetes/pki/apiserver-etcd-client.crt
      --etcd-keyfile=/etc/kubernetes/pki/apiserver-etcd-client.key
      --etcd-servers=https://127.0.0.1:2379
      --insecure-port=0
      --kubelet-client-certificate=/etc/kubernetes/pki/apiserver-kubelet-client.crt
      --kubelet-client-key=/etc/kubernetes/pki/apiserver-kubelet-client.key
      --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname
      --proxy-client-cert-file=/etc/kubernetes/pki/front-proxy-client.crt
      --proxy-client-key-file=/etc/kubernetes/pki/front-proxy-client.key
      --requestheader-allowed-names=front-proxy-client
      --requestheader-client-ca-file=/etc/kubernetes/pki/front-proxy-ca.crt
      --requestheader-extra-headers-prefix=X-Remote-Extra-
      --requestheader-group-headers=X-Remote-Group
      --requestheader-username-headers=X-Remote-User
      --secure-port=6443
      --service-account-key-file=/etc/kubernetes/pki/sa.pub
      --service-cluster-ip-range=10.96.0.0/12
      --tls-cert-file=/etc/kubernetes/pki/apiserver.crt
      --tls-private-key-file=/etc/kubernetes/pki/apiserver.key
    State:          Running
      Started:      Thu, 02 Aug 2018 14:31:15 +0000
    Ready:          True
    Restart Count:  0
    Requests:
      cpu:        250m
    Liveness:     http-get https://147.75.97.234:6443/healthz delay=15s timeout=15s period=10s #success=1 #failure=8
    Environment:  <none>
    Mounts:
      /etc/ca-certificates from etc-ca-certificates (ro)
      /etc/kubernetes/pki from k8s-certs (ro)
      /etc/ssl/certs from ca-certs (ro)
      /usr/local/share/ca-certificates from usr-local-share-ca-certificates (ro)
      /usr/share/ca-certificates from usr-share-ca-certificates (ro)
Conditions:
  Type              Status
  Initialized       True 
  Ready             True 
  ContainersReady   True 
  PodScheduled      True 
Volumes:
  k8s-certs:
    Type:          HostPath (bare host directory volume)
    Path:          /etc/kubernetes/pki
    HostPathType:  DirectoryOrCreate
  ca-certs:
    Type:          HostPath (bare host directory volume)
    Path:          /etc/ssl/certs
    HostPathType:  DirectoryOrCreate
  usr-share-ca-certificates:
    Type:          HostPath (bare host directory volume)
    Path:          /usr/share/ca-certificates
    HostPathType:  DirectoryOrCreate
  usr-local-share-ca-certificates:
    Type:          HostPath (bare host directory volume)
    Path:          /usr/local/share/ca-certificates
    HostPathType:  DirectoryOrCreate
  etc-ca-certificates:
    Type:          HostPath (bare host directory volume)
    Path:          /etc/ca-certificates
    HostPathType:  DirectoryOrCreate
QoS Class:         Burstable
Node-Selectors:    <none>
Tolerations:       :NoExecute
Events:            <none>


Name:               kube-controller-manager-devstats.team.io
Namespace:          kube-system
Priority:           2000000000
PriorityClassName:  system-cluster-critical
Node:               devstats.team.io/147.75.97.234
Start Time:         Thu, 02 Aug 2018 14:31:13 +0000
Labels:             component=kube-controller-manager
                    tier=control-plane
Annotations:        kubernetes.io/config.hash=5d26a7fba3c17c9fa8969a466d6a0f1d
                    kubernetes.io/config.mirror=5d26a7fba3c17c9fa8969a466d6a0f1d
                    kubernetes.io/config.seen=2018-08-02T14:31:13.646000889Z
                    kubernetes.io/config.source=file
                    scheduler.alpha.kubernetes.io/critical-pod=
Status:             Running
IP:                 147.75.97.234
Containers:
  kube-controller-manager:
    Container ID:  docker://5182bf5c7c63f9507e6319a2c3fb5698dc827ea9b591acbb071cb39c4ea445ea
    Image:         k8s.gcr.io/kube-controller-manager-arm64:v1.11.1
    Image ID:      docker-pullable://k8s.gcr.io/kube-controller-manager-arm64@sha256:7fa0b0242c13fcaa63bff3b4cde32d30ce18422505afa8cb4c0f19755148b612
    Port:          <none>
    Host Port:     <none>
    Command:
      kube-controller-manager
      --address=127.0.0.1
      --cluster-signing-cert-file=/etc/kubernetes/pki/ca.crt
      --cluster-signing-key-file=/etc/kubernetes/pki/ca.key
      --controllers=*,bootstrapsigner,tokencleaner
      --kubeconfig=/etc/kubernetes/controller-manager.conf
      --leader-elect=true
      --root-ca-file=/etc/kubernetes/pki/ca.crt
      --service-account-private-key-file=/etc/kubernetes/pki/sa.key
      --use-service-account-credentials=true
    State:          Running
      Started:      Thu, 02 Aug 2018 14:31:15 +0000
    Ready:          True
    Restart Count:  0
    Requests:
      cpu:        200m
    Liveness:     http-get http://127.0.0.1:10252/healthz delay=15s timeout=15s period=10s #success=1 #failure=8
    Environment:  <none>
    Mounts:
      /etc/ca-certificates from etc-ca-certificates (ro)
      /etc/kubernetes/controller-manager.conf from kubeconfig (ro)
      /etc/kubernetes/pki from k8s-certs (ro)
      /etc/ssl/certs from ca-certs (ro)
      /usr/libexec/kubernetes/kubelet-plugins/volume/exec from flexvolume-dir (rw)
      /usr/local/share/ca-certificates from usr-local-share-ca-certificates (ro)
      /usr/share/ca-certificates from usr-share-ca-certificates (ro)
Conditions:
  Type              Status
  Initialized       True 
  Ready             True 
  ContainersReady   True 
  PodScheduled      True 
Volumes:
  usr-local-share-ca-certificates:
    Type:          HostPath (bare host directory volume)
    Path:          /usr/local/share/ca-certificates
    HostPathType:  DirectoryOrCreate
  etc-ca-certificates:
    Type:          HostPath (bare host directory volume)
    Path:          /etc/ca-certificates
    HostPathType:  DirectoryOrCreate
  k8s-certs:
    Type:          HostPath (bare host directory volume)
    Path:          /etc/kubernetes/pki
    HostPathType:  DirectoryOrCreate
  ca-certs:
    Type:          HostPath (bare host directory volume)
    Path:          /etc/ssl/certs
    HostPathType:  DirectoryOrCreate
  kubeconfig:
    Type:          HostPath (bare host directory volume)
    Path:          /etc/kubernetes/controller-manager.conf
    HostPathType:  FileOrCreate
  flexvolume-dir:
    Type:          HostPath (bare host directory volume)
    Path:          /usr/libexec/kubernetes/kubelet-plugins/volume/exec
    HostPathType:  DirectoryOrCreate
  usr-share-ca-certificates:
    Type:          HostPath (bare host directory volume)
    Path:          /usr/share/ca-certificates
    HostPathType:  DirectoryOrCreate
QoS Class:         Burstable
Node-Selectors:    <none>
Tolerations:       :NoExecute
Events:            <none>


Name:               kube-proxy-69qnb
Namespace:          kube-system
Priority:           2000001000
PriorityClassName:  system-node-critical
Node:               devstats.team.io/147.75.97.234
Start Time:         Thu, 02 Aug 2018 14:32:25 +0000
Labels:             controller-revision-hash=2718475167
                    k8s-app=kube-proxy
                    pod-template-generation=1
Annotations:        scheduler.alpha.kubernetes.io/critical-pod=
Status:             Running
IP:                 147.75.97.234
Controlled By:      DaemonSet/kube-proxy
Containers:
  kube-proxy:
    Container ID:  docker://12fb2a4a8af025604e46783aa87d084bdc681365317c8dac278a583646a8ad1c
    Image:         k8s.gcr.io/kube-proxy-arm64:v1.11.1
    Image ID:      docker-pullable://k8s.gcr.io/kube-proxy-arm64@sha256:c61f4e126ec75dedce3533771c67eb7c1266cacaac9ae770e045a9bec9c9dc32
    Port:          <none>
    Host Port:     <none>
    Command:
      /usr/local/bin/kube-proxy
      --config=/var/lib/kube-proxy/config.conf
    State:          Running
      Started:      Thu, 02 Aug 2018 14:32:26 +0000
    Ready:          True
    Restart Count:  0
    Environment:    <none>
    Mounts:
      /lib/modules from lib-modules (ro)
      /run/xtables.lock from xtables-lock (rw)
      /var/lib/kube-proxy from kube-proxy (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from kube-proxy-token-4q6rl (ro)
Conditions:
  Type              Status
  Initialized       True 
  Ready             True 
  ContainersReady   True 
  PodScheduled      True 
Volumes:
  kube-proxy:
    Type:      ConfigMap (a volume populated by a ConfigMap)
    Name:      kube-proxy
    Optional:  false
  xtables-lock:
    Type:          HostPath (bare host directory volume)
    Path:          /run/xtables.lock
    HostPathType:  FileOrCreate
  lib-modules:
    Type:          HostPath (bare host directory volume)
    Path:          /lib/modules
    HostPathType:  
  kube-proxy-token-4q6rl:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  kube-proxy-token-4q6rl
    Optional:    false
QoS Class:       BestEffort
Node-Selectors:  beta.kubernetes.io/arch=arm64
Tolerations:     
                 CriticalAddonsOnly
                 node.kubernetes.io/disk-pressure:NoSchedule
                 node.kubernetes.io/memory-pressure:NoSchedule
                 node.kubernetes.io/not-ready:NoExecute
                 node.kubernetes.io/unreachable:NoExecute
Events:
  Type    Reason   Age   From                       Message
  ----    ------   ----  ----                       -------
  Normal  Pulled   13m   kubelet, devstats.team.io  Container image "k8s.gcr.io/kube-proxy-arm64:v1.11.1" already present on machine
  Normal  Created  13m   kubelet, devstats.team.io  Created container
  Normal  Started  13m   kubelet, devstats.team.io  Started container


Name:               kube-scheduler-devstats.team.io
Namespace:          kube-system
Priority:           2000000000
PriorityClassName:  system-cluster-critical
Node:               devstats.team.io/147.75.97.234
Start Time:         Thu, 02 Aug 2018 14:31:13 +0000
Labels:             component=kube-scheduler
                    tier=control-plane
Annotations:        kubernetes.io/config.hash=6e1c1eb822c75df4cec74cac9992eea9
                    kubernetes.io/config.mirror=6e1c1eb822c75df4cec74cac9992eea9
                    kubernetes.io/config.seen=2018-08-02T14:31:13.651239565Z
                    kubernetes.io/config.source=file
                    scheduler.alpha.kubernetes.io/critical-pod=
Status:             Running
IP:                 147.75.97.234
Containers:
  kube-scheduler:
    Container ID:  docker://0b8018a7d0c2cb2dc64d9364dea5cea8047b0688c4ecb287dba8bebf9ab011a3
    Image:         k8s.gcr.io/kube-scheduler-arm64:v1.11.1
    Image ID:      docker-pullable://k8s.gcr.io/kube-scheduler-arm64@sha256:28ab99ab78c7945a4e20d9369682e626b671ba49e2d4101b1754019effde10d2
    Port:          <none>
    Host Port:     <none>
    Command:
      kube-scheduler
      --address=127.0.0.1
      --kubeconfig=/etc/kubernetes/scheduler.conf
      --leader-elect=true
    State:          Running
      Started:      Thu, 02 Aug 2018 14:31:14 +0000
    Ready:          True
    Restart Count:  0
    Requests:
      cpu:        100m
    Liveness:     http-get http://127.0.0.1:10251/healthz delay=15s timeout=15s period=10s #success=1 #failure=8
    Environment:  <none>
    Mounts:
      /etc/kubernetes/scheduler.conf from kubeconfig (ro)
Conditions:
  Type              Status
  Initialized       True 
  Ready             True 
  ContainersReady   True 
  PodScheduled      True 
Volumes:
  kubeconfig:
    Type:          HostPath (bare host directory volume)
    Path:          /etc/kubernetes/scheduler.conf
    HostPathType:  FileOrCreate
QoS Class:         Burstable
Node-Selectors:    <none>
Tolerations:       :NoExecute
Events:            <none>


Name:               weave-net-2fsrf
Namespace:          kube-system
Priority:           0
PriorityClassName:  <none>
Node:               devstats.cncf.io/147.75.78.47
Start Time:         Thu, 02 Aug 2018 14:39:49 +0000
Labels:             controller-revision-hash=332195524
                    name=weave-net
                    pod-template-generation=1
Annotations:        <none>
Status:             Running
IP:                 147.75.78.47
Controlled By:      DaemonSet/weave-net
Containers:
  weave:
    Container ID:  docker://e8f5c3b702166a15212ab9576696aa7a1a0cb5b94e9cba1451fc9cc2b1d1382d
    Image:         weaveworks/weave-kube:2.4.0
    Image ID:      docker-pullable://weaveworks/weave-kube@sha256:3c45b339ab2dc9c11c9c745e44afce27806dc1d8ecd1da84a88deb36756ac713
    Port:          <none>
    Host Port:     <none>
    Command:
      /home/weave/launch.sh
    State:          Waiting
      Reason:       CrashLoopBackOff
    Last State:     Terminated
      Reason:       Error
      Exit Code:    1
      Started:      Thu, 02 Aug 2018 14:43:04 +0000
      Finished:     Thu, 02 Aug 2018 14:43:05 +0000
    Ready:          False
    Restart Count:  5
    Requests:
      cpu:     10m
    Liveness:  http-get http://127.0.0.1:6784/status delay=30s timeout=1s period=10s #success=1 #failure=3
    Environment:
      HOSTNAME:   (v1:spec.nodeName)
    Mounts:
      /host/etc from cni-conf (rw)
      /host/home from cni-bin2 (rw)
      /host/opt from cni-bin (rw)
      /host/var/lib/dbus from dbus (rw)
      /lib/modules from lib-modules (rw)
      /run/xtables.lock from xtables-lock (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from weave-net-token-blz79 (ro)
      /weavedb from weavedb (rw)
  weave-npc:
    Container ID:   docker://1cfd16507d6d9e1744bfc354af62301fb8678af12ace34113121a40ca93b6113
    Image:          weaveworks/weave-npc:2.4.0
    Image ID:       docker-pullable://weaveworks/weave-npc@sha256:715b03e14874355f1f793f7bc11d843a00b390b2806bd996f1e47e8acb1020aa
    Port:           <none>
    Host Port:      <none>
    State:          Running
      Started:      Thu, 02 Aug 2018 14:39:58 +0000
    Ready:          True
    Restart Count:  0
    Requests:
      cpu:  10m
    Environment:
      HOSTNAME:   (v1:spec.nodeName)
    Mounts:
      /run/xtables.lock from xtables-lock (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from weave-net-token-blz79 (ro)
Conditions:
  Type              Status
  Initialized       True 
  Ready             False 
  ContainersReady   False 
  PodScheduled      True 
Volumes:
  weavedb:
    Type:          HostPath (bare host directory volume)
    Path:          /var/lib/weave
    HostPathType:  
  cni-bin:
    Type:          HostPath (bare host directory volume)
    Path:          /opt
    HostPathType:  
  cni-bin2:
    Type:          HostPath (bare host directory volume)
    Path:          /home
    HostPathType:  
  cni-conf:
    Type:          HostPath (bare host directory volume)
    Path:          /etc
    HostPathType:  
  dbus:
    Type:          HostPath (bare host directory volume)
    Path:          /var/lib/dbus
    HostPathType:  
  lib-modules:
    Type:          HostPath (bare host directory volume)
    Path:          /lib/modules
    HostPathType:  
  xtables-lock:
    Type:          HostPath (bare host directory volume)
    Path:          /run/xtables.lock
    HostPathType:  FileOrCreate
  weave-net-token-blz79:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  weave-net-token-blz79
    Optional:    false
QoS Class:       Burstable
Node-Selectors:  <none>
Tolerations:     :NoSchedule
                 node.kubernetes.io/disk-pressure:NoSchedule
                 node.kubernetes.io/memory-pressure:NoSchedule
                 node.kubernetes.io/not-ready:NoExecute
                 node.kubernetes.io/unreachable:NoExecute
Events:
  Type     Reason   Age                From                       Message
  ----     ------   ----               ----                       -------
  Normal   Pulling  5m                 kubelet, devstats.cncf.io  pulling image "weaveworks/weave-kube:2.4.0"
  Normal   Pulled   5m                 kubelet, devstats.cncf.io  Successfully pulled image "weaveworks/weave-kube:2.4.0"
  Normal   Pulling  5m                 kubelet, devstats.cncf.io  pulling image "weaveworks/weave-npc:2.4.0"
  Normal   Pulled   5m                 kubelet, devstats.cncf.io  Successfully pulled image "weaveworks/weave-npc:2.4.0"
  Normal   Created  5m                 kubelet, devstats.cncf.io  Created container
  Normal   Started  5m                 kubelet, devstats.cncf.io  Started container
  Normal   Created  5m (x4 over 5m)    kubelet, devstats.cncf.io  Created container
  Normal   Started  5m (x4 over 5m)    kubelet, devstats.cncf.io  Started container
  Normal   Pulled   5m (x3 over 5m)    kubelet, devstats.cncf.io  Container image "weaveworks/weave-kube:2.4.0" already present on machine
  Warning  BackOff  56s (x27 over 5m)  kubelet, devstats.cncf.io  Back-off restarting failed container


Name:               weave-net-j9f5m
Namespace:          kube-system
Priority:           0
PriorityClassName:  <none>
Node:               devstats.team.io/147.75.97.234
Start Time:         Thu, 02 Aug 2018 14:36:11 +0000
Labels:             controller-revision-hash=332195524
                    name=weave-net
                    pod-template-generation=1
Annotations:        <none>
Status:             Running
IP:                 147.75.97.234
Controlled By:      DaemonSet/weave-net
Containers:
  weave:
    Container ID:  docker://283a9d0e00a9b6f336dd4d2f7fc5bddaec67751726b18e353bcf3081787395cb
    Image:         weaveworks/weave-kube:2.4.0
    Image ID:      docker-pullable://weaveworks/weave-kube@sha256:3c45b339ab2dc9c11c9c745e44afce27806dc1d8ecd1da84a88deb36756ac713
    Port:          <none>
    Host Port:     <none>
    Command:
      /home/weave/launch.sh
    State:          Waiting
      Reason:       CrashLoopBackOff
    Last State:     Terminated
      Reason:       Error
      Exit Code:    1
      Started:      Thu, 02 Aug 2018 14:42:18 +0000
      Finished:     Thu, 02 Aug 2018 14:42:18 +0000
    Ready:          False
    Restart Count:  6
    Requests:
      cpu:     10m
    Liveness:  http-get http://127.0.0.1:6784/status delay=30s timeout=1s period=10s #success=1 #failure=3
    Environment:
      HOSTNAME:   (v1:spec.nodeName)
    Mounts:
      /host/etc from cni-conf (rw)
      /host/home from cni-bin2 (rw)
      /host/opt from cni-bin (rw)
      /host/var/lib/dbus from dbus (rw)
      /lib/modules from lib-modules (rw)
      /run/xtables.lock from xtables-lock (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from weave-net-token-blz79 (ro)
      /weavedb from weavedb (rw)
  weave-npc:
    Container ID:   docker://3cd49dbca669ac83db95ebf943ed0053281fa5082f7fa403a56e30091eaec36b
    Image:          weaveworks/weave-npc:2.4.0
    Image ID:       docker-pullable://weaveworks/weave-npc@sha256:715b03e14874355f1f793f7bc11d843a00b390b2806bd996f1e47e8acb1020aa
    Port:           <none>
    Host Port:      <none>
    State:          Running
      Started:      Thu, 02 Aug 2018 14:36:31 +0000
    Ready:          True
    Restart Count:  0
    Requests:
      cpu:  10m
    Environment:
      HOSTNAME:   (v1:spec.nodeName)
    Mounts:
      /run/xtables.lock from xtables-lock (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from weave-net-token-blz79 (ro)
Conditions:
  Type              Status
  Initialized       True 
  Ready             False 
  ContainersReady   False 
  PodScheduled      True 
Volumes:
  weavedb:
    Type:          HostPath (bare host directory volume)
    Path:          /var/lib/weave
    HostPathType:  
  cni-bin:
    Type:          HostPath (bare host directory volume)
    Path:          /opt
    HostPathType:  
  cni-bin2:
    Type:          HostPath (bare host directory volume)
    Path:          /home
    HostPathType:  
  cni-conf:
    Type:          HostPath (bare host directory volume)
    Path:          /etc
    HostPathType:  
  dbus:
    Type:          HostPath (bare host directory volume)
    Path:          /var/lib/dbus
    HostPathType:  
  lib-modules:
    Type:          HostPath (bare host directory volume)
    Path:          /lib/modules
    HostPathType:  
  xtables-lock:
    Type:          HostPath (bare host directory volume)
    Path:          /run/xtables.lock
    HostPathType:  FileOrCreate
  weave-net-token-blz79:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  weave-net-token-blz79
    Optional:    false
QoS Class:       Burstable
Node-Selectors:  <none>
Tolerations:     :NoSchedule
                 node.kubernetes.io/disk-pressure:NoSchedule
                 node.kubernetes.io/memory-pressure:NoSchedule
                 node.kubernetes.io/not-ready:NoExecute
                 node.kubernetes.io/unreachable:NoExecute
Events:
  Type     Reason   Age               From                       Message
  ----     ------   ----              ----                       -------
  Normal   Pulling  9m                kubelet, devstats.team.io  pulling image "weaveworks/weave-kube:2.4.0"
  Normal   Pulled   9m                kubelet, devstats.team.io  Successfully pulled image "weaveworks/weave-kube:2.4.0"
  Normal   Pulling  9m                kubelet, devstats.team.io  pulling image "weaveworks/weave-npc:2.4.0"
  Normal   Pulled   9m                kubelet, devstats.team.io  Successfully pulled image "weaveworks/weave-npc:2.4.0"
  Normal   Created  9m                kubelet, devstats.team.io  Created container
  Normal   Started  9m                kubelet, devstats.team.io  Started container
  Normal   Created  8m (x4 over 9m)   kubelet, devstats.team.io  Created container
  Normal   Started  8m (x4 over 9m)   kubelet, devstats.team.io  Started container
  Normal   Pulled   8m (x3 over 9m)   kubelet, devstats.team.io  Container image "weaveworks/weave-kube:2.4.0" already present on machine
  Warning  BackOff  4m (x26 over 9m)  kubelet, devstats.team.io  Back-off restarting failed container


Name:               weave-net-wwjrr
Namespace:          kube-system
Priority:           0
PriorityClassName:  <none>
Node:               cncftest.io/147.75.205.79
Start Time:         Thu, 02 Aug 2018 14:39:57 +0000
Labels:             controller-revision-hash=332195524
                    name=weave-net
                    pod-template-generation=1
Annotations:        <none>
Status:             Running
IP:                 147.75.205.79
Controlled By:      DaemonSet/weave-net
Containers:
  weave:
    Container ID:  docker://d0d1dccfe0a1f57bce652e30d5df210a9b232dd71fe6be1340c8bd5617e1ce11
    Image:         weaveworks/weave-kube:2.4.0
    Image ID:      docker-pullable://weaveworks/weave-kube@sha256:3c45b339ab2dc9c11c9c745e44afce27806dc1d8ecd1da84a88deb36756ac713
    Port:          <none>
    Host Port:     <none>
    Command:
      /home/weave/launch.sh
    State:          Waiting
      Reason:       CrashLoopBackOff
    Last State:     Terminated
      Reason:       Error
      Exit Code:    1
      Started:      Thu, 02 Aug 2018 14:43:16 +0000
      Finished:     Thu, 02 Aug 2018 14:43:16 +0000
    Ready:          False
    Restart Count:  5
    Requests:
      cpu:     10m
    Liveness:  http-get http://127.0.0.1:6784/status delay=30s timeout=1s period=10s #success=1 #failure=3
    Environment:
      HOSTNAME:   (v1:spec.nodeName)
    Mounts:
      /host/etc from cni-conf (rw)
      /host/home from cni-bin2 (rw)
      /host/opt from cni-bin (rw)
      /host/var/lib/dbus from dbus (rw)
      /lib/modules from lib-modules (rw)
      /run/xtables.lock from xtables-lock (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from weave-net-token-blz79 (ro)
      /weavedb from weavedb (rw)
  weave-npc:
    Container ID:   docker://e2c15578719788110131a4be3653a077441338b0f61f731add9dadaadfc11655
    Image:          weaveworks/weave-npc:2.4.0
    Image ID:       docker-pullable://weaveworks/weave-npc@sha256:715b03e14874355f1f793f7bc11d843a00b390b2806bd996f1e47e8acb1020aa
    Port:           <none>
    Host Port:      <none>
    State:          Running
      Started:      Thu, 02 Aug 2018 14:40:09 +0000
    Ready:          True
    Restart Count:  0
    Requests:
      cpu:  10m
    Environment:
      HOSTNAME:   (v1:spec.nodeName)
    Mounts:
      /run/xtables.lock from xtables-lock (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from weave-net-token-blz79 (ro)
Conditions:
  Type              Status
  Initialized       True 
  Ready             False 
  ContainersReady   False 
  PodScheduled      True 
Volumes:
  weavedb:
    Type:          HostPath (bare host directory volume)
    Path:          /var/lib/weave
    HostPathType:  
  cni-bin:
    Type:          HostPath (bare host directory volume)
    Path:          /opt
    HostPathType:  
  cni-bin2:
    Type:          HostPath (bare host directory volume)
    Path:          /home
    HostPathType:  
  cni-conf:
    Type:          HostPath (bare host directory volume)
    Path:          /etc
    HostPathType:  
  dbus:
    Type:          HostPath (bare host directory volume)
    Path:          /var/lib/dbus
    HostPathType:  
  lib-modules:
    Type:          HostPath (bare host directory volume)
    Path:          /lib/modules
    HostPathType:  
  xtables-lock:
    Type:          HostPath (bare host directory volume)
    Path:          /run/xtables.lock
    HostPathType:  FileOrCreate
  weave-net-token-blz79:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  weave-net-token-blz79
    Optional:    false
QoS Class:       Burstable
Node-Selectors:  <none>
Tolerations:     :NoSchedule
                 node.kubernetes.io/disk-pressure:NoSchedule
                 node.kubernetes.io/memory-pressure:NoSchedule
                 node.kubernetes.io/not-ready:NoExecute
                 node.kubernetes.io/unreachable:NoExecute
Events:
  Type     Reason   Age                From                  Message
  ----     ------   ----               ----                  -------
  Normal   Pulling  5m                 kubelet, cncftest.io  pulling image "weaveworks/weave-kube:2.4.0"
  Normal   Pulled   5m                 kubelet, cncftest.io  Successfully pulled image "weaveworks/weave-kube:2.4.0"
  Normal   Pulling  5m                 kubelet, cncftest.io  pulling image "weaveworks/weave-npc:2.4.0"
  Normal   Pulled   5m                 kubelet, cncftest.io  Successfully pulled image "weaveworks/weave-npc:2.4.0"
  Normal   Created  5m                 kubelet, cncftest.io  Created container
  Normal   Started  5m                 kubelet, cncftest.io  Started container
  Normal   Created  4m (x4 over 5m)    kubelet, cncftest.io  Created container
  Normal   Pulled   4m (x3 over 5m)    kubelet, cncftest.io  Container image "weaveworks/weave-kube:2.4.0" already present on machine
  Normal   Started  4m (x4 over 5m)    kubelet, cncftest.io  Started container
  Warning  BackOff  44s (x27 over 5m)  kubelet, cncftest.io  Back-off restarting failed container
  • kubectl --v=8 logs --namespace=kube-system weave-net-2fsrf --all-containers=true :
I0802 14:49:02.034473   64396 loader.go:359] Config loaded from file /home/kube/.kube/config
I0802 14:49:02.036654   64396 loader.go:359] Config loaded from file /home/kube/.kube/config
I0802 14:49:02.044546   64396 loader.go:359] Config loaded from file /home/kube/.kube/config
I0802 14:49:02.062906   64396 loader.go:359] Config loaded from file /home/kube/.kube/config
I0802 14:49:02.063710   64396 round_trippers.go:383] GET https://147.75.97.234:6443/api/v1/namespaces/kube-system/pods/weave-net-2fsrf
I0802 14:49:02.063753   64396 round_trippers.go:390] Request Headers:
I0802 14:49:02.063791   64396 round_trippers.go:393]     Accept: application/json, */*
I0802 14:49:02.063828   64396 round_trippers.go:393]     User-Agent: kubectl/v1.11.1 (linux/arm64) kubernetes/b1b2997
I0802 14:49:02.236764   64396 round_trippers.go:408] Response Status: 200 OK in 172 milliseconds
I0802 14:49:02.236870   64396 round_trippers.go:411] Response Headers:
I0802 14:49:02.236907   64396 round_trippers.go:414]     Content-Type: application/json
I0802 14:49:02.236944   64396 round_trippers.go:414]     Date: Thu, 02 Aug 2018 14:49:02 GMT
I0802 14:49:02.237363   64396 request.go:897] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"weave-net-2fsrf","generateName":"weave-net-","namespace":"kube-system","selfLink":"/api/v1/namespaces/kube-system/pods/weave-net-2fsrf","uid":"e8b2dfe9-9661-11e8-8ca9-fc15b4970491","resourceVersion":"1625","creationTimestamp":"2018-08-02T14:39:49Z","labels":{"controller-revision-hash":"332195524","name":"weave-net","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"weave-net","uid":"66e82a46-9661-11e8-8ca9-fc15b4970491","controller":true,"blockOwnerDeletion":true}]},"spec":{"volumes":[{"name":"weavedb","hostPath":{"path":"/var/lib/weave","type":""}},{"name":"cni-bin","hostPath":{"path":"/opt","type":""}},{"name":"cni-bin2","hostPath":{"path":"/home","type":""}},{"name":"cni-conf","hostPath":{"path":"/etc","type":""}},{"name":"dbus","hostPath":{"path":"/var/lib/dbus","type":""}},{"name":"lib-modules","hostPath":{"path":"/lib/modules","type":""}},{"name":"xtables-lock","hostPath":{"path":"/run/xtables.lock","ty [truncated 4212 chars]
I0802 14:49:02.261076   64396 loader.go:359] Config loaded from file /home/kube/.kube/config
I0802 14:49:02.262803   64396 round_trippers.go:383] GET https://147.75.97.234:6443/api/v1/namespaces/kube-system/pods/weave-net-2fsrf/log?container=weave
I0802 14:49:02.262844   64396 round_trippers.go:390] Request Headers:
I0802 14:49:02.262882   64396 round_trippers.go:393]     Accept: application/json, */*
I0802 14:49:02.262919   64396 round_trippers.go:393]     User-Agent: kubectl/v1.11.1 (linux/arm64) kubernetes/b1b2997
I0802 14:49:02.275703   64396 round_trippers.go:408] Response Status: 200 OK in 12 milliseconds
I0802 14:49:02.275743   64396 round_trippers.go:411] Response Headers:
I0802 14:49:02.275779   64396 round_trippers.go:414]     Content-Type: text/plain
I0802 14:49:02.275815   64396 round_trippers.go:414]     Content-Length: 69
I0802 14:49:02.275850   64396 round_trippers.go:414]     Date: Thu, 02 Aug 2018 14:49:02 GMT
Network 10.32.0.0/12 overlaps with existing route 10.0.0.0/8 on host
I0802 14:49:02.278054   64396 loader.go:359] Config loaded from file /home/kube/.kube/config
I0802 14:49:02.279649   64396 round_trippers.go:383] GET https://147.75.97.234:6443/api/v1/namespaces/kube-system/pods/weave-net-2fsrf/log?container=weave-npc
I0802 14:49:02.279691   64396 round_trippers.go:390] Request Headers:
I0802 14:49:02.279728   64396 round_trippers.go:393]     Accept: application/json, */*
I0802 14:49:02.279765   64396 round_trippers.go:393]     User-Agent: kubectl/v1.11.1 (linux/arm64) kubernetes/b1b2997
I0802 14:49:02.293271   64396 round_trippers.go:408] Response Status: 200 OK in 13 milliseconds
I0802 14:49:02.293321   64396 round_trippers.go:411] Response Headers:
I0802 14:49:02.293358   64396 round_trippers.go:414]     Content-Type: text/plain
I0802 14:49:02.293394   64396 round_trippers.go:414]     Date: Thu, 02 Aug 2018 14:49:02 GMT
INFO: 2018/08/02 14:39:58.198716 Starting Weaveworks NPC 2.4.0; node name "devstats.cncf.io"
INFO: 2018/08/02 14:39:58.198969 Serving /metrics on :6781
Thu Aug  2 14:39:58 2018 <5> ulogd.c:843 building new pluginstance stack: 'log1:NFLOG,base1:BASE,pcap1:PCAP'
DEBU: 2018/08/02 14:39:58.294002 Got list of ipsets: []
ERROR: logging before flag.Parse: E0802 14:40:28.338474   30018 reflector.go:205] github.com/weaveworks/weave/prog/weave-npc/main.go:317: Failed to list *v1.NetworkPolicy: Get https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
ERROR: logging before flag.Parse: E0802 14:40:28.338475   30018 reflector.go:205] github.com/weaveworks/weave/prog/weave-npc/main.go:316: Failed to list *v1.Pod: Get https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
ERROR: logging before flag.Parse: E0802 14:40:28.338474   30018 reflector.go:205] github.com/weaveworks/weave/prog/weave-npc/main.go:315: Failed to list *v1.Namespace: Get https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
ERROR: logging before flag.Parse: E0802 14:40:59.339275   30018 reflector.go:205] github.com/weaveworks/weave/prog/weave-npc/main.go:317: Failed to list *v1.NetworkPolicy: Get https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
ERROR: logging before flag.Parse: E0802 14:40:59.340235   30018 reflector.go:205] github.com/weaveworks/weave/prog/weave-npc/main.go:316: Failed to list *v1.Pod: Get https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
ERROR: logging before flag.Parse: E0802 14:40:59.341457   30018 reflector.go:205] github.com/weaveworks/weave/prog/weave-npc/main.go:315: Failed to list *v1.Namespace: Get https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
ERROR: logging before flag.Parse: E0802 14:41:30.340117   30018 reflector.go:205] github.com/weaveworks/weave/prog/weave-npc/main.go:317: Failed to list *v1.NetworkPolicy: Get https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
ERROR: logging before flag.Parse: E0802 14:41:30.341216   30018 reflector.go:205] github.com/weaveworks/weave/prog/weave-npc/main.go:316: Failed to list *v1.Pod: Get https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
ERROR: logging before flag.Parse: E0802 14:41:30.342131   30018 reflector.go:205] github.com/weaveworks/weave/prog/weave-npc/main.go:315: Failed to list *v1.Namespace: Get https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
ERROR: logging before flag.Parse: E0802 14:42:01.342657   30018 reflector.go:205] github.com/weaveworks/weave/prog/weave-npc/main.go:317: Failed to list *v1.NetworkPolicy: Get https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
ERROR: logging before flag.Parse: E0802 14:42:01.343322   30018 reflector.go:205] github.com/weaveworks/weave/prog/weave-npc/main.go:316: Failed to list *v1.Pod: Get https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
ERROR: logging before flag.Parse: E0802 14:42:01.343396   30018 reflector.go:205] github.com/weaveworks/weave/prog/weave-npc/main.go:315: Failed to list *v1.Namespace: Get https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
ERROR: logging before flag.Parse: E0802 14:42:32.343714   30018 reflector.go:205] github.com/weaveworks/weave/prog/weave-npc/main.go:317: Failed to list *v1.NetworkPolicy: Get https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
ERROR: logging before flag.Parse: E0802 14:42:32.344561   30018 reflector.go:205] github.com/weaveworks/weave/prog/weave-npc/main.go:316: Failed to list *v1.Pod: Get https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
ERROR: logging before flag.Parse: E0802 14:42:32.346722   30018 reflector.go:205] github.com/weaveworks/weave/prog/weave-npc/main.go:315: Failed to list *v1.Namespace: Get https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
ERROR: logging before flag.Parse: E0802 14:43:03.344468   30018 reflector.go:205] github.com/weaveworks/weave/prog/weave-npc/main.go:317: Failed to list *v1.NetworkPolicy: Get https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
ERROR: logging before flag.Parse: E0802 14:43:03.345385   30018 reflector.go:205] github.com/weaveworks/weave/prog/weave-npc/main.go:316: Failed to list *v1.Pod: Get https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
ERROR: logging before flag.Parse: E0802 14:43:03.347275   30018 reflector.go:205] github.com/weaveworks/weave/prog/weave-npc/main.go:315: Failed to list *v1.Namespace: Get https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
ERROR: logging before flag.Parse: E0802 14:43:34.345226   30018 reflector.go:205] github.com/weaveworks/weave/prog/weave-npc/main.go:317: Failed to list *v1.NetworkPolicy: Get https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
ERROR: logging before flag.Parse: E0802 14:43:34.346184   30018 reflector.go:205] github.com/weaveworks/weave/prog/weave-npc/main.go:316: Failed to list *v1.Pod: Get https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
ERROR: logging before flag.Parse: E0802 14:43:34.347875   30018 reflector.go:205] github.com/weaveworks/weave/prog/weave-npc/main.go:315: Failed to list *v1.Namespace: Get https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
ERROR: logging before flag.Parse: E0802 14:44:05.347016   30018 reflector.go:205] github.com/weaveworks/weave/prog/weave-npc/main.go:317: Failed to list *v1.NetworkPolicy: Get https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
ERROR: logging before flag.Parse: E0802 14:44:05.347523   30018 reflector.go:205] github.com/weaveworks/weave/prog/weave-npc/main.go:316: Failed to list *v1.Pod: Get https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
ERROR: logging before flag.Parse: E0802 14:44:05.350821   30018 reflector.go:205] github.com/weaveworks/weave/prog/weave-npc/main.go:315: Failed to list *v1.Namespace: Get https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
ERROR: logging before flag.Parse: E0802 14:44:36.347826   30018 reflector.go:205] github.com/weaveworks/weave/prog/weave-npc/main.go:317: Failed to list *v1.NetworkPolicy: Get https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
ERROR: logging before flag.Parse: E0802 14:44:36.348883   30018 reflector.go:205] github.com/weaveworks/weave/prog/weave-npc/main.go:316: Failed to list *v1.Pod: Get https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
ERROR: logging before flag.Parse: E0802 14:44:36.351365   30018 reflector.go:205] github.com/weaveworks/weave/prog/weave-npc/main.go:315: Failed to list *v1.Namespace: Get https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
ERROR: logging before flag.Parse: E0802 14:45:07.348662   30018 reflector.go:205] github.com/weaveworks/weave/prog/weave-npc/main.go:317: Failed to list *v1.NetworkPolicy: Get https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
ERROR: logging before flag.Parse: E0802 14:45:07.349573   30018 reflector.go:205] github.com/weaveworks/weave/prog/weave-npc/main.go:316: Failed to list *v1.Pod: Get https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
ERROR: logging before flag.Parse: E0802 14:45:07.352012   30018 reflector.go:205] github.com/weaveworks/weave/prog/weave-npc/main.go:315: Failed to list *v1.Namespace: Get https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
ERROR: logging before flag.Parse: E0802 14:45:38.349429   30018 reflector.go:205] github.com/weaveworks/weave/prog/weave-npc/main.go:317: Failed to list *v1.NetworkPolicy: Get https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
ERROR: logging before flag.Parse: E0802 14:45:38.350420   30018 reflector.go:205] github.com/weaveworks/weave/prog/weave-npc/main.go:316: Failed to list *v1.Pod: Get https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
ERROR: logging before flag.Parse: E0802 14:45:38.352714   30018 reflector.go:205] github.com/weaveworks/weave/prog/weave-npc/main.go:315: Failed to list *v1.Namespace: Get https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
ERROR: logging before flag.Parse: E0802 14:46:09.351213   30018 reflector.go:205] github.com/weaveworks/weave/prog/weave-npc/main.go:317: Failed to list *v1.NetworkPolicy: Get https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
ERROR: logging before flag.Parse: E0802 14:46:09.352074   30018 reflector.go:205] github.com/weaveworks/weave/prog/weave-npc/main.go:316: Failed to list *v1.Pod: Get https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
ERROR: logging before flag.Parse: E0802 14:46:09.355261   30018 reflector.go:205] github.com/weaveworks/weave/prog/weave-npc/main.go:315: Failed to list *v1.Namespace: Get https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
ERROR: logging before flag.Parse: E0802 14:46:40.352128   30018 reflector.go:205] github.com/weaveworks/weave/prog/weave-npc/main.go:317: Failed to list *v1.NetworkPolicy: Get https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
ERROR: logging before flag.Parse: E0802 14:46:40.352949   30018 reflector.go:205] github.com/weaveworks/weave/prog/weave-npc/main.go:316: Failed to list *v1.Pod: Get https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
ERROR: logging before flag.Parse: E0802 14:46:40.355929   30018 reflector.go:205] github.com/weaveworks/weave/prog/weave-npc/main.go:315: Failed to list *v1.Namespace: Get https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
ERROR: logging before flag.Parse: E0802 14:47:11.352903   30018 reflector.go:205] github.com/weaveworks/weave/prog/weave-npc/main.go:317: Failed to list *v1.NetworkPolicy: Get https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
ERROR: logging before flag.Parse: E0802 14:47:11.353844   30018 reflector.go:205] github.com/weaveworks/weave/prog/weave-npc/main.go:316: Failed to list *v1.Pod: Get https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
ERROR: logging before flag.Parse: E0802 14:47:11.356576   30018 reflector.go:205] github.com/weaveworks/weave/prog/weave-npc/main.go:315: Failed to list *v1.Namespace: Get https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
ERROR: logging before flag.Parse: E0802 14:47:42.353994   30018 reflector.go:205] github.com/weaveworks/weave/prog/weave-npc/main.go:317: Failed to list *v1.NetworkPolicy: Get https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
ERROR: logging before flag.Parse: E0802 14:47:42.354564   30018 reflector.go:205] github.com/weaveworks/weave/prog/weave-npc/main.go:316: Failed to list *v1.Pod: Get https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
ERROR: logging before flag.Parse: E0802 14:47:42.357281   30018 reflector.go:205] github.com/weaveworks/weave/prog/weave-npc/main.go:315: Failed to list *v1.Namespace: Get https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
ERROR: logging before flag.Parse: E0802 14:48:13.355515   30018 reflector.go:205] github.com/weaveworks/weave/prog/weave-npc/main.go:317: Failed to list *v1.NetworkPolicy: Get https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
ERROR: logging before flag.Parse: E0802 14:48:13.356603   30018 reflector.go:205] github.com/weaveworks/weave/prog/weave-npc/main.go:316: Failed to list *v1.Pod: Get https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
ERROR: logging before flag.Parse: E0802 14:48:13.359533   30018 reflector.go:205] github.com/weaveworks/weave/prog/weave-npc/main.go:315: Failed to list *v1.Namespace: Get https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
ERROR: logging before flag.Parse: E0802 14:48:44.356372   30018 reflector.go:205] github.com/weaveworks/weave/prog/weave-npc/main.go:317: Failed to list *v1.NetworkPolicy: Get https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
ERROR: logging before flag.Parse: E0802 14:48:44.357453   30018 reflector.go:205] github.com/weaveworks/weave/prog/weave-npc/main.go:316: Failed to list *v1.Pod: Get https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
ERROR: logging before flag.Parse: E0802 14:48:44.360401   30018 reflector.go:205] github.com/weaveworks/weave/prog/weave-npc/main.go:315: Failed to list *v1.Namespace: Get https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout

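The weave container log above points at the actual failure on this host: weave's default pod allocation range 10.32.0.0/12 overlaps an existing 10.0.0.0/8 route. A hypothetical workaround, not attempted in this report, would be to install weave-net with a non-overlapping IPALLOC_RANGE, roughly:

# check which host route collides with weave's default 10.32.0.0/12 range
ip route | grep '^10\.'

# re-apply weave-net with a different pod range (the CIDR below is just an example)
kubectl apply -f "https://cloud.weave.works/k8s/net?k8s-version=$(kubectl version | base64 | tr -d '\n')&env.IPALLOC_RANGE=192.168.128.0/18"
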
μš”μ•½ν•˜μžλ©΄. Ubuntu 18.04μ—μ„œ 단일 λ§ˆμŠ€ν„° 및 단일 μž‘μ—…μž λ…Έλ“œλ§ŒμœΌλ‘œ Kubernetes ν΄λŸ¬μŠ€ν„°λ₯Ό μ„€μΉ˜ν•˜λŠ” 것은 λΆˆκ°€λŠ₯ν•©λ‹ˆλ‹€.
μ΅œμ‹  LTS Ubuntuμ—μ„œ kubeadm을 μ‚¬μš©ν•˜μ—¬ λ‹¨κ³„λ³„λ‘œ k8sλ₯Ό μ„€μ •ν•˜λŠ” 방법에 λŒ€ν•œ μ„€μΉ˜ 지침이 μžˆμ–΄μ•Ό ν•œλ‹€κ³  μƒκ°ν•©λ‹ˆλ‹€.

I think 18.04 broke things both with the bundled Docker and with systemd-resolved.
So yes, writing a guide for every single distro out there is really hard and cannot be maintained efficiently.

λ˜ν•œ kubeadm이 μ—¬κΈ°μ„œ ν”„λ‘ νŠΈμ—”λ“œμ΄μ§€λ§Œ 이 λ¬Έμ œλŠ” μ‹€μ œλ‘œ kubeadm μžμ²΄μ™€ 관련이 없을 수 μžˆμŠ΅λ‹ˆλ‹€.

A few questions:

  • μ΅œμ‹  kubernetes λ²„μ „μœΌλ‘œ amd64 + arm64 ν΄λŸ¬μŠ€ν„°λ₯Ό μ„±κ³΅μ μœΌλ‘œ μ‹€ν–‰ν•˜μ…¨μŠ΅λ‹ˆκΉŒ?
  • 이것이 ν”„λ‘μ‹œ λ¬Έμ œμΈμ§€ κΆκΈˆν•©λ‹ˆλ‹€. λ…Έλ“œκ°€ ν”„λ‘μ‹œ 뒀에 μžˆμŠ΅λ‹ˆκΉŒ?
  • 3개의 λ…Έλ“œμ—μ„œ kubeadm join/init λ₯Ό μ‹œμž‘ν•  λ•Œ /var/lib/kubelet/kubeadm-flags.env 의 λ‚΄μš©μ€ λ¬΄μ—‡μž…λ‹ˆκΉŒ?
  • journalctl -xeu kubelet ν₯미둜운 μ½˜ν…μΈ λŠ” μ΄κ²ƒλΏμΈκ°€μš”? λ§ˆμŠ€ν„° λ…Έλ“œμ—λ§Œ ν•΄λ‹Ήλ©λ‹ˆλ‹€. λ‹€λ₯Έ λ…Έλ“œλŠ” μ–΄λ–»μŠ΅λ‹ˆκΉŒ? github μš”μ μ΄λ‚˜ http://pastebin.com μ—μ„œ 이것듀을 버릴 수 μžˆμŠ΅λ‹ˆλ‹€.
  • μ΅œμ‹  kubernetes λ²„μ „μœΌλ‘œ amd64 + arm64 ν΄λŸ¬μŠ€ν„°λ₯Ό μ„±κ³΅μ μœΌλ‘œ μ‹€ν–‰ν•˜μ…¨μŠ΅λ‹ˆκΉŒ? μ•„λ‹ˆμš”, 이것은 첫 번째 μ‹œλ„μ΄μ§€λ§Œ arm64 κ΄€λ ¨ 문제λ₯Ό μ œμ™Έν•˜κΈ° μœ„ν•΄ amd64 ν˜ΈμŠ€νŠΈμ— λ§ˆμŠ€ν„°λ₯Ό μ„€μΉ˜ν•˜κ³  λ‹€λ₯Έ amd64 ν˜ΈμŠ€νŠΈκ°€ μžˆλŠ” 단일 λ…Έλ“œλ„ μ„€μΉ˜ν•˜λ €κ³  ν•©λ‹ˆλ‹€.
  • 이것이 ν”„λ‘μ‹œ λ¬Έμ œμΈμ§€ κΆκΈˆν•©λ‹ˆλ‹€. λ…Έλ“œκ°€ ν”„λ‘μ‹œ 뒀에 μžˆμŠ΅λ‹ˆκΉŒ? ν”„λ‘μ‹œκ°€ μ „ν˜€ μ—†μŒ, 3개의 μ„œλ²„ λͺ¨λ‘μ— κ³ μ • IPκ°€ μžˆμŠ΅λ‹ˆλ‹€.
  • 3개의 λ…Έλ“œμ—μ„œ kubeadm 쑰인/μ΄ˆκΈ°ν™”λ₯Ό μ‹œμž‘ν•  λ•Œ /var/lib/kubelet/kubeadm-flags.env의 λ‚΄μš©μ€ λ¬΄μ—‡μž…λ‹ˆκΉŒ?
    λ§ˆμŠ€ν„°(devstats.team.io, arm64):
KUBELET_KUBEADM_ARGS=--cgroup-driver=cgroupfs --cni-bin-dir=/opt/cni/bin --cni-conf-dir=/etc/cni/net.d --network-plugin=cni --resolv-conf=/run/systemd/resolve/resolv.conf

Node (cncftest.io, amd64):

KUBELET_KUBEADM_ARGS=--cgroup-driver=cgroupfs --cni-bin-dir=/opt/cni/bin --cni-conf-dir=/etc/cni/net.d --network-plugin=cni --resolv-conf=/run/systemd/resolve/resolv.conf

Node (devstats.cncf.io, amd64):

KUBELET_KUBEADM_ARGS=--cgroup-driver=cgroupfs --cni-bin-dir=/opt/cni/bin --cni-conf-dir=/etc/cni/net.d --network-plugin=cni --resolv-conf=/run/systemd/resolve/resolv.conf
  • 저널ctl -xeu kubelet의 ν₯미둜운 μ½˜ν…μΈ λŠ” μ΄κ²ƒλΏμΈκ°€μš”? λ§ˆμŠ€ν„° λ…Έλ“œμ—λ§Œ ν•΄λ‹Ήλ©λ‹ˆλ‹€. λ‹€λ₯Έ λ…Έλ“œλŠ” μ–΄λ–»μŠ΅λ‹ˆκΉŒ? github μš”μ μ΄λ‚˜ http://pastebin.com μ—μ„œ 이것듀을 버릴 수 μžˆμŠ΅λ‹ˆλ‹€.

Pastebins: master, node.

So I installed the master with kubeadm init on an amd64 host and tried weave net, and the result is exactly the same as when trying on the arm64 host:

  • μ‹€νŒ¨ν•œ μ»¨ν…Œμ΄λ„ˆ λ‹€μ‹œ μ‹œμž‘ λ°±μ˜€ν”„
  • runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized

μž‘μ€ 진전이 μžˆμŠ΅λ‹ˆλ‹€.
amd64에 λ§ˆμŠ€ν„°λ₯Ό μ„€μΉ˜ν•œ λ‹€μŒ amd64에도 ν•˜λ‚˜μ˜ λ…Έλ“œλ₯Ό μ„€μΉ˜ν–ˆμŠ΅λ‹ˆλ‹€. λͺ¨λ‘ 잘 μž‘λ™ν–ˆμŠ΅λ‹ˆλ‹€.
arm64 λ…Έλ“œλ₯Ό μΆ”κ°€ν–ˆμœΌλ©° 이제 λ‹€μŒμ΄ μžˆμŠ΅λ‹ˆλ‹€.
λ§ˆμŠ€ν„° amd64: μ€€λΉ„
λ…Έλ“œ amd64: μ€€λΉ„
λ…Έλ“œ arm64: μ€€λΉ„ μ•ˆ 됨: runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized

  • λ”°λΌμ„œ flannel net ν”ŒλŸ¬κ·ΈμΈμ€ μ„œλ‘œ λ‹€λ₯Έ μ•„ν‚€ν…μ²˜ 간에 톡신할 수 μ—†κ³  arm64λ₯Ό λ§ˆμŠ€ν„°λ‘œ μ‚¬μš©ν•  수 μ—†μŠ΅λ‹ˆλ‹€.
  • Weave net ν”ŒλŸ¬κ·ΈμΈμ΄ μ „ν˜€ μž‘λ™ν•˜μ§€ μ•ŠμŠ΅λ‹ˆλ‹€(λ…Έλ“œ μΆ”κ°€ 없이). λ§ˆμŠ€ν„°λŠ” μ•„μΉ˜κ°€ amd64이든 arm64이든 상관없이 항상 NotReady μƒνƒœμž…λ‹ˆλ‹€.
  • μ΄λŸ¬ν•œ λͺ¨λ“  κ²½μš°μ— 'NotReady'의 μ΄μœ λŠ” 항상 λ™μΌν•©λ‹ˆλ‹€. runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized

μ–΄λ–€ μ œμ•ˆμ„ ν•΄μ•Ό ν•˜λ‚˜μš”? 이거 어디에 μ‹ κ³ ν•΄μ•Ό ν•˜λ‚˜μš”? λ‚˜λŠ” 이미 2λ…Έλ“œ ν΄λŸ¬μŠ€ν„°(λ§ˆμŠ€ν„° 및 λ…Έλ“œ amd64)λ₯Ό 가지고 μžˆμ§€λ§Œ 이 문제λ₯Ό ν•΄κ²°ν•˜λŠ” 데 도움을 μ£Όμ–΄ OOTB만 μžˆλŠ” λͺ¨λ“  μ•„μΉ˜ λ…Έλ“œμ™€ ν•¨κ»˜ λͺ¨λ“  μ•„μΉ˜ λ§ˆμŠ€ν„°λ₯Ό μ‚¬μš©ν•  수 μžˆλ„λ‘ ν•˜κ³  μ‹ΆμŠ΅λ‹ˆλ‹€.

@lukaszgryglicki
kube-flannel.yml deploys the flannel container for only one architecture. That is why the cni plugin does not start on nodes with a different architecture and those nodes never become ready.

I haven't tried it myself, but I think you could deploy two hacked flannel manifests with different taints (and names) to avoid confusion.

However, I did adjust the manifest on arm64 as the tutorial suggests: I replaced amd64 with arm64.
So I can create an issue against flannel and paste a link to this thread.

Now, why does weave net fail on both architectures with the same cni-related bug? Should I create an issue against weave as well and link it to this thread?

@lukaszgryglicki
If you tweak kube-flannel.yml for arm, it stops working on the amd machines... that's why my guess is that deploying two properly tweaked manifests (one for arm, the other for amd) should solve the problem; see the sketch below.

And now that I think of it, you might need to fix the same issue with the kube-proxy daemon set as well, but I can't test this now, sorry.
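
To make the "two tweaked manifests" idea concrete, a rough sketch (the file names and sed edits are only illustrative, not something tested in this thread):

# keep the stock manifest for amd64 and derive an arm64 copy from it
cp kube-flannel.yml kube-flannel-amd64.yml
sed -e 's/amd64/arm64/g' \
    -e 's/name: kube-flannel-ds$/name: kube-flannel-ds-arm64/' \
    kube-flannel.yml > kube-flannel-arm64.yml

# apply both; each DaemonSet should then only match nodes of its own architecture
kubectl apply -f kube-flannel-amd64.yml -f kube-flannel-arm64.yml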


There isn't enough information about the problem you are having. One issue could be that weave doesn't work with --pod-network-cidr=10.244.0.0/16, but going back to the original problem, I honestly don't know whether weave works out of the box on mixed platforms.

So I should deploy two different manifests for flannel on the master, right? It shouldn't matter whether the master is arm64 or amd64, right? Should the master handle creating the correct arch deployments on itself and on the nodes?
I'm not sure what you mean here:

And now that I think of it, you might need to fix the same issue with the kube-proxy daemon set as well, but I can't test this now, sorry.

For weave I did not use --pod-network-cidr=10.244.0.0/16. I used a plain kubeadm init.
I only used --pod-network-cidr=10.244.0.0/16 for the flannel attempts, as the docs say.

cc @luxas - I saw that you wrote docs about multi-arch k8s deployments; do you have any feedback?

@lukasredynk

Yes, so this is an arch issue after all, thanks for confirming.
Let's focus on flannel here, since the weave problem looks like a tangential issue.

For context, have a look at this from @luxas if you haven't already:
https://github.com/luxas/kubeadm-workshop

Should the master handle creating the correct arch deployments on itself and on the nodes?

It _should_, but the manifest you are downloading is not a "fat" manifest:
https://raw.githubusercontent.com/coreos/flannel/v0.10.0/Documentation/kube-flannel.yml

λ‚΄κ°€ μ΄ν•΄ν•˜λŠ” ν•œ μ•„μΉ˜ μ˜€μ—Όμ΄ μ „νŒŒλ˜κ³  각 λ…Έλ“œ(?)μ—μ„œ kubectl 둜 μˆ˜μ •ν•΄μ•Ό ν•©λ‹ˆλ‹€.

"λš±λš±ν•œ" λ§€λ‹ˆνŽ˜μŠ€νŠΈκ°€ λ§ˆμŠ€ν„°μ— 있고 여기에 μΆ”κ°€λœ 것 κ°™μŠ΅λ‹ˆλ‹€.
https://github.com/coreos/flannel/commit/c5d10c8b16d7c43bfa6e8408673c6d95404bd807#diff -7891b552b026259e99d479b5e30d31ca

Related issues/PRs:
https://github.com/coreos/flannel/issues/663
https://github.com/coreos/flannel/pull/989

My assumption is that this is cutting edge and you should use:

kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml

ν΄λŸ¬μŠ€ν„°λ₯Ό μ€‘λ‹¨ν•˜κ³  μ‹œλ„ν•΄ 보고 νš¨κ³Όκ°€ 있기λ₯Ό λ°”λžλ‹ˆλ‹€.
CNI λ¬Έμ„œμ—λŠ” 좩돌이 ν•„μš”ν•˜μ§€λ§Œ flannel-next κ°€ 릴리슀될 λ•Œ λ°œμƒν•΄μ•Ό ν•©λ‹ˆλ‹€.

OK, I'll try it after the weekend and post my results here. Thanks.

@lukaszgryglicki hi, did you manage to get this working with the new flannel manifest?

Not yet, I'll try it today.

OK, it finally worked:

root@devstats:/root# kubectl get nodes
NAME               STATUS    ROLES     AGE       VERSION
cncftest.io        Ready     <none>    39s       v1.11.1
devstats.cncf.io   Ready     <none>    46s       v1.11.1
devstats.team.io   Ready     master    12m       v1.11.1

The "fat" manifest from the flannel master branch did the trick.
Thanks, this can be closed.

Hi everyone, I'm in the same situation.
The worker node is in Ready state, but flannel on arm64 keeps crashing with the following error:
1 main.go:232] Failed to create SubnetManager: error retrieving pod spec for 'kube-system/kube-flannel-ds-arm64-m5jfd': Get https://10.96.0.1:443/api/v1/namespaces/kube-system/pods/kube-flannel-ds-arm64-m5jfd: dial tcp 10.96.0.1:443: i/o timeout
@lukasredynk did it work for you?

Any thoughts?

The error looks different, but did you use the fat manifest: https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml ?
It contains manifests for multiple architectures.

Yes, I did:
(screenshot)

Now the problem is the flannel container not reaching the API server on arm :(

It works for amd64 and arm64.
Unfortunately I can't help with arm (32-bit); I have no arm machines available.

I'm on arm64 though, but thanks. I'll keep investigating...

Oh, sorry then, I thought you were on arm.
Anyway, I'm quite new to this, so you'll have to wait until others can help.
Please paste the output of kubectl describe pods --all-namespaces and possibly the output of the other commands posted in this thread; that may help somebody track down the actual problem.

@lukaszgryglicki thanks
This is the output of describe pods: https://pastebin.com/kBVPYsMd

@lukaszgryglicki
Glad it worked out in the end.
Since I don't know when 0.11.0 will be released, I will document the use of the fat manifest for flannel in the docs.

@Leen15

μ‹€νŒ¨ν•œ ν¬λ“œμ—μ„œ κ΄€λ ¨:

  Warning  FailedCreatePodSandBox  3m (x5327 over 7h)  kubelet, nanopi-neo-plus2  (combined from similar events): Failed create pod sandbox: rpc error: code = Unknown desc = failed to set up sandbox container "9ddb551d520a757f4f8ff81d1dbfde50a98a5ec65385673a5a49a79e23a3243b" network for pod "arm-test-7894bfffd-njdcc": NetworkPlugin cni failed to set up pod "arm-test-7894bfffd-njdcc_default" network: open /run/flannel/subnet.env: no such file or directory

ν”Œλž€λ„¬μ— ν•„μš”ν•œ --pod-network-cidr=... λ₯Ό μΆ”κ°€ν•˜μ‹œκ² μŠ΅λ‹ˆκΉŒ?

λ˜ν•œ 이 κ°€μ΄λ“œλ₯Ό μ‹œλ„ν•˜μ‹­μ‹œμ˜€:
https://github.com/kubernetes/kubernetes/issues/36575#issuecomment -264622923
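
A minimal sketch of that suggestion, assuming the cluster can be torn down and recreated (flannel relies on the node pod CIDRs that kubeadm only assigns when --pod-network-cidr is given, and /run/flannel/subnet.env is written once flannel runs):

kubeadm reset
kubeadm init --pod-network-cidr=10.244.0.0/16
kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml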

@neolit123 yes, I found the issue: flannel was not creating the virtual network interfaces (cni and flannel0).
I don't know why, and after several hours I wasn't able to solve it.
I gave up and switched to swarm.

Understood. Closing the issue in that case.
Thanks.

I ran into the same problem and found that, because of the GFW in China, my node could not pull the required images; after pulling the images manually it recovered and works normally.

Running this command fixed my problem:

  1. kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml

This creates a file named 10-flannel.conflist in the /etc/cni/net.d directory. I think Kubernetes needs the network that this package sets up.
My cluster is now in the following state:

이름 μƒνƒœ μ—­ν•  λ‚˜μ΄ 버전
k8s-master λ ˆλ”” λ§ˆμŠ€ν„° 3h37m v1.14.1
node001 μ€€λΉ„3h6m v1.14.1
node02 μ€€λΉ„167m v1.14.1
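
For reference, a quick way to double-check the same thing on any node after applying the manifest (the grep pattern is only an example of what to look for in the kubelet log):

# the flannel CNI config should now exist on the node
ls /etc/cni/net.d/            # expect 10-flannel.conflist

# and the kubelet should stop reporting "cni config uninitialized"
journalctl -u kubelet --no-pager | grep -i 'networkready' | tail -n 5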

μ•ˆλ…• λͺ¨λ‘,

I have 1 master and 2 nodes. The second node is in NotReady state.

root@kube1:~# kubectl get nodes
NAME         STATUS     ROLES    AGE     VERSION
dockerlab1   Ready      <none>   3h57m   v1.14.3
kube1        Ready      master   4h12m   v1.14.3
labserver1   NotReady   <none>   22m     v1.14.3


root@kube1:~# kubectl get pods --all-namespaces
NAMESPACE     NAME                            READY   STATUS              RESTARTS   AGE
kube-system   coredns-fb8b8dccf-72llr         1/1     Running             0          4h13m
kube-system   coredns-fb8b8dccf-n9v82         1/1     Running             0          4h13m
kube-system   etcd-kube1                      1/1     Running             0          4h12m
kube-system   kube-apiserver-kube1            1/1     Running             0          4h12m
kube-system   kube-controller-manager-kube1   1/1     Running             0          4h13m
kube-system   kube-flannel-ds-amd64-6q6sz     0/1     Init:0/1            0          24m
kube-system   kube-flannel-ds-amd64-rshnj     1/1     Running             0          3h59m
kube-system   kube-flannel-ds-amd64-xsj72     1/1     Running             0          4h1m
kube-system   kube-proxy-7m8jg                1/1     Running             0          3h59m
kube-system   kube-proxy-m7gdc                0/1     ContainerCreating   0          24m
kube-system   kube-proxy-xgq6p                1/1     Running             0          4h13m
kube-system   kube-scheduler-kube1            1/1     Running             0          4h13m

root@kube1 :~# kubectl은 labserver1 λ…Έλ“œλ₯Ό μ„€λͺ…ν•©λ‹ˆλ‹€.
이름: labserver1
μ—­ν• :
λ ˆμ΄λΈ”: beta.kubernetes.io/arch=amd64
beta.kubernetes.io/os=linux
kubernetes.io/arch=amd64
kubernetes.io/hostname=labserver1
kubernetes.io/os=λ¦¬λˆ…μŠ€
주석: kubeadm.alpha.kubernetes.io/cri-socket: /var/run/dockershim.sock
node.alpha.kubernetes.io/ttl: 0
volume.kubernetes.io/controller-managed-attach-detach: true
CreationTimestamp: 2019λ…„ 6μ›” 9일 μΌμš”μΌ 21:03:57 +0800
Taints : node.kubernetes.io/not- μ€€λΉ„ : NOEXECUTE
node.kubernetes.io/not- μ€€λΉ„ : NoSchedule
μ˜ˆμ•½ λΆˆκ°€: 거짓
μ •ν™©:
μœ ν˜• μƒνƒœ LastHeartbeatTime LastTransitionTime 이유 λ©”μ‹œμ§€
---- ------ ------ ------------------ ------ -------
MemoryPressure False Sun, 09 June 2019 21:28:31 +0800 Sun, 09 June 2019 21:03:57 +0800 KubeletHasSufficientMemory kubelet에 μ‚¬μš© κ°€λŠ₯ν•œ λ©”λͺ¨λ¦¬κ°€ μΆ©λΆ„ν•©λ‹ˆλ‹€.
DiskPressure False 2019λ…„ 6μ›” 9일 μΌμš”μΌ 21:28:31 +0800 2019λ…„ 6μ›” 9일 μΌμš”μΌ 21:03:57 +0800 KubeletHasNoDiskPressure kubelet에 λ””μŠ€ν¬ μ••λ ₯이 μ—†μŠ΅λ‹ˆλ‹€.
PIDPressure False 2019λ…„ 6μ›” 9일 μΌμš”μΌ 21:28:31 +0800 2019λ…„ 6μ›” 9일 μΌμš”μΌ 21:03:57 +0800 KubeletHasSufficientPID kubelet에 μ‚¬μš© κ°€λŠ₯ν•œ PIDκ°€ 좩뢄함
Ready False 2019λ…„ 6μ›” 9일 μΌμš”μΌ 21:28:31 +0800 2019λ…„ 6μ›” 9일 μΌμš”μΌ 21:03:57 +0800 KubeletNotReady λŸ°νƒ€μž„ λ„€νŠΈμ›Œν¬κ°€ μ€€λΉ„λ˜μ§€ μ•ŠμŒ: NetworkReady=false 이유:NetworkPluginNotReady λ©”μ‹œμ§€:docker : λ„€νŠΈμ›Œν¬ ν”ŒλŸ¬κ·ΈμΈμ΄ μ€€λΉ„λ˜μ§€ μ•ŠμŒ: cni config μ΄ˆκΈ°ν™”λ˜μ§€ μ•Šμ€
ꡬ애:
λ‚΄λΆ€IP: 172.31.8.125
호슀트 이름: labserver1
μš©λŸ‰:
CPU: 1
μž„μ‹œ μ €μž₯ μž₯치: 18108284Ki
κ±°λŒ€ νŽ˜μ΄μ§€-1Gi: 0
κ±°λŒ€ νŽ˜μ΄μ§€-2Mi: 0
λ©”λͺ¨λ¦¬: 1122528Ki
ν¬λ“œ: 110
ν• λ‹Ή κ°€λŠ₯:
CPU: 1
μž„μ‹œ μ €μž₯ μž₯치: 16688594507
κ±°λŒ€ νŽ˜μ΄μ§€-1Gi: 0
κ±°λŒ€ νŽ˜μ΄μ§€-2Mi: 0
λ©”λͺ¨λ¦¬: 1020128Ki
ν¬λ“œ: 110
μ‹œμŠ€ν…œ 정보:
λ¨Έμ‹  ID: 292dc4560f9309ccdd72b6935c80e8ec
μ‹œμŠ€ν…œ UUID: DE4707DF-5516-784A-9B41-588FCDE49369
λΆ€νŒ… ID: 828d124c-b687-43f6-bffa-6a3e1e6e17e6
컀널 버전: 4.4.0-142-일반
OS 이미지: μš°λΆ„νˆ¬ 16.04.6 LTS
운영 체제: λ¦¬λˆ…μŠ€
μ•„ν‚€ν…μ²˜: amd64
μ»¨ν…Œμ΄λ„ˆ λŸ°νƒ€μž„ 버전: docker://18.9.6
Kubelet 버전: v1.14.3
Kube-ν”„λ‘μ‹œ 버전: v1.14.3
PodCIDR: 10.244.3.0/24
μ’…λ£Œλ˜μ§€ μ•Šμ€ ν¬λ“œ: (총 2개)
λ„€μž„μŠ€νŽ˜μ΄μŠ€ 이름 CPU μš”μ²­ CPU μ œν•œ λ©”λͺ¨λ¦¬ μš”μ²­ λ©”λͺ¨λ¦¬ μ œν•œ AGE
--------- ---- ------------ ---------- ------------------ ---------- ---
kube-system kube-flannel-ds-amd64-6q6sz 100m(10%) 100m(10%) 50Mi(5%) 50Mi(5%) 25m
kube-system kube-proxy-m7gdc 0(0%) 0(0%) 0(0%) 0(0%) 25m
ν• λ‹Ήλœ λ¦¬μ†ŒμŠ€:
(총 μ œν•œμ€ 100%λ₯Ό μ΄ˆκ³Όν•  수 μžˆμŠ΅λ‹ˆλ‹€. 즉, 초과 컀밋될 수 μžˆμŠ΅λ‹ˆλ‹€.)
λ¦¬μ†ŒμŠ€ μš”μ²­ μ œν•œ
-------- -------- ------
CPU 100m(10%) 100m(10%)
λ©”λͺ¨λ¦¬ 50Mi(5%) 50Mi(5%)
μž„μ‹œ μŠ€ν† λ¦¬μ§€ 0(0%) 0(0%)
이벀트:
λ©”μ‹œμ§€μ—μ„œ 이유 λ‚˜μ΄ μž…λ ₯
---- ------ ---- ---- -------
정상 μ‹œμž‘ 45m kubelet, labserver1 kubelet μ‹œμž‘.
일반 NodeHasSufficientMemory 45m kubelet, labserver1 λ…Έλ“œ labserver1 μƒνƒœλŠ” ν˜„μž¬ NodeHasSufficientMemoryμž…λ‹ˆλ‹€.
정상 NodeHasNoDiskPressure 45m kubelet, labserver1 λ…Έλ“œ labserver1 μƒνƒœλŠ” ν˜„μž¬ NodeHasNoDiskPressureμž…λ‹ˆλ‹€.
일반 NodeHasSufficientPID 45m kubelet, labserver1 λ…Έλ“œ labserver1 μƒνƒœλŠ” ν˜„μž¬ NodeHasSufficientPIDμž…λ‹ˆλ‹€.
일반 NodeAllocatableEnforced 45m kubelet, labserver1 ν¬λ“œμ—μ„œ λ…Έλ“œ ν• λ‹Ή κ°€λŠ₯ μ œν•œ μ—…λ°μ΄νŠΈλ¨
일반 μ‹œμž‘ 25m kubelet, labserver1 kubelet μ‹œμž‘.
일반 NodeAllocatableEnforced 25m kubelet, labserver1 ν¬λ“œ 전체에 걸쳐 λ…Έλ“œ ν• λ‹Ή κ°€λŠ₯ μ œν•œ μ—…λ°μ΄νŠΈλ¨
일반 NodeHasSufficientMemory 25m(25m 초과 x2) kubelet, labserver1 λ…Έλ“œ labserver1 μƒνƒœλŠ” ν˜„μž¬ NodeHasSufficientMemoryμž…λ‹ˆλ‹€.
일반 NodeHasSufficientPID 25m(25m 초과 x2) kubelet, labserver1 λ…Έλ“œ labserver1 μƒνƒœλŠ” ν˜„μž¬ NodeHasSufficientPIDμž…λ‹ˆλ‹€.
일반 NodeHasNoDiskPressure 25m(25m 초과 x2) kubelet, labserver1 λ…Έλ“œ labserver1 μƒνƒœλŠ” ν˜„μž¬ NodeHasNoDiskPressureμž…λ‹ˆλ‹€.
일반 μ‹œμž‘ 13m kubelet, labserver1 kubelet μ‹œμž‘.
일반 NodeHasSufficientMemory 13m kubelet, labserver1 λ…Έλ“œ labserver1 μƒνƒœλŠ” ν˜„μž¬ NodeHasSufficientMemoryμž…λ‹ˆλ‹€.
정상 NodeHasNoDiskPressure 13m kubelet, labserver1 λ…Έλ“œ labserver1 μƒνƒœλŠ” ν˜„μž¬ NodeHasNoDiskPressureμž…λ‹ˆλ‹€.
일반 NodeHasSufficientPID 13m kubelet, labserver1 λ…Έλ“œ labserver1 μƒνƒœλŠ” ν˜„μž¬ NodeHasSufficientPIDμž…λ‹ˆλ‹€.
일반 NodeAllocatableEnforced 13m kubelet, labserver1 ν¬λ“œ 전체에 걸쳐 λ…Έλ“œ ν• λ‹Ή κ°€λŠ₯ μ œν•œ μ—…λ°μ΄νŠΈλ¨
root@kube1 :~#

λ„μ™€μ£Όμ„Έμš”

μ•ˆλ…• λͺ¨λ‘,

1개의 λ§ˆμŠ€ν„°μ™€ 2개의 λ…Έλ“œκ°€ μžˆμŠ΅λ‹ˆλ‹€. 두 번째 λ…Έλ“œλŠ” μ€€λΉ„λ˜μ§€ μ•Šμ€ μƒνƒœμž…λ‹ˆλ‹€.

root@kube1 :~# kubectl λ…Έλ“œ κ°€μ Έμ˜€κΈ°
이름 μƒνƒœ μ—­ν•  λ‚˜μ΄ 버전
dockerlab1 λ ˆλ”” 3h57m v1.14.3
kube1 λ ˆλ”” λ§ˆμŠ€ν„° 4h12m v1.14.3
labserver1 NotReady 22m v1.14.3

root@kube1 :~# kubectl get pods --all-namespaces

λ„€μž„μŠ€νŽ˜μ΄μŠ€ 이름 μ€€λΉ„ μƒνƒœ λ‹€μ‹œ μ‹œμž‘ λ‚˜μ΄
kube-system coredns-fb8b8dccf-72llr 1/1 μ‹€ν–‰ 0 4h13m
kube-system coredns-fb8b8dccf-n9v82 1/1 μ‹€ν–‰ 0 4h13m
kube-system etcd-kube1 1/1 μ‹€ν–‰ 0 4h12m
kube-system kube-apiserver-kube1 1/1 μ‹€ν–‰ 0 4h12m
kube-system kube-controller-manager-kube1 1/1 μ‹€ν–‰ 0 4h13m
kube-system kube-flannel-ds-amd64-6q6sz 0/1 μ΄ˆκΈ°ν™”:0/1 0 24m
kube-system kube-flannel-ds-amd64-rshnj 1/1 μ‹€ν–‰ 0 3h59m
kube-system kube-flannel-ds-amd64-xsj72 1/1 μ‹€ν–‰ 0 4h1m
kube-system kube-proxy-7m8jg 1/1 μ‹€ν–‰ 0 3h59m
kube-system kube-proxy-m7gdc 0/1 ContainerCreating 0 24m
kube-system kube-proxy-xgq6p 1/1 μ‹€ν–‰ 0 4h13m
kube-system kube-scheduler-kube1 1/1 μ‹€ν–‰ 0 4h13m
root@kube1 :~# kubectl은 labserver1 λ…Έλ“œλ₯Ό μ„€λͺ…ν•©λ‹ˆλ‹€.
이름: labserver1
μ—­ν• :
λ ˆμ΄λΈ”: beta.kubernetes.io/arch=amd64
beta.kubernetes.io/os=linux
kubernetes.io/arch=amd64
kubernetes.io/hostname=labserver1
kubernetes.io/os=λ¦¬λˆ…μŠ€
주석: kubeadm.alpha.kubernetes.io/cri-socket: /var/run/dockershim.sock
node.alpha.kubernetes.io/ttl: 0
volume.kubernetes.io/controller-managed-attach-detach: true
CreationTimestamp: 2019λ…„ 6μ›” 9일 μΌμš”μΌ 21:03:57 +0800
Taints : node.kubernetes.io/not- μ€€λΉ„ : NOEXECUTE
node.kubernetes.io/not- μ€€λΉ„ : NoSchedule
μ˜ˆμ•½ λΆˆκ°€: 거짓
μ •ν™©:
μœ ν˜• μƒνƒœ LastHeartbeatTime LastTransitionTime 이유 λ©”μ‹œμ§€

MemoryPressure False Sun, 09 June 2019 21:28:31 +0800 Sun, 09 June 2019 21:03:57 +0800 KubeletHasSufficientMemory kubelet에 μ‚¬μš© κ°€λŠ₯ν•œ λ©”λͺ¨λ¦¬κ°€ μΆ©λΆ„ν•©λ‹ˆλ‹€.
DiskPressure False 2019λ…„ 6μ›” 9일 μΌμš”μΌ 21:28:31 +0800 2019λ…„ 6μ›” 9일 μΌμš”μΌ 21:03:57 +0800 KubeletHasNoDiskPressure kubelet에 λ””μŠ€ν¬ μ••λ ₯이 μ—†μŠ΅λ‹ˆλ‹€.
PIDPressure False 2019λ…„ 6μ›” 9일 μΌμš”μΌ 21:28:31 +0800 2019λ…„ 6μ›” 9일 μΌμš”μΌ 21:03:57 +0800 KubeletHasSufficientPID kubelet에 μ‚¬μš© κ°€λŠ₯ν•œ PIDκ°€ 좩뢄함
Ready False 2019λ…„ 6μ›” 9일 μΌμš”μΌ 21:28:31 +0800 2019λ…„ 6μ›” 9일 μΌμš”μΌ 21:03:57 +0800 KubeletNotReady λŸ°νƒ€μž„ λ„€νŠΈμ›Œν¬κ°€ μ€€λΉ„λ˜μ§€ μ•ŠμŒ: NetworkReady=false 이유:NetworkPluginNotReady λ©”μ‹œμ§€:docker : λ„€νŠΈμ›Œν¬ ν”ŒλŸ¬κ·ΈμΈμ΄ μ€€λΉ„λ˜μ§€ μ•ŠμŒ: cni config μ΄ˆκΈ°ν™”λ˜μ§€ μ•Šμ€
ꡬ애:
λ‚΄λΆ€IP: 172.31.8.125
호슀트 이름: labserver1
μš©λŸ‰:
CPU: 1
μž„μ‹œ μ €μž₯ μž₯치: 18108284Ki
κ±°λŒ€ νŽ˜μ΄μ§€-1Gi: 0
κ±°λŒ€ νŽ˜μ΄μ§€-2Mi: 0
λ©”λͺ¨λ¦¬: 1122528Ki
ν¬λ“œ: 110
ν• λ‹Ή κ°€λŠ₯:
CPU: 1
μž„μ‹œ μ €μž₯ μž₯치: 16688594507
κ±°λŒ€ νŽ˜μ΄μ§€-1Gi: 0
κ±°λŒ€ νŽ˜μ΄μ§€-2Mi: 0
λ©”λͺ¨λ¦¬: 1020128Ki
ν¬λ“œ: 110
μ‹œμŠ€ν…œ 정보:
λ¨Έμ‹  ID: 292dc4560f9309ccdd72b6935c80e8ec
μ‹œμŠ€ν…œ UUID: DE4707DF-5516-784A-9B41-588FCDE49369
λΆ€νŒ… ID: 828d124c-b687-43f6-bffa-6a3e1e6e17e6
컀널 버전: 4.4.0-142-일반
OS 이미지: μš°λΆ„νˆ¬ 16.04.6 LTS
운영 체제: λ¦¬λˆ…μŠ€
μ•„ν‚€ν…μ²˜: amd64
μ»¨ν…Œμ΄λ„ˆ λŸ°νƒ€μž„ 버전: docker://18.9.6
Kubelet 버전: v1.14.3
Kube-ν”„λ‘μ‹œ 버전: v1.14.3
PodCIDR: 10.244.3.0/24
μ’…λ£Œλ˜μ§€ μ•Šμ€ ν¬λ“œ: (총 2개)
λ„€μž„μŠ€νŽ˜μ΄μŠ€ 이름 CPU μš”μ²­ CPU μ œν•œ λ©”λͺ¨λ¦¬ μš”μ²­ λ©”λͺ¨λ¦¬ μ œν•œ AGE

kube-system kube-flannel-ds-amd64-6q6sz 100m(10%) 100m(10%) 50Mi(5%) 50Mi(5%) 25m
kube-system kube-proxy-m7gdc 0(0%) 0(0%) 0(0%) 0(0%) 25m
ν• λ‹Ήλœ λ¦¬μ†ŒμŠ€:
(총 μ œν•œμ€ 100%λ₯Ό μ΄ˆκ³Όν•  수 μžˆμŠ΅λ‹ˆλ‹€. 즉, 초과 컀밋될 수 μžˆμŠ΅λ‹ˆλ‹€.)
λ¦¬μ†ŒμŠ€ μš”μ²­ μ œν•œ

CPU 100m(10%) 100m(10%)
λ©”λͺ¨λ¦¬ 50Mi(5%) 50Mi(5%)
μž„μ‹œ μŠ€ν† λ¦¬μ§€ 0(0%) 0(0%)
이벀트:
λ©”μ‹œμ§€μ—μ„œ 이유 λ‚˜μ΄ μž…λ ₯

정상 μ‹œμž‘ 45m kubelet, labserver1 kubelet μ‹œμž‘.
일반 NodeHasSufficientMemory 45m kubelet, labserver1 λ…Έλ“œ labserver1 μƒνƒœλŠ” ν˜„μž¬ NodeHasSufficientMemoryμž…λ‹ˆλ‹€.
정상 NodeHasNoDiskPressure 45m kubelet, labserver1 λ…Έλ“œ labserver1 μƒνƒœλŠ” ν˜„μž¬ NodeHasNoDiskPressureμž…λ‹ˆλ‹€.
일반 NodeHasSufficientPID 45m kubelet, labserver1 λ…Έλ“œ labserver1 μƒνƒœλŠ” ν˜„μž¬ NodeHasSufficientPIDμž…λ‹ˆλ‹€.
일반 NodeAllocatableEnforced 45m kubelet, labserver1 ν¬λ“œμ—μ„œ λ…Έλ“œ ν• λ‹Ή κ°€λŠ₯ μ œν•œ μ—…λ°μ΄νŠΈλ¨
일반 μ‹œμž‘ 25m kubelet, labserver1 kubelet μ‹œμž‘.
일반 NodeAllocatableEnforced 25m kubelet, labserver1 ν¬λ“œ 전체에 걸쳐 λ…Έλ“œ ν• λ‹Ή κ°€λŠ₯ μ œν•œ μ—…λ°μ΄νŠΈλ¨
일반 NodeHasSufficientMemory 25m(25m 초과 x2) kubelet, labserver1 λ…Έλ“œ labserver1 μƒνƒœλŠ” ν˜„μž¬ NodeHasSufficientMemoryμž…λ‹ˆλ‹€.
일반 NodeHasSufficientPID 25m(25m 초과 x2) kubelet, labserver1 λ…Έλ“œ labserver1 μƒνƒœλŠ” ν˜„μž¬ NodeHasSufficientPIDμž…λ‹ˆλ‹€.
일반 NodeHasNoDiskPressure 25m(25m 초과 x2) kubelet, labserver1 λ…Έλ“œ labserver1 μƒνƒœλŠ” ν˜„μž¬ NodeHasNoDiskPressureμž…λ‹ˆλ‹€.
일반 μ‹œμž‘ 13m kubelet, labserver1 kubelet μ‹œμž‘.
일반 NodeHasSufficientMemory 13m kubelet, labserver1 λ…Έλ“œ labserver1 μƒνƒœλŠ” ν˜„μž¬ NodeHasSufficientMemoryμž…λ‹ˆλ‹€.
정상 NodeHasNoDiskPressure 13m kubelet, labserver1 λ…Έλ“œ labserver1 μƒνƒœλŠ” ν˜„μž¬ NodeHasNoDiskPressureμž…λ‹ˆλ‹€.
일반 NodeHasSufficientPID 13m kubelet, labserver1 λ…Έλ“œ labserver1 μƒνƒœλŠ” ν˜„μž¬ NodeHasSufficientPIDμž…λ‹ˆλ‹€.
일반 NodeAllocatableEnforced 13m kubelet, labserver1 ν¬λ“œ 전체에 걸쳐 λ…Έλ“œ ν• λ‹Ή κ°€λŠ₯ μ œν•œ μ—…λ°μ΄νŠΈλ¨
root@kube1 :~#

λ„μ™€μ£Όμ„Έμš”

Hi Athir,

Please check the logs in /var/log/messages on your master node; you can find the actual error there. But here are some general tips.

i. Always focus on the master node first.
ii. Install the docker engine and pull all the images Kubernetes needs. Once everything is up and running, join the nodes to the master; that will avoid most problems. I have seen articles on the internet that pull the images only after joining the slave node, and that practice causes problems.
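
If it helps, one way to pre-pull the control-plane images on the master before joining any nodes is the kubeadm subcommand below (a sketch; available in kubeadm v1.11 and later):

    # List the images kubeadm needs for this version, then pull them up front
    kubeadm config images list
    kubeadm config images pull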

Hi saddique164, thanks for the suggestion. As you said, I deployed a new slave node yesterday and it was able to join the master without any problem.

Sorry, I can't help; I don't have any ARM64 nodes anymore. I now have a 4-node AMD64 bare-metal cluster.

The cniVersion key was missing from the configuration in the /etc/cni/net.d/10-flannel.conflist file.

Adding "cniVersion": "0.2.0" resolved the issue.


1.15μ—μ„œ V1.16.0으둜 μ—…λ°μ΄νŠΈν–ˆμ„ λ•Œ λ¬Έμ œκ°€ λ°œμƒν–ˆμŠ΅λ‹ˆλ‹€.


ν”Œλž€λ„¬μ€ 그닀지 적극적으둜 μœ μ§€λ˜μ§€ μ•ŠμŠ΅λ‹ˆλ‹€. μ €λŠ” μ˜₯μ–‘λͺ©μ΄λ‚˜ μœ„λΈŒλ„·μ„ μΆ”μ²œν•©λ‹ˆλ‹€.

The flannel repository needed a fix.
The kubeadm guide for installing flannel was just updated; see:
https://github.com/kubernetes/website/pull/16575/files

μ—¬κΈ°μ—μ„œ 같은 λ¬Έμ œμ— μ§λ©΄ν–ˆμŠ΅λ‹ˆλ‹€.
kubectl 적용 -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml.

λ‚˜λ₯Ό μœ„ν•΄ μΌν–ˆλ‹€.

도컀: λ„€νŠΈμ›Œν¬ ν”ŒλŸ¬κ·ΈμΈμ΄ μ€€λΉ„λ˜μ§€ μ•Šμ•˜μŠ΅λ‹ˆλ‹€: cni ꡬ성이 μ΄ˆκΈ°ν™”λ˜μ§€ μ•Šμ•˜μŠ΅λ‹ˆλ‹€.

notready λ…Έλ“œμ— dockerλ₯Ό λ‹€μ‹œ μ„€μΉ˜ν•©λ‹ˆλ‹€.
λ‚˜λ₯Ό μœ„ν•΄ μΌν–ˆλ‹€.
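
On Ubuntu, a minimal sketch of that reinstall (assuming Docker was installed from the distro's docker.io package; adjust if you use docker-ce from Docker's repository):

    # On the NotReady node: reinstall the container runtime, then restart the kubelet
    apt-get remove -y docker.io
    apt-get install -y docker.io
    systemctl restart docker
    systemctl restart kubelet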


Just did that (applied the flannel manifest as above) and it worked!

I had a similar case where the network plugin was set up before joining a worker, and /etc/cni/net.d was missing on that worker.
After joining the worker node, I re-ran the configuration using:
kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
As a result, the configuration in /etc/cni/net.d was created successfully and the node showed up as Ready.

Hope this helps anyone with the same problem.
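
A sketch of that sequence for anyone who joined a worker before the CNI config existed:

    # From the master, after the worker has joined, re-apply the flannel manifest
    kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
    # Then watch the new node flip from NotReady to Ready
    kubectl get nodes -w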


λ§ˆμŠ€ν„° λ¨Έμ‹ μ—μ„œ ν•΄λ‹Ή λͺ…령을 μ‹€ν–‰ν–ˆμœΌλ©° 이제 λͺ¨λ“  것이 μ€€λΉ„ μƒνƒœμ— μžˆμŠ΅λ‹ˆλ‹€. @saddique164 κ°μ‚¬ν•©λ‹ˆλ‹€.

The quickest way is to add Flannel to Kubernetes on the AMD64 architecture:

1. Fetch kube-flannel.yaml

$ curl https://rawgit.com/coreos/flannel/master/Documentation/kube-flannel.yml \
> kube-flannel.yaml

2. Apply the Flannel network

$ kubectl apply -f kube-flannel.yaml
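
After applying, one quick way to confirm the flannel DaemonSet rolled out on every node (the DaemonSet name matches the pods shown earlier in this thread; the app=flannel label is assumed from the flannel manifest):

$ kubectl -n kube-system rollout status daemonset/kube-flannel-ds-amd64
$ kubectl -n kube-system get pods -l app=flannel -o wide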

I'm using Kubernetes version 1.18.

I used this: kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml

No files are created under /etc/cni/net.d.
The master node is NotReady and the slaves are in Ready state.


  1. λ§ˆμŠ€ν„°μ—μ„œ kubectl λͺ…령을 μ‹€ν–‰ν•  수 μžˆμŠ΅λ‹ˆκΉŒ?
  2. kubelet이 λ§ˆμŠ€ν„°μ—μ„œ μ‹€ν–‰ 쀑인지 확인할 수 μžˆμŠ΅λ‹ˆκΉŒ? λ˜λŠ” λ‹€μŒμ„ μ‹€ν–‰ν•˜μ‹­μ‹œμ˜€: systemctl restart kubelet.
  3. kubelet이 μž¬μ‹œμž‘ μ€‘μ΄κ±°λ‚˜ μžλ™ μž¬μ‹œμž‘ 쀑이면 journal -u kubelet을 μ‹€ν–‰ν•˜κ³  둜그λ₯Ό ν™•μΈν•˜μ‹­μ‹œμ˜€. 였λ₯˜λ₯Ό 찾을 수 μžˆμŠ΅λ‹ˆλ‹€.

μ°Έκ³ : 이것은 kubelet 문제인 것 κ°™μŠ΅λ‹ˆλ‹€.
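
Putting those checks together, a minimal sketch of what to run on the master:

    systemctl status kubelet     # is the kubelet active or crash-looping?
    systemctl restart kubelet    # restart it if it is down
    journalctl -u kubelet -f     # follow the logs and look for the actual error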

  1. Yes, I can run kubectl commands.
  2. The kubelet starts and then fails.
  3. This is what I see in the errors from journalctl -u kubelet:
Jul 01 11:58:36 master kubelet[17918]: F0701 11:58:36.613864   17918 kubelet.go:1383] Failed to start ContainerManager failed to initialize top level QOS containers: failed to update top level Burstable QOS cgroup : failed to set supported cgroup subsystems for cgroup [kubepods burstable]: failed to find subsystem mount for required subsystem: pids
Jul 01 11:58:36 master systemd[1]: kubelet.service: main process exited, code=exited, status=255/n/a
Jul 01 11:58:36 master systemd[1]: Unit kubelet.service entered failed state.
Jul 01 11:58:36 master systemd[1]: kubelet.service failed.

λ§ˆμŠ€ν„°μ—μ„œ 이것을 μ‹œλ„ν•˜μ‹­μ‹œμ˜€.

sed -i '/cgroup-driver=systemd/cgroup-driver=cgroupfs/g' /usr/lib/systemd/system/kubelet.service.d/10-kubeadm.conf
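
After editing the kubelet drop-in, reload systemd and restart the kubelet so the change takes effect (standard systemd steps):

    systemctl daemon-reload
    systemctl restart kubelet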

It started and then failed again.

Jul 02 10:37:11 master kubelet[15525]: I0702 10:37:11.692341   15525 remote_runtime.go:59] parsed scheme: ""
Jul 02 10:37:11 master kubelet[15525]: I0702 10:37:11.692358   15525 remote_runtime.go:59] scheme "" not registered, fallback to default scheme
Jul 02 10:37:11 master kubelet[15525]: I0702 10:37:11.692381   15525 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{/var/run/dockershim.sock  <nil> 0 <nil>}] <nil> <nil>}
Jul 02 10:37:11 master kubelet[15525]: I0702 10:37:11.692389   15525 clientconn.go:933] ClientConn switching balancer to "pick_first"
Jul 02 10:37:11 master kubelet[15525]: I0702 10:37:11.692420   15525 remote_image.go:50] parsed scheme: ""
Jul 02 10:37:11 master kubelet[15525]: I0702 10:37:11.692427   15525 remote_image.go:50] scheme "" not registered, fallback to default scheme
Jul 02 10:37:11 master kubelet[15525]: I0702 10:37:11.692435   15525 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{/var/run/dockershim.sock  <nil> 0 <nil>}] <nil> <nil>}
Jul 02 10:37:11 master kubelet[15525]: I0702 10:37:11.692440   15525 clientconn.go:933] ClientConn switching balancer to "pick_first"
Jul 02 10:37:11 master kubelet[15525]: I0702 10:37:11.692464   15525 kubelet.go:292] Adding pod path: /etc/kubernetes/manifests
Jul 02 10:37:11 master kubelet[15525]: I0702 10:37:11.692480   15525 kubelet.go:317] Watching apiserver
Jul 02 10:37:16 master kubelet[15525]: W0702 10:37:16.680313   15525 cni.go:237] Unable to update cni config: no networks found in /etc/cni/net.d

λ„€νŠΈμ›Œν¬λ₯Ό 찾을 수 μ—†λ‹€λŠ” λ©”μ‹œμ§€κ°€ ν‘œμ‹œλ˜λ©΄ 이 λͺ…령을 μ‹€ν–‰ν•˜κ³  κ²°κ³Όλ₯Ό κ³΅μœ ν•˜μ‹­μ‹œμ˜€.

Kubectl ν¬λ“œ κ°€μ Έμ˜€κΈ° -n kube-system

NAME                          READY   STATUS        RESTARTS   AGE
coredns-75f8564758-92ws7      1/1     Running       0          25h
coredns-75f8564758-z9xn8      1/1     Running       0          25h
kube-flannel-ds-amd64-2j4mw   1/1     Running       0          25h
kube-flannel-ds-amd64-5tmhp   0/1     Pending       0          25h
kube-flannel-ds-amd64-rqwmz   1/1     Running       0          25h
kube-proxy-6v24w              1/1     Running       0          25h
kube-proxy-jgdw7              0/1     Pending       0          25h
kube-proxy-qppnk              1/1     Running       0          25h

Run this:
kubectl logs kube-flannel-ds-amd64-5tmhp -n kube-system

If nothing comes back, run:
kubectl describe pod kube-flannel-ds-amd64-5tmhp -n kube-system

Error from server: Get https://10.75.214.124:10250/containerLogs/kube-system/kube-flannel-ds-amd64-5tmhp/kube-flannel: dial tcp 10.75.214.124:10250: connect: connection refused

μ–Όλ§ˆλ‚˜ λ§Žμ€ λ…Έλ“œκ°€ μ‹€ν–‰λ˜κ³  μžˆμŠ΅λ‹ˆκΉŒ? ν΄λŸ¬μŠ€ν„°μ—μ„œ? ν•œ λ…Έλ“œκ°€ 이 문제λ₯Ό ν•˜κ³  μžˆμŠ΅λ‹ˆλ‹€. 이것을 데λͺ¬μ…‹μ΄λΌκ³  ν•©λ‹ˆλ‹€. 그듀은 λͺ¨λ“  λ…Έλ“œμ—μ„œ μ‹€ν–‰λ©λ‹ˆλ‹€. μ œμ–΄ κ³„νšμ—μ„œ μš”μ²­μ„ μˆ˜λ½ν•˜μ§€ μ•ŠμŠ΅λ‹ˆλ‹€. λ”°λΌμ„œ λ‹€μŒ 단계λ₯Ό λ”°λ₯΄λ„둝 μ œμ•ˆν•©λ‹ˆλ‹€.

  1. λ¨Όμ € μž‘μ—…μž λ…Έλ“œλ₯Ό ν•˜λ‚˜μ”© λ“œλ ˆμΈν•©λ‹ˆλ‹€.
    Kubectl 배수 λ…Έλ“œ 이름
  2. 그런 λ‹€μŒ μ‚­μ œν•˜μ‹­μ‹œμ˜€.
    kubectl λ…Έλ“œ λ…Έλ“œ 이름 μ‚­μ œ
  3. λ§ˆμŠ€ν„° λ…Έλ“œκ°€ 쀀비될 λ•ŒκΉŒμ§€ κΈ°λ‹€λ¦½λ‹ˆλ‹€. μ˜€μ§€ μ•ŠλŠ”λ‹€λ©΄. 이 λͺ…령을 μ‹€ν–‰ν•˜μ‹­μ‹œμ˜€.
    kubeadm μž¬μ„€μ •
  4. λ‹€μ‹œ kubeadm을 μ΄ˆκΈ°ν™”
    kubeadm μ΄ˆκΈ°ν™”
  5. 이 λͺ…령을 μ‹€ν–‰
    kubectl 적용 -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
  6. λ§ˆμŠ€ν„°μ—μ„œ λͺ…령을 λ°›κ³  μž‘μ—…μž λ…Έλ“œμ—μ„œ μ‹€ν–‰ν•˜μ—¬ μ—°κ²°ν•˜μ‹­μ‹œμ˜€.

이 ν”„λ‘œμ„ΈμŠ€κ°€ μž‘λ™ν•©λ‹ˆλ‹€.
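
Condensed into commands, the sequence above looks roughly like this (node names are placeholders; 10.244.0.0/16 is the flannel pod CIDR used earlier in this thread):

    # On the master: remove the misbehaving workers first
    kubectl drain <node-name> --ignore-daemonsets
    kubectl delete node <node-name>
    # If the master never becomes Ready, rebuild the control plane
    kubeadm reset
    kubeadm init --pod-network-cidr=10.244.0.0/16
    kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
    # Then run the 'kubeadm join ...' command printed by 'kubeadm init' on each worker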

kubectl get nodes:

NAME     STATUS     ROLES    AGE   VERSION
master   NotReady   master   26h   v1.18.5
slave1   Ready      <none>   26h   v1.18.5
slave2   Ready      <none>   26h   v1.18.5

I tried the steps you mentioned.

This is what I get:

[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[kubelet-check] Initial timeout of 40s passed.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp [::1]:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp [::1]:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp [::1]:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp [::1]:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp [::1]:10248: connect: connection refused.

        Unfortunately, an error has occurred:
                timed out waiting for the condition

        This error is likely caused by:
                - The kubelet is not running
                - The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)

        If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
                - 'systemctl status kubelet'
                - 'journalctl -xeu kubelet'

        Additionally, a control plane component may have crashed or exited when started by the container runtime.
        To troubleshoot, list all containers using your preferred container runtimes CLI.

        Here is one example how you may list all Kubernetes containers running in docker:
                - 'docker ps -a | grep kube | grep -v pause'
                Once you have found the failing container, you can inspect its logs with:
                - 'docker logs CONTAINERID'

error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
To see the stack trace of this error execute with --v=5 or higher

Drain all the nodes except the master and focus on it. Once it is Ready, add the others.

λ“œλ ˆμ΄λ‹ λ…Έλ“œμ™€ kubeadm reset 및 initλŠ” 도움이 λ˜μ§€ μ•ŠμŠ΅λ‹ˆλ‹€. ν΄λŸ¬μŠ€ν„°λŠ” λ‚˜μ€‘μ— μ΄ˆκΈ°ν™”λ˜μ§€ μ•ŠμŠ΅λ‹ˆλ‹€.

My problem was that I updated the hostname after the cluster was created. It was as if the master didn't know it was the master.

I still run:

sudo hostname $(curl 169.254.169.254/latest/meta-data/hostname)

but now I run it before initializing the cluster.

이 νŽ˜μ΄μ§€κ°€ 도움이 λ˜μ—ˆλ‚˜μš”?
0 / 5 - 0 λ“±κΈ‰