Thanks for filing an issue! Before hitting the button, please answer these questions.
Is this a request for help? No
What keywords did you search in Kubernetes issues before filing this one? (If you found any duplicates, you should instead reply there.): PLEG NotReady kubelet
Is this a BUG REPORT or FEATURE REQUEST? Bug report
If this is a BUG REPORT, please fill in as much of the template below as you can; if you leave out information, we can't help you as well. If this is a FEATURE REQUEST, please describe the feature/behavior/change you want *in detail*. In either case, be prepared for follow-up questions and please respond in a timely manner. If we can't reproduce a bug or think a feature already exists, we may close your issue. If we're wrong, feel free to reopen it and explain why.
Kubernetes version (kubectl version): 1.6.2
Environment:
uname -a: 4.9.24-coreos
What happened?
I have a cluster of 3 workers. Two, and sometimes all three, nodes go NotReady.
Tailing journalctl -u kubelet I see the following messages:
May 05 13:59:56 ip-10-50-20-208.ec2.internal kubelet[2858]: I0505 13:59:56.872880 2858 kubelet_node_status.go:379] Recording NodeNotReady event message for node ip-10-50-20-208.ec2.internal
May 05 13:59:56 ip-10-50-20-208.ec2.internal kubelet[2858]: I0505 13:59:56.872908 2858 kubelet_node_status.go:682] Node became not ready: {Type:Ready Status:False LastHeartbeatTime:2017-05-05 13:59:56.872865742 +0000 UTC LastTransitionTime:2017-05-05 13:59:56.872865742 +0000 UTC Reason:KubeletNotReady Message:PLEG is not healthy: pleg was last seen active 3m7.629592089s ago; threshold is 3m0s}
May 05 14:07:57 ip-10-50-20-208.ec2.internal kubelet[2858]: I0505 14:07:57.598132 2858 kubelet_node_status.go:379] Recording NodeNotReady event message for node ip-10-50-20-208.ec2.internal
May 05 14:07:57 ip-10-50-20-208.ec2.internal kubelet[2858]: I0505 14:07:57.598162 2858 kubelet_node_status.go:682] Node became not ready: {Type:Ready Status:False LastHeartbeatTime:2017-05-05 14:07:57.598117026 +0000 UTC LastTransitionTime:2017-05-05 14:07:57.598117026 +0000 UTC Reason:KubeletNotReady Message:PLEG is not healthy: pleg was last seen active 3m7.346983738s ago; threshold is 3m0s}
May 05 14:17:58 ip-10-50-20-208.ec2.internal kubelet[2858]: I0505 14:17:58.536101 2858 kubelet_node_status.go:379] Recording NodeNotReady event message for node ip-10-50-20-208.ec2.internal
May 05 14:17:58 ip-10-50-20-208.ec2.internal kubelet[2858]: I0505 14:17:58.536134 2858 kubelet_node_status.go:682] Node became not ready: {Type:Ready Status:False LastHeartbeatTime:2017-05-05 14:17:58.536086605 +0000 UTC LastTransitionTime:2017-05-05 14:17:58.536086605 +0000 UTC Reason:KubeletNotReady Message:PLEG is not healthy: pleg was last seen active 3m7.275467289s ago; threshold is 3m0s}
May 05 14:29:59 ip-10-50-20-208.ec2.internal kubelet[2858]: I0505 14:29:59.648922 2858 kubelet_node_status.go:379] Recording NodeNotReady event message for node ip-10-50-20-208.ec2.internal
May 05 14:29:59 ip-10-50-20-208.ec2.internal kubelet[2858]: I0505 14:29:59.648952 2858 kubelet_node_status.go:682] Node became not ready: {Type:Ready Status:False LastHeartbeatTime:2017-05-05 14:29:59.648910669 +0000 UTC LastTransitionTime:2017-05-05 14:29:59.648910669 +0000 UTC Reason:KubeletNotReady Message:PLEG is not healthy: pleg was last seen active 3m7.377520804s ago; threshold is 3m0s}
May 05 14:44:00 ip-10-50-20-208.ec2.internal kubelet[2858]: I0505 14:44:00.938266 2858 kubelet_node_status.go:379] Recording NodeNotReady event message for node ip-10-50-20-208.ec2.internal
May 05 14:44:00 ip-10-50-20-208.ec2.internal kubelet[2858]: I0505 14:44:00.938297 2858 kubelet_node_status.go:682] Node became not ready: {Type:Ready Status:False LastHeartbeatTime:2017-05-05 14:44:00.938251338 +0000 UTC LastTransitionTime:2017-05-05 14:44:00.938251338 +0000 UTC Reason:KubeletNotReady Message:PLEG is not healthy: pleg was last seen active 3m7.654775919s ago; threshold is 3m0s}
The docker daemon looks fine (docker ps, local docker images, etc. all respond and work).
Weave networking, installed via kubectl apply -f https://git.io/weave-kube-1.6
What you expected to happen:
Nodes that are Ready.
How to reproduce it (as minimally and precisely as possible):
Wish I knew how!
Anything else we need to know:
All nodes (workers and masters) are in private subnets with a NAT gateway to the Internet. The masters' security group allows unrestricted access from the workers' security group, and the workers allow all nodes from the same subnets. The proxy runs on the workers; apiserver, controller-manager and scheduler run on the masters.
kubectl logs and kubectl exec always hang, even when run from the masters themselves (or from outside).
@deitch how many containers are running on the node? And what is the node's overall CPU utilization?
Basically none: kube-dns, weave-net, weave-npc, and three sample services from templates. Two of those had no images and were due to be cleaned up, so really just one. AWS m4.2xlarge. It is not a resource problem.
In the end I had to terminate the nodes and recreate them. After terminating/recreating there are no more PLEG messages, and about 50% seem fine. They are Ready, and yet they still refuse to allow kubectl exec or kubectl logs.
I had a really hard time finding documentation on what PLEG actually is, and more importantly, how to check its own logs and state, if that is even possible.
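For reference, the PLEG health result does surface in two places you can query: the node's Ready condition message and the kubelet journal. A minimal check (the node name is a placeholder):

# Where the PLEG health result shows up (replace <node-name>):
kubectl describe node <node-name> | grep -A3 -i 'Ready'   # condition message mentions PLEG when unhealthy
journalctl -u kubelet --no-pager | grep -i 'PLEG'          # kubelet-side messages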
Hmm... to add to the mystery, no container can resolve hostnames, and kubedns gives the following:
E0505 17:30:49.412272 1 reflector.go:199] pkg/dns/config/sync.go:114: Failed to list *api.ConfigMap: Get https://10.200.0.1:443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dkube-dns&resourceVersion=0: dial tcp 10.200.0.1:443: getsockopt: no route to host
E0505 17:30:49.412285 1 reflector.go:199] pkg/dns/dns.go:148: Failed to list *api.Service: Get https://10.200.0.1:443/api/v1/services?resourceVersion=0: dial tcp 10.200.0.1:443: getsockopt: no route to host
E0505 17:30:49.412272 1 reflector.go:199] pkg/dns/dns.go:145: Failed to list *api.Endpoints: Get https://10.200.0.1:443/api/v1/endpoints?resourceVersion=0: dial tcp 10.200.0.1:443: getsockopt: no route to host
I0505 17:30:51.855370 1 logs.go:41] skydns: failure to forward request "read udp 10.100.0.3:60364->10.50.0.2:53: i/o timeout"
FWIW: 10.200.0.1 is the internal kube API service, 10.200.0.5 is DNS, and 10.50.20.0/24 and 10.50.21.0/24 are the subnets (two separate AZs) in which the masters and workers run.
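For anyone debugging the same "no route to host" errors, a quick sanity check from a worker node, using the addresses above and assuming kube-proxy is running in iptables mode, might look like this:

# Is there a route covering the service VIP, and does kube-proxy have a rule for it?
ip route get 10.200.0.1
sudo iptables -t nat -S KUBE-SERVICES | grep 10.200.0.1
# Raw reachability of the API service VIP from the node
nc -zv -w 2 10.200.0.1 443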
Is there anything really fubar'ed in the networking?
Is there anything really fubar'ed in the networking?
Per @bboreham, as described in https://github.com/weaveworks/weave/issues/2736, it is the standard weave install with IPALLOC_RANGE=10.100.0.0/16 added.
@deitch pleg is what the kubelet uses to periodically list the pods on the node, to check whether they are healthy and to refresh its cache. If you are seeing pleg timeout logs, it may not be related to DNS; it looks like the kubelet's calls to docker are timing out.
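One way to check that theory from the node itself is to time the same kind of docker call the relist depends on; a minimal sketch:

# If this regularly takes more than a couple of seconds, the kubelet's
# relist (and hence the PLEG health check) will suffer too.
time docker ps -a --no-trunc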
Thanks @qiujian16. The issue seems to have gone away, but I don't know how to confirm that. Docker itself looked fine. I did wonder whether it could be the networking plugin, but that shouldn't affect the kubelet itself.
So, any tips for checking the health and state of the pleg? Then we can close this out until the issue recurs.
@deitch pleg is short for "pod lifecycle event generator"; it is an internal component of the kubelet and I don't think you can check its status directly (https://github.com/kubernetes/community/blob/master/contributors/design-proposals/pod-lifecycle-event-generator.md)
Is it an internal module of the kubelet binary? Is it a separate standalone container (like docker, runc, containerd)? A standalone binary?
Basically, when the kubelet reports PLEG errors, being able to dig into what those errors are, check its status, and try to reproduce them would be extremely helpful.
It is an internal module.
@deitch if docker is slow to respond, it is possible for PLEG to miss its threshold.
We're seeing a similar issue on all nodes of a cluster we created recently.
Logs:
kube-worker03.foo.bar.com kubelet[3213]: E0511 19:00:59.139374 3213 remote_runtime.go:109] StopPodSandbox "12c6a5c6833a190f531797ee26abe06297678820385b402371e196c69b67a136" from runtime service failed: rpc error: code = 4 desc = context deadline exceeded
May 11 19:00:59 kube-worker03.foo.bar.com kubelet[3213]: E0511 19:00:59.139401 3213 kuberuntime_gc.go:138] Failed to stop sandbox "12c6a5c6833a190f531797ee26abe06297678820385b402371e196c69b67a136" before removing: rpc error: code = 4 desc = context deadline exceeded
May 11 19:01:04 kube-worker03.foo.bar.com kubelet[3213]: E0511 19:01:04.627954 3213 pod_workers.go:182] Error syncing pod 1c43d9b6-3672-11e7-a6da-00163e041106
("kube-dns-4240821577-1wswn_kube-system(1c43d9b6-3672-11e7-a6da-00163e041106)"), skipping: rpc error: code = 4 desc = context deadline exceeded
May 11 19:01:18 kube-worker03.foo.bar.com kubelet[3213]: E0511 19:01:18.627819 3213 pod_workers.go:182] Error syncing pod 1c43d9b6-3672-11e7-a6da-00163e041106
("kube-dns-4240821577-1wswn_kube-system(1c43d9b6-3672-11e7-a6da-00163e041106)"),
skipping: rpc error: code = 4 desc = context deadline exceeded
May 11 19:01:21 kube-worker03.foo.bar.com kubelet[3213]: I0511 19:01:21.627670 3213 kubelet.go:1752] skipping pod synchronization - [PLEG is not healthy: pleg was last seen active 3m0.339074625s ago; threshold is 3m0s]
Upgrading Docker and restarting practically everything has been to no avail. The nodes are all managed via puppet, so I'd expect them to be completely identical; no idea what the problem is. Docker logs in debug mode show that it is receiving these requests.
@bjhaid what are you using for networking? At the time, I was seeing some interesting network issues.
@deitch weave, but it looks like a communication problem between the kubelet and docker, so I don't think it is network related. I can confirm via docker's debug logs that docker is getting these requests from the kubelet.
My pleg problems seem to have gone away, but I then set these clusters up fresh (all via terraform modules I built), so I can't be confident.
It could be a weave issue, or it could be k8s/docker.
@deitch did you do anything to resolve the pleg issue, or did it magically go away?
It was actually hostname resolution: the controller could not resolve the hostnames of newly created nodes. Sorry for the noise.
I reported too soon that I was problem-free; the issue still exists. I'll keep digging and report back when I find something.
I'm starting to believe this issue is related to weave-kube. I had the same problem occur again; this time, to resolve it without recreating the cluster, I had to delete weave and re-apply it (restarting the nodes for it to take effect), removal order: ... and then it came back. I just don't know why or how I am convinced it is in weave-kube-1.6.
Forgot to come back to this. The problem appears to have been that the weave interface would not come up, so there was no control network. That, in turn, seems to have been because the firewall was blocking the weave data and vxlan ports; once those ports were opened, the problems went away.
I was having two problems, possibly related.
Suspiciously, all of the pleg problems occurred at the same time as the weave network problems.
Bryan @ weaveworks helped point out some coreos issues. CoreOS has a fairly aggressive tendency to want to manage bridges, hosts, basically everything. Once I set CoreOS not to manage anything except lo and the actual physical interfaces on the host, that took care of all the issues.
Are people still having trouble running coreos?
We seem to be hit by this issue as well (it started after upgrading the cluster from 1.5.x to 1.6.x, to be precise), and it is just as mysterious.
We run weave on AWS with a debian jessie AMI, and the cluster sometimes decides that PLEG is not healthy.
When this happens, the pods themselves appear to have come up fine, so weave seems to be okay.
One thing we have noticed is that scaling all replicas down seems to make the problem go away, but it comes back once we start scaling the deployments and statefulsets back up, around a certain number of containers (at least this time).
docker ps, docker info, etc. all seem fine on the node.
Resource usage is nothing much: roughly 5% CPU utilization, 1.5/8 GB of RAM in use (per root htop), and total node resource provisioning is around 30%, with everything that should be scheduled actually scheduled.
I'm completely stumped on this one.
I sincerely wish the PLEG check were a bit more verbose, or that there were actual detailed documentation about what it does under the hood. There seem to be a huge number of open issues about it, and nobody really seems to know what it is, other than that it is an important module whose check fails; I'd like to be able to reproduce that check.
I'll second the sentiment about how mysterious this is. On my side, though, after a lot of work for our client, stabilizing coreos and its networking misbehaviour helped a great deal.
The PLEG health check does very little. On every iteration it calls docker ps to detect container state changes, then calls docker ps and inspect to get the details of those containers.
After finishing each iteration, it updates a timestamp. If the timestamp hasn't been updated for a while (i.e. 3 minutes), the health check fails.
Unless your node is loaded with a huge number of pods such that PLEG cannot finish all of that within 3 minutes (which should not happen), the most likely cause is that docker is slow. You may not observe it with an occasional docker ps check, but that doesn't mean it isn't happening.
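Since an occasional docker ps can look fine, one crude way to catch intermittent slowness is to keep timing roughly what the relist does (list, then inspect every container) and only log the slow passes. This is a sketch of the idea, not what the kubelet literally runs:

#!/usr/bin/env bash
# Repeatedly run a relist-like pass (docker ps plus inspect of every container)
# and print any pass that takes longer than THRESHOLD seconds.
THRESHOLD=3
while true; do
  start=$(date +%s)
  for id in $(docker ps -aq); do
    docker inspect "$id" > /dev/null
  done
  elapsed=$(( $(date +%s) - start ))
  if [ "$elapsed" -ge "$THRESHOLD" ]; then
    echo "$(date -Is) relist-like pass took ${elapsed}s"
  fi
  sleep 1
done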
Not surfacing the "unhealthy" status could hide many problems from users and potentially cause even more issues; for example, the kubelet would be too confused to react to changes in a timely manner.
Anyway, suggestions on how to make this easier to debug are welcome...
Also tracking: we see nodes flapping health status with PLEG unhealthy warnings (k8s 1.6.4 with weave); otherwise it only shows up on a subset of (otherwise identical) nodes.
In our case, the flapping workers and pods stuck in ContainerCreating turned out to be an EC2 security group problem: the instances' security groups were not allowing weave traffic between masters and workers, or between workers. Because of that, the nodes never came up properly and stayed stuck in NotReady.
kubernetes 1.6.4
With the proper security groups it now works.
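For reference, weave needs TCP 6783 and UDP 6783/6784 open between all nodes; on AWS that can be expressed roughly as follows (the security group ID is a placeholder):

# Allow weave control (TCP 6783) and data/fastdp (UDP 6783-6784) traffic
# between members of the nodes' security group. sg-xxxxxxxx is a placeholder.
aws ec2 authorize-security-group-ingress --group-id sg-xxxxxxxx \
  --protocol tcp --port 6783 --source-group sg-xxxxxxxx
aws ec2 authorize-security-group-ingress --group-id sg-xxxxxxxx \
  --protocol udp --port 6783-6784 --source-group sg-xxxxxxxx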
I'm experiencing something like this issue with this setup...
Kubernetes version (use kubectl version): 1.6.4
Environment:
Cloud provider or hardware configuration: single System76 server
OS (e.g. from /etc/os-release): Ubuntu 16.04.2 LTS
Kernel (e.g. uname -a): Linux system76-server 4.4.0-78-generic #99-Ubuntu SMP Thu Apr 27 15:29:09 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux
Install tools: kubeadm + weave.works
This is a single-node cluster, so I don't think my version of this issue is related to security groups or firewalls.
The security group issue makes sense if the cluster has only just come up. But the problems we are seeing are on clusters that have been running for hours with the security groups already in place.
Something similar happened to me running kubelet version 1.6.2 on GKE.
One of the nodes went NotReady, and the kubelet log on that node had two red flags: one, the PLEG status check failing, and two, interestingly, image list operations failing.
A few examples of the failed image calls:
image_gc_manager.go:176
kuberuntime_image.go:106
remote_image.go:61
My assumption is that those are calls to the docker daemon.
While this was happening, I saw a disk IO spike, especially in read operations: from roughly 50 KB/s up to an 8 MB/s peak.
It fixed itself after about 30-45 minutes; perhaps an image GC sweep is what caused the IO spike?
As already mentioned, PLEG monitors the pods through the docker daemon. If docker was busy with a lot of operations, could the PLEG check end up queued behind them?
We are seeing this issue on 1.6.4 and 1.6.6 (on GKE), with NotReady flapping as a result. These are the latest versions available on GKE, so I hope a fix gets backported to the next 1.6 release.
One interesting detail is that the time since PLEG was last seen active is always a huge number that never changes; that value is exactly 2^63 - 1 nanoseconds, i.e. the maximum of a signed 64-bit nanosecond duration, so it is presumably a limit of the type it is stored in:
[container runtime is down PLEG is not healthy: pleg was last seen active 2562047h47m16.854775807s ago; threshold is 3m0s]
@bergman I haven't seen that; maybe your node was just never ready? Please report it through a GKE channel so the GKE team can look into it.
It fixed itself after about 30-45 minutes; perhaps an image GC sweep is what caused the IO spike?
That is certainly possible. Image GC has been known to make the docker daemon respond very slowly. 30-45 minutes sounds pretty long though. @zoltrain were images being deleted during that whole period?
To repeat my earlier statement: PLEG does very little, and it is mostly just failing the health check because the docker daemon is unresponsive. Through the PLEG health check, the node surfaces that information and tells the control plane that it is not getting container stats (and that docker is not responding). Blindly removing this check could hide more serious problems.
To give an update: we found an issue on our side related to weave and the provisioning of IP slices. Nodes get terminated frequently on AWS, and weave originally did not account for the permanent destruction of nodes in the cluster followed by new IPs; as a result the network was not set up correctly and nothing that depended on the internal ranges came up properly.
https://github.com/weaveworks/weave/issues/2970
For those using weave.
[container runtime is down PLEG is not healthy: pleg was last seen active 2562047h47m16.854775807s ago; threshold is 3m0s]
@bergman I haven't seen that; maybe your node was just never ready? Please report it through a GKE channel so the GKE team can look into it.
In most cases the node is ready. I think the kubelet is either restarting because of this check, or some other check is reporting the Ready event. We see about 10 seconds of NotReady every 60 seconds; the rest of the time the node is ready.
@yujuhong I think the PLEG logging could be improved. "PLEG is not healthy" is very confusing to end users and does not help diagnose the problem, e.g. why the container runtime failed, or details about the container runtime; saying that the container runtime is not responding would be more useful.
No flapping seen here; on 1.6.4 the node that gets wedged stays NotReady permanently.
@yujuhong I think the PLEG logging could be improved. "PLEG is not healthy" is very confusing to end users and does not help diagnose the problem, e.g. why the container runtime failed, or that the container runtime is not responding; that would be more useful.
Acknowledged. Feel free to send a PR.
I was seeing this issue during docker image cleanup. I think docker was simply busy. Once the images were deleted, things went back to normal.
I hit the same issue. I believe the reason is that ntpd stepped the current time.
This is on v1.6.9; you can see ntpd correcting the clock in the logs:
Sep 12 19:05:08 node-6 systemd: Started logagt.
Sep 12 19:05:08 node-6 systemd: Starting logagt...
Sep 12 19:05:09 node-6 cnrm: "Log":"2017-09-12 19:05:09.197083#011ERROR#011node-6#011knitter.cnrm.mod-init#011TransactionID=1#011InstanceID=1174#011[ObjectType=null,ObjectID=null]#011registerOir: k8s.GetK8sClientSingleton().RegisterOir(oirName: hugepage, qty: 2048) FAIL, error: dial tcp 120.0.0.250:8080: getsockopt: no route to host, retry#011[init.go]#011[68]"
Sep 12 11:04:53 node-6 ntpd[902]: 0.0.0.0 c61c 0c clock_step -28818.771869 s
Sep 12 11:04:53 node-6 ntpd[902]: 0.0.0.0 c614 04 freq_mode
Sep 12 11:04:53 node-6 systemd: Time has been changed
Sep 12 11:04:54 node-6 ntpd[902]: 0.0.0.0 c618 08 no_sys_peer
Sep 12 11:05:04 node-6 systemd: Reloading.
Sep 12 11:05:04 node-6 systemd: Configuration file /usr/lib/systemd/system/auditd.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
Sep 12 11:05:04 node-6 systemd: Started opslet.
Sep 12 11:05:04 node-6 systemd: Starting opslet...
Sep 12 11:05:13 node-6 systemd: Reloading.
Sep 12 11:05:22 node-6 kubelet: E0912 11:05:22.425676 2429 event.go:259] Could not construct reference to: '&v1.Node{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"120.0.0.251", GenerateName:"", Namespace:"", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{sec:0, nsec:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"beta.kubernetes.io/os":"linux", "beta.kubernetes.io/arch":"amd64", "kubernetes.io/hostname":"120.0.0.251"}, Annotations:map[string]string{"volumes.kubernetes.io/controller-managed-attach-detach":"true"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:""}, Spec:v1.NodeSpec{PodCIDR:"", ExternalID:"120.0.0.251", ProviderID:"", Unschedulable:false, Taints:[]v1.Taint(nil)}, Status:v1.NodeStatus{Capacity:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:4000, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, l:[]int64(nil), s:"", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:3974811648, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, l:[]int64(nil), s:"", Format:"BinarySI"}, "hugePages":resource.Quantity{i:resource.int64Amount{value:1024, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, l:[]int64(nil), s:"", Format:"DecimalSI"}, "pods":resource.Quantity{i:resource.int64Amount{value:110, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, l:[]int64(nil), s:"", Format:"DecimalSI"}}, Allocatable:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:3500, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, l:[]int64(nil), s:"", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:1345666048, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, l:[]int64(nil), s:"", Format:"BinarySI"}, "hugePages":resource.Quantity{i:resource.int64Amount{value:1024, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, l:[]int64(nil), s:"",
Sep 12 11:05:22 node-6 kubelet: Format:"DecimalSI"}, "pods":resource.Quantity{i:resource.int64Amount{value:110, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, l:[]int64(nil), s:"", Format:"DecimalSI"}}, Phase:"", Conditions:[]v1.NodeCondition{v1.NodeCondition{Type:"OutOfDisk", Status:"False", LastHeartbeatTime:v1.Time{Time:time.Time{sec:63640811081, nsec:196025689, loc:(*time.Location)(0x4e8e3a0)}}, LastTransitionTime:v1.Time{Time:time.Time{sec:63640811081, nsec:196025689, loc:(*time.Location)(0x4e8e3a0)}}, Reason:"KubeletHasSufficientDisk", Message:"kubelet has sufficient disk space available"}, v1.NodeCondition{Type:"MemoryPressure", Status:"False", LastHeartbeatTime:v1.Time{Time:time.Time{sec:63640811081, nsec:196099492, loc:(*time.Location)(0x4e8e3a0)}}, LastTransitionTime:v1.Time{Time:time.Time{sec:63640811081, nsec:196099492, loc:(*time.Location)(0x4e8e3a0)}}, Reason:"KubeletHasSufficientMemory", Message:"kubelet has sufficient memory available"}, v1.NodeCondition{Type:"DiskPressure", Status:"False", LastHeartbeatTime:v1.Time{Time:time.Time{sec:63640811081, nsec:196107935, loc:(*time.Location)(0x4e8e3a0)}}, LastTransitionTime:v1.Time{Time:time.Time{sec:63640811081, nsec:196107935, loc:(*time.Location)(0x4e8e3a0)}}, Reason:"KubeletHasNoDiskPressure", Message:"kubelet has no disk pressure"}, v1.NodeCondition{Type:"Ready", Status:"False", LastHeartbeatTime:v1.Time{Time:time.Time{sec:63640811081, nsec:196114314, loc:(*time.Location)(0x4e8e3a0)}}, LastTransitionTime:v1.Time{Time:time.Time{sec:63640811081, nsec:196114314, loc:(*time.Location)(0x4e8e3a0)}}, Reason:"KubeletNotReady", Message:"container runtime is down,PLEG is not healthy: pleg was last seen active 2562047h47m16.854775807s ago; threshold is 3m0s,network state unknown"}}, Addresses:[]v1.NodeAddress{v1.NodeAddress{Type:"LegacyHostIP", Address:"120.0.0.251"}, v1.NodeAddress{Type:"InternalIP", Address:"120.0.0.251"}, v1.NodeAddress{Type:"Hostname", Address:"120.0.0.251"}}, DaemonEndpoints:v1.NodeDaemonEndpoints{KubeletEndpoint:v1.DaemonEndpoint{Port:10250}}, NodeInfo:v1.NodeS
Marking.
Same issue here.
I see this when pods are killed and then get stuck in the terminating state: Normal Killing Killing container with docker id 472802bf1dba: Need to kill pod.
And the kubelet log shows:
skipping pod synchronization - [PLEG is not healthy: pleg was last seen active
k8s cluster version: 1.6.4
@xcompass are you using the --image-gc-high-threshold and --image-gc-low-threshold flags in your kubelet configuration? I suspect the kubelet image GC is what is keeping the docker daemon busy.
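For context, those flags control when the kubelet's image GC starts and stops sweeping; a rough illustration (the percentages are example values, not a recommendation):

# Image GC starts when image filesystem usage crosses the high threshold and
# deletes images until usage is back under the low threshold, e.g.:
#   kubelet ... --image-gc-high-threshold=85 --image-gc-low-threshold=80
# A quick look at how much image data a sweep would have to chew through:
docker images | wc -l
sudo du -sh /var/lib/docker/ 2>/dev/null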
@alirezaDavid I ran into the same issue as you: pod starts and terminations become very slow, and nodes sometimes go NotReady. Restarting the kubelet on the node, or restarting docker, appears to resolve it, but that is not the right way.
@yu-yang2 Exactly, I restart the kubelet.
But before restarting the kubelet I check docker ps and systemctl -u docker, and everything appears to be working fine.
This issue happened for me on kubernetes with weave and an autoscaler. It turned out that weave had no IP addresses left to allocate. I detected this by running weave --local status ipam on the affected host (https://...).
The root cause is here (https://...).
The documentation warns about autoscalers and weave: https://www.weave.works/docs/net/latest/operational-guide/tasks/
Running weave --local status ipam showed a large number of nodes that had been allocated IP addresses but were unreachable, hundreds of them. This happened because the autoscaler kept terminating instances without telling weave, so the peers actually connected were only a handful. I used weave rmpeer to clear out some of the unreachable peers, which made the node I ran it on take over that group of IP addresses. I then moved to the other running weave nodes and ran a few more rmpeer commands there (I'm not sure whether that was necessary).
Once some of the ec2 instances were terminated, the new instances brought up by the autoscaler could be assigned IP addresses again.
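For anyone in the same situation, the sequence was roughly this (the peer name is a placeholder; use the unreachable peers the status output lists):

# On a surviving node: see how much of the allocation range is still held by
# peers that no longer exist (terminated instances).
weave --local status ipam

# Reclaim the address space owned by a dead/unreachable peer.
# <peer-name> is a placeholder taken from the status output above.
weave rmpeer <peer-name>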
Hi all. In my case I hit the PLEG issue around sandbox removal, because the sandbox had no network namespace: the situation described in https://github.com/kubernetes/kubernetes/issues/44307.
My issue is:
As you can see, everyone in this thread is on Kubernetes 1.6.*; it needs to be fixed in 1.7.
P.S. I saw this situation on Origin 3.6 (kubernetes 1.6).
Hi,
I'm having the PLEG issue myself (Azure, k8s 1.7.7):
Oct 5 08:13:27 k8s-agent-27569017-1 docker[1978]: E1005 08:13:27.386295 2209 remote_runtime.go:168] ListPodSandbox with filter "nil" from runtime service failed: rpc error: code = 4 desc = context deadline exceeded
Oct 5 08:13:27 k8s-agent-27569017-1 docker[1978]: E1005 08:13:27.386351 2209 kuberuntime_sandbox.go:197] ListPodSandbox failed: rpc error: code = 4 desc = context deadline exceeded
Oct 5 08:13:27 k8s-agent-27569017-1 docker[1978]: E1005 08:13:27.386360 2209 generic.go:196] GenericPLEG: Unable to retrieve pods: rpc error: code = 4 desc = context deadline exceeded
Oct 5 08:13:30 k8s-agent-27569017-1 docker[1978]: I1005 08:13:30.953599 2209 helpers.go:102] Unable to get network stats from pid 60677: couldn't read network stats: failure opening /proc/60677/net/dev: open /proc/60677/net/dev: no such file or directory
Oct 5 08:13:30 k8s-agent-27569017-1 docker[1978]: I1005 08:13:30.953634 2209 helpers.go:125] Unable to get udp stats from pid 60677: failure opening /proc/60677/net/udp: open /proc/60677/net/udp: no such file or directory
Oct 5 08:13:30 k8s-agent-27569017-1 docker[1978]: I1005 08:13:30.953642 2209 helpers.go:132] Unable to get udp6 stats from pid 60677: failure opening /proc/60677/net/udp6: open /proc/60677/net/udp6: no such file or directory
Oct 5 08:13:31 k8s-agent-27569017-1 docker[1978]: I1005 08:13:31.763914 2209 kubelet.go:1820] skipping pod synchronization - [PLEG is not healthy: pleg was last seen active 13h42m52.628402637s ago; threshold is 3m0s]
Oct 5 08:13:35 k8s-agent-27569017-1 docker[1978]: I1005 08:13:35.977487 2209 kubelet_node_status.go:467] Using Node Hostname from cloudprovider: "k8s-agent-27569017-1"
Oct 5 08:13:36 k8s-agent-27569017-1 docker[1978]: I1005 08:13:36.764105 2209 kubelet.go:1820] skipping pod synchronization - [PLEG is not healthy: pleg was last seen active 13h42m57.628610126s ago; threshold is 3m0s]
Oct 5 08:13:39 k8s-agent-27569017-1 docker[1275]: time="2017-10-05T08:13:39.185111999Z" level=warning msg="Health check error: rpc error: code = 4 desc = context deadline exceeded"
Oct 5 08:13:41 k8s-agent-27569017-1 docker[1978]: I1005 08:13:41.764235 2209 kubelet.go:1820] skipping pod synchronization - [PLEG is not healthy: pleg was last seen active 13h43m2.628732806s ago; threshold is 3m0s]
Oct 5 08:13:41 k8s-agent-27569017-1 docker[1978]: I1005 08:13:41.875074 2209 helpers.go:102] Unable to get network stats from pid 60677: couldn't read network stats: failure opening /proc/60677/net/dev: open /proc/60677/net/dev: no such file or directory
Oct 5 08:13:41 k8s-agent-27569017-1 docker[1978]: I1005 08:13:41.875102 2209 helpers.go:125] Unable to get udp stats from pid 60677: failure opening /proc/60677/net/udp: open /proc/60677/net/udp: no such file or directory
Oct 5 08:13:41 k8s-agent-27569017-1 docker[1978]: I1005 08:13:41.875113 2209 helpers.go:132] Unable to get udp6 stats from pid 60677: failure opening /proc/60677/net/udp6: open /proc/60677/net/udp6: no such file or directory
Running v1.7.4+coreos.0 on CoreOS stable. We see k8s nodes go down frequently (about every 8 hours) because of PLEG (and they do not come back up until the docker and kubelet services are restarted). The containers keep running, but k8s reports them as down. I should mention this is deployed with Kubespray.
I've tracked the issue down to what I believe is GRPC's backoff algorithm when talking to docker to list containers. This PR https://github.com/moby/moby/pull/33483 changes the backoff to a maximum of 2 seconds and is available in 17.06, but kubernetes does not support 17.06 until 1.8.
The PLEG line that is triggering the issue is here:
Inspecting the PLEGRelistInterval and PLEGRelistLatency metrics with Prometheus, I got the following results, which fit the backoff-algorithm theory quite well:
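For anyone who wants to look at the same metrics without a Prometheus setup, the kubelet exposes them on its metrics endpoint; a quick check might be (the read-only port 10255 and the 1.6/1.7-era metric names are assumptions that depend on your kubelet configuration):

# PLEG relist latency/interval as exported by the kubelet.
curl -s http://localhost:10255/metrics | grep -E 'kubelet_pleg_relist_(latency|interval)'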
@ssboisen thanks for reporting back with the graphs (they look interesting!):
We see k8s nodes go down frequently (about every 8 hours) because of PLEG (and they do not come back up until the docker and kubelet services are restarted). The containers keep running, but k8s reports them as down. I should mention this is deployed with Kubespray.
A few questions I have:
Does docker ps respond fine?
I've tracked the issue down to what I believe is GRPC's backoff algorithm when talking to docker to list containers. This PR moby/moby#33483 changes the backoff to a maximum of 2 seconds and is available in 17.06, but kubernetes does not support 17.06 until 1.8.
I looked at the moby issue you mentioned. In that discussion, all of the docker ps calls still worked correctly (even when the dockerd <-> containerd connection was reset), so it seems different from the PLEG issue you are describing. Also, the kubelet does not talk to dockerd over grpc. It does use grpc to talk to dockershim, but those are essentially the same process, so it should not run into the problem of one being killed while the other is still alive (which is what led to the connection resets).
kubelet <--grpc--> dockershim <--http--> dockerd <--grpc--> containerd
What error messages do you see in the kubelet log? Most of the comments above show "context deadline exceeded" error messages.
- Does restarting docker and/or kubelet fix the issue?
It varies. In most cases restarting the kubelet is enough, but there have been situations where restarting Docker was needed.
- When the issue happens, does docker ps respond fine?
I have run docker ps on the node while PLEG was acting up and there was no problem. I didn't know about the docker shim; I do wonder whether the problem is the connection between the kubelet and the shim. Could the shim have failed to answer in time because of a mounting backoff?
The error messages in the log are a combination of these two lines:
generic.go:196] GenericPLEG: Unable to retrieve pods: rpc error: code = 14 desc = grpc: the connection is unavailable
kubelet.go:1820] skipping pod synchronization - [container runtime is down PLEG is not healthy: pleg was last seen active 11h5m56.959313178s ago; threshold is 3m0s]
Any suggestions on how to debug this better and get more information out of it?
It is probably a Docker problem rather than k8s doing something wrong. I just could not figure out why docker misbehaves there. All the CPU/memory/disk resources look great.
Restarting the docker service brings it back to a good state.
Any suggestions on how to debug this better and get more information out of it?
I think the first step is to figure out which component (dockershim or docker/containerd) is returning the error message.
Probably by cross-referencing the kubelet and docker logs, I'd guess.
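Concretely, assuming both run as systemd units, pulling the two logs for the same window around one PLEG failure is probably the simplest way to do that cross-referencing:

# Example window taken from a failure above; adjust the timestamps to yours.
journalctl -u kubelet --since "2017-10-05 08:10" --until "2017-10-05 08:20" | grep -Ei 'pleg|deadline'
journalctl -u docker  --since "2017-10-05 08:10" --until "2017-10-05 08:20"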
It is probably a Docker problem rather than k8s doing something wrong. I just could not figure out why docker misbehaves there. All the CPU/memory/disk resources look great.
Yeah, in your case it looks like the docker daemon was actually hanging. You can start the docker daemon in debug mode and get a stack trace when this happens.
https://docs.docker.com/engine/admin/#force-a-stack-trace-to-be-logged
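In practice that comes down to the following (per the linked docs; dockerd writes a full goroutine dump to its log when it receives SIGUSR1, even when it otherwise appears hung):

# 1. Enable daemon debug logging: add  "debug": true  to /etc/docker/daemon.json
#    and reload the daemon configuration (SIGHUP) or restart docker.
# 2. When the hang occurs, force a stack trace to be logged:
sudo kill -SIGUSR1 "$(pidof dockerd)"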
@yujuhong This issue happened again after a k8s load test; nearly all nodes went not ready and did not recover even after a couple of days of cleanup. I turned on verbose mode on all the kubelets and captured logs. They are below; I hope they help in getting to the bottom of this.
Oct 24 21:16:39 docker34-91 kubelet[24165]: I1024 21:16:39.539054 24165 config.go:101] Looking for [api file], have seen map[file:{} api:{}]
Oct 24 21:16:39 docker34-91 kubelet[24165]: I1024 21:16:39.639305 24165 config.go:101] Looking for [api file], have seen map[file:{} api:{}]
Oct 24 21:16:39 docker34-91 kubelet[24165]: I1024 21:16:39.739585 24165 config.go:101] Looking for [api file], have seen map[file:{} api:{}]
Oct 24 21:16:39 docker34-91 kubelet[24165]: I1024 21:16:39.839829 24165 config.go:101] Looking for [api file], have seen map[file:{} api:{}]
Oct 24 21:16:39 docker34-91 kubelet[24165]: I1024 21:16:39.940111 24165 config.go:101] Looking for [api file], have seen map[file:{} api:{}]
Oct 24 21:16:40 docker34-91 kubelet[24165]: I1024 21:16:40.040374 24165 config.go:101] Looking for [api file], have seen map[file:{} api:{}]
Oct 24 21:16:40 docker34-91 kubelet[24165]: I1024 21:16:40.128789 24165 kubelet.go:2064] Container runtime status: Runtime Conditions: RuntimeReady=true reason: message:, NetworkReady=true reason: message:
Oct 24 21:16:40 docker34-91 kubelet[24165]: I1024 21:16:40.140634 24165 config.go:101] Looking for [api file], have seen map[file:{} api:{}]
Oct 24 21:16:40 docker34-91 kubelet[24165]: I1024 21:16:40.240851 24165 config.go:101] Looking for [api file], have seen map[file:{} api:{}]
Oct 24 21:16:40 docker34-91 kubelet[24165]: I1024 21:16:40.341125 24165 config.go:101] Looking for [api file], have seen map[file:{} api:{}]
Oct 24 21:16:40 docker34-91 kubelet[24165]: I1024 21:16:40.441471 24165 config.go:101] Looking for [api file], have seen map[api:{} file:{}]
Oct 24 21:16:40 docker34-91 kubelet[24165]: I1024 21:16:40.541781 24165 config.go:101] Looking for [api file], have seen map[api:{} file:{}]
Oct 24 21:16:40 docker34-91 kubelet[24165]: I1024 21:16:40.642070 24165 config.go:101] Looking for [api file], have seen map[file:{} api:{}]
Oct 24 21:16:40 docker34-91 kubelet[24165]: I1024 21:16:40.742347 24165 config.go:101] Looking for [api file], have seen map[file:{} api:{}]
Oct 24 21:16:40 docker34-91 kubelet[24165]: I1024 21:16:40.842562 24165 config.go:101] Looking for [api file], have seen map[file:{} api:{}]
Oct 24 21:16:40 docker34-91 kubelet[24165]: I1024 21:16:40.942867 24165 config.go:101] Looking for [api file], have seen map[api:{} file:{}]
Oct 24 21:16:41 docker34-91 kubelet[24165]: I1024 21:16:41.006656 24165 kubelet.go:1752] skipping pod synchronization - [PLEG is not healthy: pleg was last seen active 6m20.171705404s ago; threshold is 3m0s]
Oct 24 21:16:41 docker34-91 kubelet[24165]: I1024 21:16:41.043126 24165 config.go:101] Looking for [api file], have seen map[file:{} api:{}]
Oct 24 21:16:41 docker34-91 kubelet[24165]: I1024 21:16:41.143372 24165 config.go:101] Looking for [api file], have seen map[file:{} api:{}]
Oct 24 21:16:41 docker34-91 kubelet[24165]: I1024 21:16:41.243620 24165 config.go:101] Looking for [api file], have seen map[file:{} api:{}]
Oct 24 21:16:41 docker34-91 kubelet[24165]: I1024 21:16:41.343911 24165 config.go:101] Looking for [api file], have seen map[file:{} api:{}]
Oct 24 21:16:41 docker34-91 kubelet[24165]: I1024 21:16:41.444156 24165 config.go:101] Looking for [api file], have seen map[file:{} api:{}]
Oct 24 21:16:41 docker34-91 kubelet[24165]: I1024 21:16:41.544420 24165 config.go:101] Looking for [api file], have seen map[file:{} api:{}]
Oct 24 21:16:41 docker34-91 kubelet[24165]: I1024 21:16:41.644732 24165 config.go:101] Looking for [api file], have seen map[file:{} api:{}]
Oct 24 21:16:41 docker34-91 kubelet[24165]: I1024 21:16:41.745002 24165 config.go:101] Looking for [api file], have seen map[file:{} api:{}]
Oct 24 21:16:41 docker34-91 kubelet[24165]: I1024 21:16:41.845268 24165 config.go:101] Looking for [api file], have seen map[file:{} api:{}]
Oct 24 21:16:41 docker34-91 kubelet[24165]: I1024 21:16:41.945524 24165 config.go:101] Looking for [api file], have seen map[file:{} api:{}]
Oct 24 21:16:42 docker34-91 kubelet[24165]: I1024 21:16:42.045814 24165 config.go:101] Looking for [api file], have seen map[file:{} api:{}]
^C
[root@docker34-91 ~]# journalctl -u kubelet -f
-- Logs begin at Wed 2017-10-25 17:19:29 CST. --
Oct 27 10:22:35 docker34-91 kubelet[24165]: 00000000 6b 38 73 00 0a 0b 0a 02 76 31 12 05 45 76 65 6e |k8s.....v1..Even|
Oct 27 10:22:35 docker34-91 kubelet[24165]: 00000010 74 12 d3 03 0a 4f 0a 33 6c 64 74 65 73 74 2d 37 |t....O.3ldtest-7|
Oct 27 10:22:35 docker34-91 kubelet[24165]: 00000020 33 34 33 39 39 64 67 35 39 2d 33 33 38 32 38 37 |34399dg59-338287|
Oct 27 10:22:35 docker34-91 kubelet[24165]: 00000030 31 36 38 35 2d 78 32 36 70 30 2e 31 34 66 31 34 |1685-x26p0.14f14|
Oct 27 10:22:35 docker34-91 kubelet[24165]: 00000040 63 30 39 65 62 64 32 64 66 66 34 12 00 1a 0a 6c |c09ebd2dff4....l|
Oct 27 10:22:35 docker34-91 kubelet[24165]: 00000050 64 74 65 73 74 2d 30 30 35 22 00 2a 00 32 00 38 |dtest-005".*.2.8|
Oct 27 10:22:35 docker34-91 kubelet[24165]: 00000060 00 42 00 7a 00 12 6b 0a 03 50 6f 64 12 0a 6c 64 |.B.z..k..Pod..ld|
Oct 27 10:22:35 docker34-91 kubelet[24165]: 00000070 74 65 73 74 2d 30 30 35 1a 22 6c 64 74 65 73 74 |test-005."ldtest|
Oct 27 10:22:35 docker34-91 kubelet[24165]: 00000080 2d 37 33 34 33 39 39 64 67 35 39 2d 33 33 38 32 |-734399dg59-3382|
Oct 27 10:22:35 docker34-91 kubelet[24165]: 00000090 38 37 31 36 38 35 2d 78 32 36 70 30 22 24 61 35 |871685-x26p0"$a5|
Oct 27 10:23:02 docker34-91 kubelet[24165]: I1027 10:23:02.098922 24165 kubelet.go:2064] Container runtime status: Runtime Conditions: RuntimeReady=true reason: message:, NetworkReady=true reason: message:
Oct 27 10:23:02 docker34-91 kubelet[24165]: I1027 10:23:02.175027 24165 config.go:101] Looking for [api file], have seen map[file:{} api:{}]
Oct 27 10:23:02 docker34-91 kubelet[24165]: I1027 10:23:02.275290 24165 config.go:101] Looking for [api file], have seen map[file:{} api:{}]
Oct 27 10:23:02 docker34-91 kubelet[24165]: I1027 10:23:02.375594 24165 config.go:101] Looking for [api file], have seen map[file:{} api:{}]
Oct 27 10:23:02 docker34-91 kubelet[24165]: I1027 10:23:02.475872 24165 config.go:101] Looking for [api file], have seen map[file:{} api:{}]
Oct 27 10:23:02 docker34-91 kubelet[24165]: I1027 10:23:02.576140 24165 config.go:101] Looking for [api file], have seen map[file:{} api:{}]
Oct 27 10:23:02 docker34-91 kubelet[24165]: I1027 10:23:02.676412 24165 config.go:101] Looking for [api file], have seen map[file:{} api:{}]
Oct 27 10:23:02 docker34-91 kubelet[24165]: I1027 10:23:02.776613 24165 config.go:101] Looking for [api file], have seen map[file:{} api:{}]
Oct 27 10:23:02 docker34-91 kubelet[24165]: I1027 10:23:02.876855 24165 config.go:101] Looking for [api file], have seen map[file:{} api:{}]
Oct 27 10:23:02 docker34-91 kubelet[24165]: I1027 10:23:02.977126 24165 config.go:101] Looking for [api file], have seen map[file:{} api:{}]
Oct 27 10:23:03 docker34-91 kubelet[24165]: I1027 10:23:03.000354 24165 status_manager.go:410] Status Manager: syncPod in syncbatch. pod UID: "a052cabc-bab9-11e7-92f6-3497f60062c3"
Oct 27 10:23:03 docker34-91 kubelet[24165]: I1027 10:23:03.000509 24165 round_trippers.go:398] curl -k -v -XGET -H "Accept: application/vnd.kubernetes.protobuf, */*" -H "User-Agent: kubelet/v1.6.4 (linux/amd64) kubernetes/d6f4332" http://172.23.48.211:8080/api/v1/namespaces/ldtest-005/pods/ldtest-276aa6023f-1106740979-hbtcv
Oct 27 10:23:03 docker34-91 kubelet[24165]: I1027 10:23:03.001753 24165 round_trippers.go:417] GET http://172.23.48.211:8080/api/v1/namespaces/ldtest-005/pods/ldtest-276aa6023f-1106740979-hbtcv 404 Not Found in 1 milliseconds
Oct 27 10:23:03 docker34-91 kubelet[24165]: I1027 10:23:03.001768 24165 round_trippers.go:423] Response Headers:
Oct 27 10:23:03 docker34-91 kubelet[24165]: I1027 10:23:03.001773 24165 round_trippers.go:426] Content-Type: application/vnd.kubernetes.protobuf
Oct 27 10:23:03 docker34-91 kubelet[24165]: I1027 10:23:03.001776 24165 round_trippers.go:426] Date: Fri, 27 Oct 2017 02:23:03 GMT
Oct 27 10:23:03 docker34-91 kubelet[24165]: I1027 10:23:03.001780 24165 round_trippers.go:426] Content-Length: 154
Oct 27 10:23:03 docker34-91 kubelet[24165]: I1027 10:23:03.001838 24165 request.go:989] Response Body:
Oct 27 10:23:03 docker34-91 kubelet[24165]: 00000000 6b 38 73 00 0a 0c 0a 02 76 31 12 06 53 74 61 74 |k8s.....v1..Stat|
Oct 27 10:23:03 docker34-91 kubelet[24165]: 00000010 75 73 12 81 01 0a 04 0a 00 12 00 12 07 46 61 69 |us...........Fai|
Oct 27 10:23:03 docker34-91 kubelet[24165]: 00000020 6c 75 72 65 1a 33 70 6f 64 73 20 22 6c 64 74 65 |lure.3pods "ldte|
Oct 27 10:23:03 docker34-91 kubelet[24165]: 00000030 73 74 2d 32 37 36 61 61 36 30 32 33 66 2d 31 31 |st-276aa6023f-11|
Oct 27 10:23:03 docker34-91 kubelet[24165]: 00000040 30 36 37 34 30 39 37 39 2d 68 62 74 63 76 22 20 |06740979-hbtcv" |
Oct 27 10:23:03 docker34-91 kubelet[24165]: 00000050 6e 6f 74 20 66 6f 75 6e 64 22 08 4e 6f 74 46 6f |not found".NotFo|
Oct 27 10:23:03 docker34-91 kubelet[24165]: 00000060 75 6e 64 2a 2e 0a 22 6c 64 74 65 73 74 2d 32 37 |und*.."ldtest-27|
Oct 27 10:23:03 docker34-91 kubelet[24165]: 00000070 36 61 61 36 30 32 33 66 2d 31 31 30 36 37 34 30 |6aa6023f-1106740|
Oct 27 10:23:03 docker34-91 kubelet[24165]: 00000080 39 37 39 2d 68 62 74 63 76 12 00 1a 04 70 6f 64 |979-hbtcv....pod|
Oct 27 10:23:03 docker34-91 kubelet[24165]: 00000090 73 28 00 30 94 03 1a 00 22 00 |s(.0....".|
Oct 27 10:23:03 docker34-91 kubelet[24165]: I1027 10:23:03.001885 24165 status_manager.go:425] Pod "ldtest-276aa6023f-1106740979-hbtcv" (a052cabc-bab9-11e7-92f6-3497f60062c3) does not exist on the server
Oct 27 10:23:03 docker34-91 kubelet[24165]: I1027 10:23:03.001900 24165 status_manager.go:410] Status Manager: syncPod in syncbatch. pod UID: "a584c63e-bab7-11e7-92f6-3497f60062c3"
Oct 27 10:23:03 docker34-91 kubelet[24165]: I1027 10:23:03.001946 24165 round_trippers.go:398] curl -k -v -XGET -H "Accept: application/vnd.kubernetes.protobuf, */*" -H "User-Agent: kubelet/v1.6.4 (linux/amd64) kubernetes/d6f4332" http://172.23.48.211:8080/api/v1/namespaces/ldtest-005/pods/ldtest-734399dg59-3382871685-x26p0
Oct 27 10:23:03 docker34-91 kubelet[24165]: I1027 10:23:03.002559 24165 round_trippers.go:417] GET http://172.23.48.211:8080/api/v1/namespaces/ldtest-005/pods/ldtest-734399dg59-3382871685-x26p0 404 Not Found in 0 milliseconds
Oct 27 10:23:03 docker34-91 kubelet[24165]: I1027 10:23:03.002569 24165 round_trippers.go:423] Response Headers:
Oct 27 10:23:03 docker34-91 kubelet[24165]: I1027 10:23:03.002573 24165 round_trippers.go:426] Content-Type: application/vnd.kubernetes.protobuf
Oct 27 10:23:03 docker34-91 kubelet[24165]: I1027 10:23:03.002577 24165 round_trippers.go:426] Date: Fri, 27 Oct 2017 02:23:03 GMT
Oct 27 10:23:03 docker34-91 kubelet[24165]: I1027 10:23:03.002580 24165 round_trippers.go:426] Content-Length: 154
Oct 27 10:23:03 docker34-91 kubelet[24165]: I1027 10:23:03.002627 24165 request.go:989] Response Body:
Oct 27 10:23:03 docker34-91 kubelet[24165]: 00000000 6b 38 73 00 0a 0c 0a 02 76 31 12 06 53 74 61 74 |k8s.....v1..Stat|
Oct 27 10:23:03 docker34-91 kubelet[24165]: 00000010 75 73 12 81 01 0a 04 0a 00 12 00 12 07 46 61 69 |us...........Fai|
Oct 27 10:23:03 docker34-91 kubelet[24165]: 00000020 6c 75 72 65 1a 33 70 6f 64 73 20 22 6c 64 74 65 |lure.3pods "ldte|
Oct 27 10:23:03 docker34-91 kubelet[24165]: 00000030 73 74 2d 37 33 34 33 39 39 64 67 35 39 2d 33 33 |st-734399dg59-33|
Oct 27 10:23:03 docker34-91 kubelet[24165]: 00000040 38 32 38 37 31 36 38 35 2d 78 32 36 70 30 22 20 |82871685-x26p0" |
Oct 27 10:23:03 docker34-91 kubelet[24165]: 00000050 6e 6f 74 20 66 6f 75 6e 64 22 08 4e 6f 74 46 6f |not found".NotFo|
Oct 27 10:23:03 docker34-91 kubelet[24165]: 00000060 75 6e 64 2a 2e 0a 22 6c 64 74 65 73 74 2d 37 33 |und*.."ldtest-73|
Oct 27 10:23:03 docker34-91 kubelet[24165]: 00000070 34 33 39 39 64 67 35 39 2d 33 33 38 32 38 37 31 |4399dg59-3382871|
Oct 27 10:23:03 docker34-91 kubelet[24165]: 00000080 36 38 35 2d 78 32 36 70 30 12 00 1a 04 70 6f 64 |685-x26p0....pod|
Oct 27 10:23:03 docker34-91 kubelet[24165]: 00000090 73 28 00 30 94 03 1a 00 22 00 |s(.0....".|
Oct 27 10:23:03 docker34-91 kubelet[24165]: I1027 10:23:03.002659 24165 status_manager.go:425] Pod "ldtest-734399dg59-3382871685-x26p0" (a584c63e-bab7-11e7-92f6-3497f60062c3) does not exist on the server
Oct 27 10:23:03 docker34-91 kubelet[24165]: I1027 10:23:03.002668 24165 status_manager.go:410] Status Manager: syncPod in syncbatch. pod UID: "2727277f-bab3-11e7-92f6-3497f60062c3"
Oct 27 10:23:03 docker34-91 kubelet[24165]: I1027 10:23:03.002711 24165 round_trippers.go:398] curl -k -v -XGET -H "User-Agent: kubelet/v1.6.4 (linux/amd64) kubernetes/d6f4332" -H "Accept: application/vnd.kubernetes.protobuf, */*" http://172.23.48.211:8080/api/v1/namespaces/ldtest-005/pods/ldtest-4bc7922c25-2238154508-xt94x
Oct 27 10:23:03 docker34-91 kubelet[24165]: I1027 10:23:03.003318 24165 round_trippers.go:417] GET http://172.23.48.211:8080/api/v1/namespaces/ldtest-005/pods/ldtest-4bc7922c25-2238154508-xt94x 404 Not Found in 0 milliseconds
Oct 27 10:23:03 docker34-91 kubelet[24165]: I1027 10:23:03.003328 24165 round_trippers.go:423] Response Headers:
Oct 27 10:23:03 docker34-91 kubelet[24165]: I1027 10:23:03.003332 24165 round_trippers.go:426] Date: Fri, 27 Oct 2017 02:23:03 GMT
Oct 27 10:23:03 docker34-91 kubelet[24165]: I1027 10:23:03.003336 24165 round_trippers.go:426] Content-Length: 154
Oct 27 10:23:03 docker34-91 kubelet[24165]: I1027 10:23:03.003339 24165 round_trippers.go:426] Content-Type: application/vnd.kubernetes.protobuf
Oct 27 10:23:03 docker34-91 kubelet[24165]: I1027 10:23:03.003379 24165 request.go:989] Response Body:
Oct 27 10:23:03 docker34-91 kubelet[24165]: 00000000 6b 38 73 00 0a 0c 0a 02 76 31 12 06 53 74 61 74 |k8s.....v1..Stat|
Oct 27 10:23:03 docker34-91 kubelet[24165]: 00000010 75 73 12 81 01 0a 04 0a 00 12 00 12 07 46 61 69 |us...........Fai|
Oct 27 10:23:03 docker34-91 kubelet[24165]: 00000020 6c 75 72 65 1a 33 70 6f 64 73 20 22 6c 64 74 65 |lure.3pods "ldte|
Oct 27 10:23:03 docker34-91 kubelet[24165]: 00000030 73 74 2d 34 62 63 37 39 32 32 63 32 35 2d 32 32 |st-4bc7922c25-22|
Oct 27 10:23:03 docker34-91 kubelet[24165]: 00000040 33 38 31 35 34 35 30 38 2d 78 74 39 34 78 22 20 |38154508-xt94x" |
Oct 27 10:23:03 docker34-91 kubelet[24165]: 00000050 6e 6f 74 20 66 6f 75 6e 64 22 08 4e 6f 74 46 6f |not found".NotFo|
Oct 27 10:23:03 docker34-91 kubelet[24165]: 00000060 75 6e 64 2a 2e 0a 22 6c 64 74 65 73 74 2d 34 62 |und*.."ldtest-4b|
Oct 27 10:23:03 docker34-91 kubelet[24165]: 00000070 63 37 39 32 32 63 32 35 2d 32 32 33 38 31 35 34 |c7922c25-2238154|
Oct 27 10:23:03 docker34-91 kubelet[24165]: 00000080 35 30 38 2d 78 74 39 34 78 12 00 1a 04 70 6f 64 |508-xt94x....pod|
Oct 27 10:23:03 docker34-91 kubelet[24165]: 00000090 73 28 00 30 94 03 1a 00 22 00 |s(.0....".|
Oct 27 10:23:03 docker34-91 kubelet[24165]: I1027 10:23:03.003411 24165 status_manager.go:425] Pod "ldtest-4bc7922c25-2238154508-xt94x" (2727277f-bab3-11e7-92f6-3497f60062c3) does not exist on the server
Oct 27 10:23:03 docker34-91 kubelet[24165]: I1027 10:23:03.003423 24165 status_manager.go:410] Status Manager: syncPod in syncbatch. pod UID: "43dd5201-bab4-11e7-92f6-3497f60062c3"
Oct 27 10:23:03 docker34-91 kubelet[24165]: I1027 10:23:03.003482 24165 round_trippers.go:398] curl -k -v -XGET -H "Accept: application/vnd.kubernetes.protobuf, */*" -H "User-Agent: kubelet/v1.6.4 (linux/amd64) kubernetes/d6f4332" http://172.23.48.211:8080/api/v1/namespaces/ldtest-005/pods/ldtest-g02c441308-3753936377-d6q69
Oct 27 10:23:03 docker34-91 kubelet[24165]: I1027 10:23:03.004051 24165 round_trippers.go:417] GET http://172.23.48.211:8080/api/v1/namespaces/ldtest-005/pods/ldtest-g02c441308-3753936377-d6q69 404 Not Found in 0 milliseconds
Oct 27 10:23:03 docker34-91 kubelet[24165]: I1027 10:23:03.004059 24165 round_trippers.go:423] Response Headers:
Oct 27 10:23:03 docker34-91 kubelet[24165]: I1027 10:23:03.004062 24165 round_trippers.go:426] Content-Type: application/vnd.kubernetes.protobuf
Oct 27 10:23:03 docker34-91 kubelet[24165]: I1027 10:23:03.004066 24165 round_trippers.go:426] Date: Fri, 27 Oct 2017 02:23:03 GMT
Oct 27 10:23:03 docker34-91 kubelet[24165]: I1027 10:23:03.004069 24165 round_trippers.go:426] Content-Length: 154
Oct 27 10:23:03 docker34-91 kubelet[24165]: I1027 10:23:03.004115 24165 request.go:989] Response Body:
Oct 27 10:23:03 docker34-91 kubelet[24165]: 00000000 6b 38 73 00 0a 0c 0a 02 76 31 12 06 53 74 61 74 |k8s.....v1..Stat|
Oct 27 10:23:03 docker34-91 kubelet[24165]: 00000010 75 73 12 81 01 0a 04 0a 00 12 00 12 07 46 61 69 |us...........Fai|
Oct 27 10:23:03 docker34-91 kubelet[24165]: 00000020 6c 75 72 65 1a 33 70 6f 64 73 20 22 6c 64 74 65 |lure.3pods "ldte|
Oct 27 10:23:03 docker34-91 kubelet[24165]: 00000030 73 74 2d 67 30 32 63 34 34 31 33 30 38 2d 33 37 |st-g02c441308-37|
Oct 27 10:23:03 docker34-91 kubelet[24165]: 00000040 35 33 39 33 36 33 37 37 2d 64 36 71 36 39 22 20 |53936377-d6q69" |
Oct 27 10:23:03 docker34-91 kubelet[24165]: 00000050 6e 6f 74 20 66 6f 75 6e 64 22 08 4e 6f 74 46 6f |not found".NotFo|
Oct 27 10:23:03 docker34-91 kubelet[24165]: 00000060 75 6e 64 2a 2e 0a 22 6c 64 74 65 73 74 2d 67 30 |und*.."ldtest-g0|
Oct 27 10:23:03 docker34-91 kubelet[24165]: 00000070 32 63 34 34 31 33 30 38 2d 33 37 35 33 39 33 36 |2c441308-3753936|
Oct 27 10:23:03 docker34-91 kubelet[24165]: 00000080 33 37 37 2d 64 36 71 36 39 12 00 1a 04 70 6f 64 |377-d6q69....pod|
Oct 27 10:23:03 docker34-91 kubelet[24165]: 00000090 73 28 00 30 94 03 1a 00 22 00 |s(.0....".|
Oct 27 10:23:03 docker34-91 kubelet[24165]: I1027 10:23:03.004142 24165 status_manager.go:425] Pod "ldtest-g02c441308-3753936377-d6q69" (43dd5201-bab4-11e7-92f6-3497f60062c3) does not exist on the server
Oct 27 10:23:03 docker34-91 kubelet[24165]: I1027 10:23:03.004148 24165 status_manager.go:410] Status Manager: syncPod in syncbatch. pod UID: "8fd9d66f-bab7-11e7-92f6-3497f60062c3"
Oct 27 10:23:03 docker34-91 kubelet[24165]: I1027 10:23:03.004195 24165 round_trippers.go:398] curl -k -v -XGET -H "Accept: application/vnd.kubernetes.protobuf, */*" -H "User-Agent: kubelet/v1.6.4 (linux/amd64) kubernetes/d6f4332" http://172.23.48.211:8080/api/v1/namespaces/ldtest-005/pods/ldtest-cf2eg79b08-3660220702-x0j2j
Oct 27 10:23:03 docker34-91 kubelet[24165]: I1027 10:23:03.004752 24165 round_trippers.go:417] GET http://172.23.48.211:8080/api/v1/namespaces/ldtest-005/pods/ldtest-cf2eg79b08-3660220702-x0j2j 404 Not Found in 0 milliseconds
Oct 27 10:23:03 docker34-91 kubelet[24165]: I1027 10:23:03.004761 24165 round_trippers.go:423] Response Headers:
Oct 27 10:23:03 docker34-91 kubelet[24165]: I1027 10:23:03.004765 24165 round_trippers.go:426] Date: Fri, 27 Oct 2017 02:23:03 GMT
Oct 27 10:23:03 docker34-91 kubelet[24165]: I1027 10:23:03.004769 24165 round_trippers.go:426] Content-Length: 154
Oct 27 10:23:03 docker34-91 kubelet[24165]: I1027 10:23:03.004773 24165 round_trippers.go:426] Content-Type: application/vnd.kubernetes.protobuf
Oct 27 10:23:03 docker34-91 kubelet[24165]: I1027 10:23:03.004812 24165 request.go:989] Response Body:
Oct 27 10:23:03 docker34-91 kubelet[24165]: 00000000 6b 38 73 00 0a 0c 0a 02 76 31 12 06 53 74 61 74 |k8s.....v1..Stat|
Oct 27 10:23:03 docker34-91 kubelet[24165]: 00000010 75 73 12 81 01 0a 04 0a 00 12 00 12 07 46 61 69 |us...........Fai|
Oct 27 10:23:03 docker34-91 kubelet[24165]: 00000020 6c 75 72 65 1a 33 70 6f 64 73 20 22 6c 64 74 65 |lure.3pods "ldte|
Oct 27 10:23:03 docker34-91 kubelet[24165]: 00000030 73 74 2d 63 66 32 65 67 37 39 62 30 38 2d 33 36 |st-cf2eg79b08-36|
Oct 27 10:23:03 docker34-91 kubelet[24165]: 00000040 36 30 32 32 30 37 30 32 2d 78 30 6a 32 6a 22 20 |60220702-x0j2j" |
Oct 27 10:23:03 docker34-91 kubelet[24165]: 00000050 6e 6f 74 20 66 6f 75 6e 64 22 08 4e 6f 74 46 6f |not found".NotFo|
Oct 27 10:23:03 docker34-91 kubelet[24165]: 00000060 75 6e 64 2a 2e 0a 22 6c 64 74 65 73 74 2d 63 66 |und*.."ldtest-cf|
Oct 27 10:23:03 docker34-91 kubelet[24165]: 00000070 32 65 67 37 39 62 30 38 2d 33 36 36 30 32 32 30 |2eg79b08-3660220|
Oct 27 10:23:03 docker34-91 kubelet[24165]: 00000080 37 30 32 2d 78 30 6a 32 6a 12 00 1a 04 70 6f 64 |702-x0j2j....pod|
Oct 27 10:23:03 docker34-91 kubelet[24165]: 00000090 73 28 00 30 94 03 1a 00 22 00 |s(.0....".|
Oct 27 10:23:03 docker34-91 kubelet[24165]: I1027 10:23:03.004841 24165 status_manager.go:425] Pod "ldtest-cf2eg79b08-3660220702-x0j2j" (8fd9d66f-bab7-11e7-92f6-3497f60062c3) does not exist on the server
Oct 27 10:23:03 docker34-91 kubelet[24165]: I1027 10:23:03.004853 24165 status_manager.go:410] Status Manager: syncPod in syncbatch. pod UID: "eb5a5f4a-baba-11e7-92f6-3497f60062c3"
Oct 27 10:23:03 docker34-91 kubelet[24165]: I1027 10:23:03.004921 24165 round_trippers.go:398] curl -k -v -XGET -H "Accept: application/vnd.kubernetes.protobuf, */*" -H "User-Agent: kubelet/v1.6.4 (linux/amd64) kubernetes/d6f4332" http://172.23.48.211:8080/api/v1/namespaces/ldtest-005/pods/ldtest-9b47680d12-2536408624-jhp18
Oct 27 10:23:03 docker34-91 kubelet[24165]: I1027 10:23:03.005436 24165 round_trippers.go:417] GET http://172.23.48.211:8080/api/v1/namespaces/ldtest-005/pods/ldtest-9b47680d12-2536408624-jhp18 404 Not Found in 0 milliseconds
Oct 27 10:23:03 docker34-91 kubelet[24165]: I1027 10:23:03.005446 24165 round_trippers.go:423] Response Headers:
Oct 27 10:23:03 docker34-91 kubelet[24165]: I1027 10:23:03.005450 24165 round_trippers.go:426] Content-Type: application/vnd.kubernetes.protobuf
Oct 27 10:23:03 docker34-91 kubelet[24165]: I1027 10:23:03.005454 24165 round_trippers.go:426] Date: Fri, 27 Oct 2017 02:23:03 GMT
Oct 27 10:23:03 docker34-91 kubelet[24165]: I1027 10:23:03.005457 24165 round_trippers.go:426] Content-Length: 154
Oct 27 10:23:03 docker34-91 kubelet[24165]: I1027 10:23:03.005499 24165 request.go:989] Response Body:
Oct 27 10:23:03 docker34-91 kubelet[24165]: 00000000 6b 38 73 00 0a 0c 0a 02 76 31 12 06 53 74 61 74 |k8s.....v1..Stat|
Oct 27 10:23:03 docker34-91 kubelet[24165]: 00000010 75 73 12 81 01 0a 04 0a 00 12 00 12 07 46 61 69 |us...........Fai|
Oct 27 10:23:03 docker34-91 kubelet[24165]: 00000020 6c 75 72 65 1a 33 70 6f 64 73 20 22 6c 64 74 65 |lure.3pods "ldte|
Oct 27 10:23:03 docker34-91 kubelet[24165]: 00000030 73 74 2d 39 62 34 37 36 38 30 64 31 32 2d 32 35 |st-9b47680d12-25|
Oct 27 10:23:03 docker34-91 kubelet[24165]: 00000040 33 36 34 30 38 36 32 34 2d 6a 68 70 31 38 22 20 |36408624-jhp18" |
Oct 27 10:23:03 docker34-91 kubelet[24165]: 00000050 6e 6f 74 20 66 6f 75 6e 64 22 08 4e 6f 74 46 6f |not found".NotFo|
Oct 27 10:23:03 docker34-91 kubelet[24165]: 00000060 75 6e 64 2a 2e 0a 22 6c 64 74 65 73 74 2d 39 62 |und*.."ldtest-9b|
Oct 27 10:23:03 docker34-91 kubelet[24165]: 00000070 34 37 36 38 30 64 31 32 2d 32 35 33 36 34 30 38 |47680d12-2536408|
Oct 27 10:23:03 docker34-91 kubelet[24165]: 00000080 36 32 34 2d 6a 68 70 31 38 12 00 1a 04 70 6f 64 |624-jhp18....pod|
Oct 27 10:23:03 docker34-91 kubelet[24165]: 00000090 73 28 00 30 94 03 1a 00 22 00 |s(.0....".|
Oct 27 10:23:03 docker34-91 kubelet[24165]: I1027 10:23:03.005526 24165 status_manager.go:425] Pod "ldtest-9b47680d12-2536408624-jhp18" (eb5a5f4a-baba-11e7-92f6-3497f60062c3) does not exist on the server
Oct 27 10:23:03 docker34-91 kubelet[24165]: I1027 10:23:03.005533 24165 status_manager.go:410] Status Manager: syncPod in syncbatch. pod UID: "2db95639-bab5-11e7-92f6-3497f60062c3"
Oct 27 10:23:03 docker34-91 kubelet[24165]: I1027 10:23:03.005588 24165 round_trippers.go:398] curl -k -v -XGET -H "Accept: application/vnd.kubernetes.protobuf, */*" -H "User-Agent: kubelet/v1.6.4 (linux/amd64) kubernetes/d6f4332" http://172.23.48.211:8080/api/v1/namespaces/ldtest-005/pods/ldtest-5f8ba1eag0-2191624653-dm374
Oct 27 10:23:03 docker34-91 kubelet[24165]: I1027 10:23:03.006150 24165 round_trippers.go:417] GET http://172.23.48.211:8080/api/v1/namespaces/ldtest-005/pods/ldtest-5f8ba1eag0-2191624653-dm374 404 Not Found in 0 milliseconds
Oct 27 10:23:03 docker34-91 kubelet[24165]: I1027 10:23:03.006176 24165 round_trippers.go:423] Response Headers:
Oct 27 10:23:03 docker34-91 kubelet[24165]: I1027 10:23:03.006182 24165 round_trippers.go:426] Date: Fri, 27 Oct 2017 02:23:03 GMT
Oct 27 10:23:03 docker34-91 kubelet[24165]: I1027 10:23:03.006189 24165 round_trippers.go:426] Content-Length: 154
Oct 27 10:23:03 docker34-91 kubelet[24165]: I1027 10:23:03.006195 24165 round_trippers.go:426] Content-Type: application/vnd.kubernetes.protobuf
Oct 27 10:23:03 docker34-91 kubelet[24165]: I1027 10:23:03.006251 24165 request.go:989] Response Body:
Oct 27 10:23:03 docker34-91 kubelet[24165]: 00000000 6b 38 73 00 0a 0c 0a 02 76 31 12 06 53 74 61 74 |k8s.....v1..Stat|
Oct 27 10:23:03 docker34-91 kubelet[24165]: 00000010 75 73 12 81 01 0a 04 0a 00 12 00 12 07 46 61 69 |us...........Fai|
Oct 27 10:23:03 docker34-91 kubelet[24165]: 00000020 6c 75 72 65 1a 33 70 6f 64 73 20 22 6c 64 74 65 |lure.3pods "ldte|
Oct 27 10:23:03 docker34-91 kubelet[24165]: 00000030 73 74 2d 35 66 38 62 61 31 65 61 67 30 2d 32 31 |st-5f8ba1eag0-21|
Oct 27 10:23:03 docker34-91 kubelet[24165]: 00000040 39 31 36 32 34 36 35 33 2d 64 6d 33 37 34 22 20 |91624653-dm374" |
Oct 27 10:23:03 docker34-91 kubelet[24165]: 00000050 6e 6f 74 20 66 6f 75 6e 64 22 08 4e 6f 74 46 6f |not found".NotFo|
Oct 27 10:23:03 docker34-91 kubelet[24165]: 00000060 75 6e 64 2a 2e 0a 22 6c 64 74 65 73 74 2d 35 66 |und*.."ldtest-5f|
Oct 27 10:23:03 docker34-91 kubelet[24165]: 00000070 38 62 61 31 65 61 67 30 2d 32 31 39 31 36 32 34 |8ba1eag0-2191624|
Oct 27 10:23:03 docker34-91 kubelet[24165]: 00000080 36 35 33 2d 64 6d 33 37 34 12 00 1a 04 70 6f 64 |653-dm374....pod|
Oct 27 10:23:03 docker34-91 kubelet[24165]: 00000090 73 28 00 30 94 03 1a 00 22 00 |s(.0....".|
Oct 27 10:23:03 docker34-91 kubelet[24165]: I1027 10:23:03.006297 24165 status_manager.go:425] Pod "ldtest-5f8ba1eag0-2191624653-dm374" (2db95639-bab5-11e7-92f6-3497f60062c3) does not exist on the server
Oct 27 10:23:03 docker34-91 kubelet[24165]: I1027 10:23:03.006330 24165 status_manager.go:410] Status Manager: syncPod in syncbatch. pod UID: "ecf58d7f-bab2-11e7-92f6-3497f60062c3"
Oct 27 10:23:03 docker34-91 kubelet[24165]: I1027 10:23:03.006421 24165 round_trippers.go:398] curl -k -v -XGET -H "Accept: application/vnd.kubernetes.protobuf, */*" -H "User-Agent: kubelet/v1.6.4 (linux/amd64) kubernetes/d6f4332" http://172.23.48.211:8080/api/v1/namespaces/ldtest-005/pods/ldtest-0fe4761ce1-763135991-2gv5x
Oct 27 10:23:03 docker34-91 kubelet[24165]: I1027 10:23:03.006983 24165 round_trippers.go:417] GET http://172.23.48.211:8080/api/v1/namespaces/ldtest-005/pods/ldtest-0fe4761ce1-763135991-2gv5x 404 Not Found in 0 milliseconds
Oct 27 10:23:03 docker34-91 kubelet[24165]: I1027 10:23:03.006995 24165 round_trippers.go:423] Response Headers:
Oct 27 10:23:03 docker34-91 kubelet[24165]: I1027 10:23:03.007001 24165 round_trippers.go:426] Content-Type: application/vnd.kubernetes.protobuf
Oct 27 10:23:03 docker34-91 kubelet[24165]: I1027 10:23:03.007007 24165 round_trippers.go:426] Date: Fri, 27 Oct 2017 02:23:03 GMT
Oct 27 10:23:03 docker34-91 kubelet[24165]: I1027 10:23:03.007014 24165 round_trippers.go:426] Content-Length: 151
Oct 27 10:23:03 docker34-91 kubelet[24165]: I1027 10:23:03.007064 24165 request.go:989] Response Body:
Oct 27 10:23:03 docker34-91 kubelet[24165]: 00000000 6b 38 73 00 0a 0c 0a 02 76 31 12 06 53 74 61 74 |k8s.....v1..Stat|
Oct 27 10:23:03 docker34-91 kubelet[24165]: 00000010 75 73 12 7f 0a 04 0a 00 12 00 12 07 46 61 69 6c |us..........Fail|
Oct 27 10:23:03 docker34-91 kubelet[24165]: 00000020 75 72 65 1a 32 70 6f 64 73 20 22 6c 64 74 65 73 |ure.2pods "ldtes|
Oct 27 10:23:03 docker34-91 kubelet[24165]: 00000030 74 2d 30 66 65 34 37 36 31 63 65 31 2d 37 36 33 |t-0fe4761ce1-763|
Oct 27 10:23:03 docker34-91 kubelet[24165]: 00000040 31 33 35 39 39 31 2d 32 67 76 35 78 22 20 6e 6f |135991-2gv5x" no|
Oct 27 10:23:03 docker34-91 kubelet[24165]: 00000050 74 20 66 6f 75 6e 64 22 08 4e 6f 74 46 6f 75 6e |t found".NotFoun|
Oct 27 10:23:03 docker34-91 kubelet[24165]: 00000060 64 2a 2d 0a 21 6c 64 74 65 73 74 2d 30 66 65 34 |d*-.!ldtest-0fe4|
Oct 27 10:23:03 docker34-91 kubelet[24165]: 00000070 37 36 31 63 65 31 2d 37 36 33 31 33 35 39 39 31 |761ce1-763135991|
Oct 27 10:23:03 docker34-91 kubelet[24165]: 00000080 2d 32 67 76 35 78 12 00 1a 04 70 6f 64 73 28 00 |-2gv5x....pods(.|
Oct 27 10:23:03 docker34-91 kubelet[24165]: 00000090 30 94 03 1a 00 22 00 |0....".|
Oct 27 10:23:03 docker34-91 kubelet[24165]: I1027 10:23:03.007106 24165 status_manager.go:425] Pod "ldtest-0fe4761ce1-763135991-2gv5x" (ecf58d7f-bab2-11e7-92f6-3497f60062c3) does not exist on the server
Oct 27 10:23:03 docker34-91 kubelet[24165]: I1027 10:23:03.077334 24165 config.go:101] Looking for [api file], have seen map[file:{} api:{}]
Oct 27 10:23:03 docker34-91 kubelet[24165]: I1027 10:23:03.177546 24165 config.go:101] Looking for [api file], have seen map[file:{} api:{}]
Oct 27 10:23:03 docker34-91 kubelet[24165]: I1027 10:23:03.277737 24165 config.go:101] Looking for [api file], have seen map[file:{} api:{}]
Oct 27 10:23:03 docker34-91 kubelet[24165]: I1027 10:23:03.377939 24165 config.go:101] Looking for [api file], have seen map[file:{} api:{}]
Oct 27 10:23:03 docker34-91 kubelet[24165]: I1027 10:23:03.478169 24165 config.go:101] Looking for [api file], have seen map[file:{} api:{}]
Oct 27 10:23:03 docker34-91 kubelet[24165]: I1027 10:23:03.578369 24165 config.go:101] Looking for [api file], have seen map[file:{} api:{}]
Oct 27 10:23:03 docker34-91 kubelet[24165]: I1027 10:23:03.603649 24165 eviction_manager.go:197] eviction manager: synchronize housekeeping
Oct 27 10:23:03 docker34-91 kubelet[24165]: I1027 10:23:03.678573 24165 config.go:101] Looking for [api file], have seen map[file:{} api:{}]
Oct 27 10:23:03 docker34-91 kubelet[24165]: I1027 10:23:03.682080 24165 summary.go:389] Missing default interface "eth0" for node:172.23.34.91
Oct 27 10:23:03 docker34-91 kubelet[24165]: I1027 10:23:03.682132 24165 summary.go:389] Missing default interface "eth0" for pod:kube-system_kube-proxy-qcft5
Oct 27 10:23:03 docker34-91 kubelet[24165]: I1027 10:23:03.682176 24165 helpers.go:744] eviction manager: observations: signal=imagefs.available, available: 515801344Ki, capacity: 511750Mi, time: 2017-10-27 10:22:56.499173632 +0800 CST
Oct 27 10:23:03 docker34-91 kubelet[24165]: I1027 10:23:03.682197 24165 helpers.go:744] eviction manager: observations: signal=imagefs.inodesFree, available: 523222251, capacity: 500Mi, time: 2017-10-27 10:22:56.499173632 +0800 CST
Oct 27 10:23:03 docker34-91 kubelet[24165]: I1027 10:23:03.682203 24165 helpers.go:746] eviction manager: observations: signal=allocatableMemory.available, available: 65544340Ki, capacity: 65581868Ki
Oct 27 10:23:03 docker34-91 kubelet[24165]: I1027 10:23:03.682207 24165 helpers.go:744] eviction manager: observations: signal=memory.available, available: 57973412Ki, capacity: 65684268Ki, time: 2017-10-27 10:22:56.499173632 +0800 CST
Oct 27 10:23:03 docker34-91 kubelet[24165]: I1027 10:23:03.682213 24165 helpers.go:744] eviction manager: observations: signal=nodefs.available, available: 99175128Ki, capacity: 102350Mi, time: 2017-10-27 10:22:56.499173632 +0800 CST
Oct 27 10:23:03 docker34-91 kubelet[24165]: I1027 10:23:03.682218 24165 helpers.go:744] eviction manager: observations: signal=nodefs.inodesFree, available: 104818019, capacity: 100Mi, time: 2017-10-27 10:22:56.499173632 +0800 CST
Oct 27 10:23:03 docker34-91 kubelet[24165]: I1027 10:23:03.682233 24165 eviction_manager.go:292] eviction manager: no resources are starved
Oct 27 10:23:03 docker34-91 kubelet[24165]: I1027 10:23:03.778792 24165 config.go:101] Looking for [api file], have seen map[file:{} api:{}]
Oct 27 10:23:03 docker34-91 kubelet[24165]: I1027 10:23:03.879040 24165 config.go:101] Looking for [api file], have seen map[file:{} api:{}]
Oct 27 10:23:03 docker34-91 kubelet[24165]: I1027 10:23:03.979304 24165 config.go:101] Looking for [api file], have seen map[file:{} api:{}]
Oct 27 10:23:04 docker34-91 kubelet[24165]: I1027 10:23:04.079534 24165 config.go:101] Looking for [api file], have seen map[file:{} api:{}]
Oct 27 10:23:04 docker34-91 kubelet[24165]: I1027 10:23:04.179753 24165 config.go:101] Looking for [api file], have seen map[api:{} file:{}]
Oct 27 10:23:04 docker34-91 kubelet[24165]: I1027 10:23:04.280026 24165 config.go:101] Looking for [api file], have seen map[api:{} file:{}]
Oct 27 10:23:04 docker34-91 kubelet[24165]: I1027 10:23:04.380246 24165 config.go:101] Looking for [api file], have seen map[file:{} api:{}]
Oct 27 10:23:04 docker34-91 kubelet[24165]: I1027 10:23:04.480450 24165 config.go:101] Looking for [api file], have seen map[file:{} api:{}]
Oct 27 10:23:04 docker34-91 kubelet[24165]: I1027 10:23:04.580695 24165 config.go:101] Looking for [api file], have seen map[file:{} api:{}]
Oct 27 10:23:04 docker34-91 kubelet[24165]: I1027 10:23:04.680957 24165 config.go:101] Looking for [api file], have seen map[file:{} api:{}]
Oct 27 10:23:04 docker34-91 kubelet[24165]: I1027 10:23:04.781224 24165 config.go:101] Looking for [api file], have seen map[file:{} api:{}]
Oct 27 10:23:04 docker34-91 kubelet[24165]: I1027 10:23:04.881418 24165 config.go:101] Looking for [api file], have seen map[file:{} api:{}]
Oct 27 10:23:04 docker34-91 kubelet[24165]: I1027 10:23:04.981643 24165 config.go:101] Looking for [api file], have seen map[file:{} api:{}]
Oct 27 10:23:05 docker34-91 kubelet[24165]: I1027 10:23:05.081882 24165 config.go:101] Looking for [api file], have seen map[file:{} api:{}]
Oct 27 10:23:05 docker34-91 kubelet[24165]: I1027 10:23:05.182810 24165 config.go:101] Looking for [api file], have seen map[file:{} api:{}]
Oct 27 10:23:05 docker34-91 kubelet[24165]: I1027 10:23:05.283410 24165 config.go:101] Looking for [api file], have seen map[file:{} api:{}]
Oct 27 10:23:05 docker34-91 kubelet[24165]: I1027 10:23:05.383626 24165 config.go:101] Looking for [api file], have seen map[file:{} api:{}]
Oct 27 10:23:05 docker34-91 kubelet[24165]: I1027 10:23:05.483942 24165 config.go:101] Looking for [api file], have seen map[file:{} api:{}]
Oct 27 10:23:05 docker34-91 kubelet[24165]: I1027 10:23:05.584211 24165 config.go:101] Looking for [api file], have seen map[file:{} api:{}]
Oct 27 10:23:05 docker34-91 kubelet[24165]: I1027 10:23:05.684460 24165 config.go:101] Looking for [api file], have seen map[file:{} api:{}]
Oct 27 10:23:05 docker34-91 kubelet[24165]: I1027 10:23:05.784699 24165 config.go:101] Looking for [api file], have seen map[file:{} api:{}]
Oct 27 10:23:05 docker34-91 kubelet[24165]: I1027 10:23:05.884949 24165 config.go:101] Looking for [api file], have seen map[file:{} api:{}]
Oct 27 10:23:05 docker34-91 kubelet[24165]: I1027 10:23:05.960855 24165 factory.go:115] Factory "docker" was unable to handle container "/system.slice/data-docker-overlay-c0d3c4b3834cfe9f12cd5c35345cab9c8e71bb64c689c8aea7a458c119a5a54e-merged.mount"
Oct 27 10:23:05 docker34-91 kubelet[24165]: I1027 10:23:05.960885 24165 factory.go:108] Factory "systemd" can handle container "/system.slice/data-docker-overlay-c0d3c4b3834cfe9f12cd5c35345cab9c8e71bb64c689c8aea7a458c119a5a54e-merged.mount", but ignoring.
Oct 27 10:23:05 docker34-91 kubelet[24165]: I1027 10:23:05.960906 24165 manager.go:867] ignoring container "/system.slice/data-docker-overlay-c0d3c4b3834cfe9f12cd5c35345cab9c8e71bb64c689c8aea7a458c119a5a54e-merged.mount"
Oct 27 10:23:05 docker34-91 kubelet[24165]: I1027 10:23:05.960912 24165 factory.go:115] Factory "docker" was unable to handle container "/system.slice/data-docker-overlay-ce9656ff9d3cd03baaf93e42d0874377fa37bfde6c9353b3ba954c90bf4332f3-merged.mount"
Oct 27 10:23:05 docker34-91 kubelet[24165]: I1027 10:23:05.960919 24165 factory.go:108] Factory "systemd" can handle container "/system.slice/data-docker-overlay-ce9656ff9d3cd03baaf93e42d0874377fa37bfde6c9353b3ba954c90bf4332f3-merged.mount", but ignoring.
Oct 27 10:23:05 docker34-91 kubelet[24165]: I1027 10:23:05.960926 24165 manager.go:867] ignoring container "/system.slice/data-docker-overlay-ce9656ff9d3cd03baaf93e42d0874377fa37bfde6c9353b3ba954c90bf4332f3-merged.mount"
Oct 27 10:23:05 docker34-91 kubelet[24165]: I1027 10:23:05.960931 24165 factory.go:115] Factory "docker" was unable to handle container "/system.slice/data-docker-overlay-b3600c0fe81445773b9241c5d1da8b1f97612d0a235f8b32139478a5717f79e1-merged.mount"
Oct 27 10:23:05 docker34-91 kubelet[24165]: I1027 10:23:05.960937 24165 factory.go:108] Factory "systemd" can handle container "/system.slice/data-docker-overlay-b3600c0fe81445773b9241c5d1da8b1f97612d0a235f8b32139478a5717f79e1-merged.mount", but ignoring.
Oct 27 10:23:05 docker34-91 kubelet[24165]: I1027 10:23:05.960944 24165 manager.go:867] ignoring container "/system.slice/data-docker-overlay-b3600c0fe81445773b9241c5d1da8b1f97612d0a235f8b32139478a5717f79e1-merged.mount"
Oct 27 10:23:05 docker34-91 kubelet[24165]: I1027 10:23:05.960949 24165 factory.go:115] Factory "docker" was unable to handle container "/system.slice/data-docker-overlay-ed2fe0d57c56cf6b051e1bda1ca0185ceef4756b1a8f9af4c19f4e512bcc60f4-merged.mount"
Oct 27 10:23:05 docker34-91 kubelet[24165]: I1027 10:23:05.960955 24165 factory.go:108] Factory "systemd" can handle container "/system.slice/data-docker-overlay-ed2fe0d57c56cf6b051e1bda1ca0185ceef4756b1a8f9af4c19f4e512bcc60f4-merged.mount", but ignoring.
Oct 27 10:23:05 docker34-91 kubelet[24165]: I1027 10:23:05.960979 24165 manager.go:867] ignoring container "/system.slice/data-docker-overlay-ed2fe0d57c56cf6b051e1bda1ca0185ceef4756b1a8f9af4c19f4e512bcc60f4-merged.mount"
Oct 27 10:23:05 docker34-91 kubelet[24165]: I1027 10:23:05.960984 24165 factory.go:115] Factory "docker" was unable to handle container "/system.slice/data-docker-overlay-0ba6483a0117c539493cd269be9f87d31d1d61aa813e7e0381c5f5d8b0623275-merged.mount"
Oct 27 10:23:05 docker34-91 kubelet[24165]: I1027 10:23:05.960990 24165 factory.go:108] Factory "systemd" can handle container "/system.slice/data-docker-overlay-0ba6483a0117c539493cd269be9f87d31d1d61aa813e7e0381c5f5d8b0623275-merged.mount", but ignoring.
Oct 27 10:23:05 docker34-91 kubelet[24165]: I1027 10:23:05.960997 24165 manager.go:867] ignoring container "/system.slice/data-docker-overlay-0ba6483a0117c539493cd269be9f87d31d1d61aa813e7e0381c5f5d8b0623275-merged.mount"
Similar issue here:
Oct 28 09:15:38 ip-10-72-17-119.us-west-2.compute.internal kubelet[3299]: E1028 09:15:38.711430 3299 pod_workers.go:182] Error syncing pod 7d3b94f3-afa7-11e7-aaec-06936c368d26 ("pickup-566929041-bn8t9_staging(7d3b94f3-afa7-11e7-aaec-06936c368d26)"), skipping: rpc error: code = 4 desc = context deadline exceeded
Oct 28 09:15:51 ip-10-72-17-119.us-west-2.compute.internal kubelet[3299]: E1028 09:15:51.439135 3299 kuberuntime_manager.go:843] PodSandboxStatus of sandbox "9c1c1f2d4a9d277a41a97593c330f41e00ca12f3ad858c19f61fd155d18d795e" for pod "pickup-566929041-bn8t9_staging(7d3b94f3-afa7-11e7-aaec-06936c368d26)" error: rpc error: code = 4 desc = context deadline exceeded
Oct 28 09:15:51 ip-10-72-17-119.us-west-2.compute.internal kubelet[3299]: E1028 09:15:51.439188 3299 generic.go:241] PLEG: Ignoring events for pod pickup-566929041-bn8t9/staging: rpc error: code = 4 desc = context deadline exceeded
Oct 28 09:15:51 ip-10-72-17-119.us-west-2.compute.internal kubelet[3299]: E1028 09:15:51.711168 3299 pod_workers.go:182] Error syncing pod 7d3b94f3-afa7-11e7-aaec-06936c368d26 ("pickup-566929041-bn8t9_staging(7d3b94f3-afa7-11e7-aaec-06936c368d26)"), skipping: rpc error: code = 4 desc = context deadline exceeded
Oct 28 09:16:03 ip-10-72-17-119.us-west-2.compute.internal kubelet[3299]: E1028 09:16:03.711164 3299 pod_workers.go:182] Error syncing pod 7d3b94f3-afa7-11e7-aaec-06936c368d26 ("pickup-566929041-bn8t9_staging(7d3b94f3-afa7-11e7-aaec-06936c368d26)"), skipping: rpc error: code = 4 desc = context deadline exceeded
Oct 28 09:16:18 ip-10-72-17-119.us-west-2.compute.internal kubelet[3299]: E1028 09:16:18.715381 3299 pod_workers.go:182] Error syncing pod 7d3b94f3-afa7-11e7-aaec-06936c368d26 ("pickup-566929041-bn8t9_staging(7d3b94f3-afa7-11e7-aaec-06936c368d26)"), skipping: rpc error: code = 4 desc = context deadline exceeded
Oct 28 09:16:33 ip-10-72-17-119.us-west-2.compute.internal kubelet[3299]: E1028 09:16:33.711198 3299 pod_workers.go:182] Error syncing pod 7d3b94f3-afa7-11e7-aaec-06936c368d26 ("pickup-566929041-bn8t9_staging(7d3b94f3-afa7-11e7-aaec-06936c368d26)"), skipping: rpc error: code = 4 desc = context deadline exceeded
Oct 28 09:16:46 ip-10-72-17-119.us-west-2.compute.internal kubelet[3299]: E1028 09:16:46.712983 3299 pod_workers.go:182] Error syncing pod 7d3b94f3-afa7-11e7-aaec-06936c368d26 ("pickup-566929041-bn8t9_staging(7d3b94f3-afa7-11e7-aaec-06936c368d26)"), skipping: rpc error: code = 4 desc = context deadline exceeded
Oct 28 09:16:51 ip-10-72-17-119.us-west-2.compute.internal kubelet[3299]: I1028 09:16:51.711142 3299 kubelet.go:1820] skipping pod synchronization - [PLEG is not healthy: pleg was last seen active 3m0.31269053s ago; threshold is 3m0s]
Oct 28 09:16:56 ip-10-72-17-119.us-west-2.compute.internal kubelet[3299]: I1028 09:16:56.711341 3299 kubelet.go:1820] skipping pod synchronization - [PLEG is not healthy: pleg was last seen active 3m5.312886434s ago; threshold is 3m0s]
Oct 28 09:17:01 ip-10-72-17-119.us-west-2.compute.internal kubelet[3299]: I1028 09:17:01.351771 3299 kubelet_node_status.go:734] Node became not ready: {Type:Ready Status:False LastHeartbeatTime:2017-10-28 09:17:01.35173325 +0000 UTC LastTransitionTime:2017-10-28 09:17:01.35173325 +0000 UTC Reason:KubeletNotReady Message:PLEG is not healthy: pleg was last seen active 3m9.95330596s ago; threshold is 3m0s}
Oct 28 09:17:01 ip-10-72-17-119.us-west-2.compute.internal kubelet[3299]: I1028 09:17:01.711552 3299 kubelet.go:1820] skipping pod synchronization - [PLEG is not healthy: pleg was last seen active 3m10.31309378s ago; threshold is 3m0s]
Oct 28 09:17:06 ip-10-72-17-119.us-west-2.compute.internal kubelet[3299]: I1028 09:17:06.711871 3299 kubelet.go:1820] skipping pod synchronization - [PLEG is not healthy: pleg was last seen active 3m15.313406671s ago; threshold is 3m0s]
Oct 28 09:17:11 ip-10-72-17-119.us-west-2.compute.internal kubelet[3299]: I1028 09:17:11.712162 3299 kubelet.go:1820] skipping pod synchronization - [PLEG is not healthy: pleg was last seen active 3m20.313691126s ago; threshold is 3m0s]
Oct 28 09:17:12 ip-10-72-17-119.us-west-2.compute.internal kubelet[3299]: 2017/10/28 09:17:12 transport: http2Server.HandleStreams failed to read frame: read unix /var/run/dockershim.sock->@: use of closed network connection
Oct 28 09:17:12 ip-10-72-17-119.us-west-2.compute.internal kubelet[3299]: 2017/10/28 09:17:12 transport: http2Client.notifyError got notified that the client transport was broken EOF.
Oct 28 09:17:12 ip-10-72-17-119.us-west-2.compute.internal kubelet[3299]: 2017/10/28 09:17:12 grpc: addrConn.resetTransport failed to create client transport: connection error: desc = "transport: dial unix /var/run/dockershim.sock: connect: no such file or directory"; Reconnecting to {/var/run/dockershim.sock <nil>}
Oct 28 09:17:12 ip-10-72-17-119.us-west-2.compute.internal kubelet[3299]: E1028 09:17:12.556535 3299 kuberuntime_manager.go:843] PodSandboxStatus of sandbox "9c1c1f2d4a9d277a41a97593c330f41e00ca12f3ad858c19f61fd155d18d795e" for pod "pickup-566929041-bn8t9_staging(7d3b94f3-afa7-11e7-aaec-06936c368d26)" error: rpc error: code = 13 desc = transport is closing
After these messages, kubelet entered a restart loop.
Oct 28 09:17:12 ip-10-72-17-119.us-west-2.compute.internal systemd[1]: kube-kubelet.service: Main process exited, code=exited, status=1/FAILURE
Oct 28 09:18:42 ip-10-72-17-119.us-west-2.compute.internal systemd[1]: kube-kubelet.service: State 'stop-final-sigterm' timed out. Killing.
Oct 28 09:18:42 ip-10-72-17-119.us-west-2.compute.internal systemd[1]: kube-kubelet.service: Killing process 1661 (calico) with signal SIGKILL.
Oct 28 09:20:12 ip-10-72-17-119.us-west-2.compute.internal systemd[1]: kube-kubelet.service: Processes still around after final SIGKILL. Entering failed mode.
Oct 28 09:20:12 ip-10-72-17-119.us-west-2.compute.internal systemd[1]: Stopped Kubernetes Kubelet.
Oct 28 09:20:12 ip-10-72-17-119.us-west-2.compute.internal systemd[1]: kube-kubelet.service: Unit entered failed state.
Oct 28 09:20:12 ip-10-72-17-119.us-west-2.compute.internal systemd[1]: kube-kubelet.service: Failed with result 'exit-code'.
The last messages were the following; it looks like a Docker problem.
Oct 28 09:17:12 ip-10-72-17-119.us-west-2.compute.internal kubelet[3299]: 2017/10/28 09:17:12 transport: http2Server.HandleStreams failed to read frame: read unix /var/run/dockershim.sock->@: use of closed network connection
Oct 28 09:17:12 ip-10-72-17-119.us-west-2.compute.internal kubelet[3299]: 2017/10/28 09:17:12 transport: http2Client.notifyError got notified that the client transport was broken EOF.
Oct 28 09:17:12 ip-10-72-17-119.us-west-2.compute.internal kubelet[3299]: 2017/10/28 09:17:12 grpc: addrConn.resetTransport failed to create client transport: connection error: desc = "transport: dial unix /var/run/dockershim.sock: connect: no such file or directory"; Reconnecting to {/var/run/dockershim.sock <nil>}
The last messages are from dockershim. Hopefully these logs are useful.
Hi, we run Kubernetes 1.7.10, based on kops @ AWS, using Calico and CoreOS.
We have the same PLEG issue:
Ready False KubeletNotReady PLEG is not healthy: pleg was last seen active 3m29.396986143s ago; threshold is 3m0s
The only additional issue we have, which seems to happen mostly since 1.7.8, occurs when we deploy: for example, when we roll out a new version of an app, the old ReplicaSet is scaled down while the new ReplicaSet is spun up, and pods from the previous deployment version stay stuck in the Terminated state.
We then force kill them manually.
Same PLEG issue here on k8s 1.8.1
+1
1.6.9
Using Docker 1.12.6
+1
1.8.2
+1
1.6.0
+1. Nodes going NotReady has been happening almost constantly over the past two days since upgrading to Kubernetes 1.8.5. I think the problem in my case was that the cluster autoscaler had not been upgraded. Since upgrading the autoscaler to 1.03 (helm chart 0.3.0), no more nodes have shown up as NotReady. The cluster seems stable again.
Docker keeps hanging, and the host ends up inactive.
Same here on 1.8.5.
Both upgraded from a lower version and created from scratch.
# free -mg
total used free shared buff/cache available
Mem: 15 2 8 0 5 12
Swap: 15 0 15
top - 04:34:39 up 24 days, 6:23, 2 users, load average: 31.56, 83.38, 66.29
Tasks: 432 total, 5 running, 427 sleeping, 0 stopped, 0 zombie
%Cpu(s): 9.2 us, 1.9 sy, 0.0 ni, 87.5 id, 1.3 wa, 0.0 hi, 0.1 si, 0.0 st
KiB Mem : 16323064 total, 8650144 free, 2417236 used, 5255684 buff/cache
KiB Swap: 16665596 total, 16646344 free, 19252 used. 12595460 avail Mem
PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
31905 root 20 0 1622320 194096 51280 S 14.9 1.2 698:10.66 kubelet
19402 root 20 0 12560 9696 1424 R 10.3 0.1 442:05.00 memtester
2626 root 20 0 12560 9660 1392 R 9.6 0.1 446:41.38 memtester
8680 root 20 0 12560 9660 1396 R 9.6 0.1 444:34.38 memtester
15004 root 20 0 12560 9704 1432 R 9.6 0.1 443:04.98 memtester
1663 root 20 0 8424940 424912 20556 S 4.6 2.6 2809:24 dockerd
409 root 20 0 49940 37068 20648 S 2.3 0.2 144:03.37 calico-felix
551 root 20 0 631788 20952 11824 S 1.3 0.1 100:36.78 costor
9527 root 20 0 10.529g 24800 13612 S 1.0 0.2 3:43.55 etcd
2608 root 20 0 421936 6040 3288 S 0.7 0.0 31:29.78 containerd-shim
4136 root 20 0 780344 24580 12316 S 0.7 0.2 45:58.60 costor
4208 root 20 0 755756 22208 12176 S 0.7 0.1 41:49.58 costor
8665 root 20 0 210344 5960 3208 S 0.7 0.0 31:27.75 cont
So far I have found the following:
Docker Storage Setup was configured to use 80% of the thin pool, while kubelet's hard eviction was at 10%. Neither of the two was aligned with the other.
Docker crashed internally in some form, and kubelet produced this PLEG error.
After I raised kubelet's hard eviction (imagefs.available) to 20%, above the Docker setup, kubelet started deleting old images.
With 1.8 the change from image-gc-threshold to hard-eviction happened, and I had picked wrong, mismatched parameters.
I will keep observing the cluster from here on.
kube: 1.8.5
Docker: 1.12.6
OS: RHEL7
Looking at the kubelet_pleg_relist_latency_microseconds metric from Prometheus, it looks suspicious.
kops installed kube 1.8.4 on CoreOS.
docker info
Containers: 246
Running: 222
Paused: 0
Stopped: 24
Images: 30
Server Version: 17.09.0-ce
Storage Driver: overlay
Backing Filesystem: extfs
Supports d_type: true
Logging Driver: json-file
Cgroup Driver: cgroupfs
Plugins:
Volume: local
Network: bridge host macvlan null overlay
Log: awslogs fluentd gcplogs gelf journald json-file logentries splunk syslog
Swarm: inactive
Runtimes: runc
Default Runtime: runc
Init Binary: docker-init
containerd version: 06b9cb35161009dcb7123345749fef02f7cea8e0
runc version: 3f2f8b84a77f73d38244dd690525642a72156c64
init version: v0.13.2 (expected: 949e6facb77383876aeff8a6944dde66b3089574)
Security Options:
seccomp
Profile: default
selinux
Kernel Version: 4.13.16-coreos-r2
Operating System: Container Linux by CoreOS 1576.4.0 (Ladybug)
OSType: linux
Architecture: x86_64
CPUs: 8
Total Memory: 14.69GiB
Name: ip-172-20-120-53.eu-west-1.compute.internal
ID: SI53:ECLM:HXFE:LOVY:STTS:C4X2:WRFK:UGBN:7NYP:4N3E:MZGS:EAVM
Docker Root Dir: /var/lib/docker
Debug Mode (client): false
Debug Mode (server): false
Registry: https://index.docker.io/v1/
Experimental: false
Insecure Registries:
127.0.0.0/8
Live Restore Enabled: false
+1
Origin v3.7.0
kubernetes v1.7.6
docker v1.12.6
OS CentOS 7.4
It seems that runtime container GC affects pod creation and termination.
I will report back on what happens after disabling GC.
In my case, the CNI did not handle the situation.
According to my analysis, the code sequence is as follows:
1. kuberuntime_gc.go: client.StopPodSandbox (Timeout Default: 2m)
-> docker_sandbox.go: StopPodSandbox
-> cni.go: TearDownPod
-> CNI deleteFromNetwork (Timeout Default: 3m) <- Nothing gonna happen if CNI doesn't handle this situation.
-> docker_service.go: StopContainer
2. kuberuntime_gc.go: client.RemovePodSandbox
StopPodSandbox raises a timeout exception and returns without handling it, and the sandbox is then removed.
However, even after StopPodSandbox has timed out, the CNI process is still running.
This starves the kubelet threads, which end up tied up by the CNI processes, and as a result kubelet cannot monitor PLEG properly.
I resolved this by changing the CNI to return when CNI_NS is empty (hoping that is the right thing to do).
(By the way, we use kuryr-kubernetes as the CNI plugin.)
I hope this helps everyone.
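Purely as an illustration of the early return described above (a rough sketch in Go; the argument struct and function names are hypothetical, not the actual kuryr-kubernetes code):
package main

import "fmt"

// delArgs is a hypothetical stand-in for what a CNI plugin receives for the
// DEL command (CNI_CONTAINERID, CNI_NETNS, ...).
type delArgs struct {
	ContainerID string
	Netns       string // empty when the sandbox netns is already gone
}

// cmdDel sketches the fix: if the network namespace is empty, return
// immediately instead of blocking, so StopPodSandbox (and therefore PLEG)
// is not held up by a teardown that can never complete.
func cmdDel(args delArgs) error {
	if args.Netns == "" {
		return nil // nothing left to tear down
	}
	// ... normal teardown would go here: release IPAM, remove interfaces, etc.
	return nil
}

func main() {
	fmt.Println(cmdDel(delArgs{ContainerID: "abc123", Netns: ""}))
}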
@esevan Could you submit a patch for that?
@rphillips This bug is actually closer to a CNI bug, so after I have investigated the behavior more closely and am sure, I will upload it to openstack/kuryr-kubernetes.
In our case this is related to https://github.com/moby/moby/issues/33820
When stopping a Docker container times out, the node starts flapping between Ready/NotReady with PLEG messages.
Rolling Docker back to the previous version fixed the issue (17.09-ce -> 12.06).
Same error logs with kubelet v1.9.1.
...
Jan 15 12:36:52 l23-27-101 kubelet[7335]: I0115 12:36:52.884617 7335 status_manager.go:136] Kubernetes client is nil, not starting status manager.
Jan 15 12:36:52 l23-27-101 kubelet[7335]: I0115 12:36:52.884636 7335 kubelet.go:1767] Starting kubelet main sync loop.
Jan 15 12:36:52 l23-27-101 kubelet[7335]: I0115 12:36:52.884692 7335 kubelet.go:1778] skipping pod synchronization - [container runtime is down PLEG is not healthy: pleg was last seen active 2562047h47m16.854775807s ago; threshold is 3m0s]
Jan 15 12:36:52 l23-27-101 kubelet[7335]: E0115 12:36:52.884788 7335 container_manager_linux.go:583] [ContainerManager]: Fail to get rootfs information unable to find data for container /
Jan 15 12:36:52 l23-27-101 kubelet[7335]: I0115 12:36:52.885001 7335 volume_manager.go:247] Starting Kubelet Volume Manager
...
Is anyone having this issue with docker > 12.6? (Apart from the unsupported version 17.09.)
I am wondering whether switching to 13.1 or 17.06 would help.
@sybnex We have this issue on a 17.03 cluster. It looks very much like a CNI bug.
For me this happened because kubelet was using a huge amount of CPU to run housekeeping tasks, leaving no CPU time for Docker. Shortening the housekeeping interval solved the problem.
@esevan: It would be great if you could get that into kuryr-kubernetes :-)
For reference, we use Origin 1.5 / Kubernetes 1.5 with Kuryr (the first version) without any problems :)
@livelace Is there any reason you are not using a later version?
@celebdor No need, everything works :) We use Origin + Openstack, and these versions cover all our needs. We don't need new Kubernetes/Openstack features, and Kuryr works. Problems may come when two additional teams join the infrastructure.
The default pleg-relist-threshold is 3 minutes.
Why not make pleg-relist-threshold configurable, so that a larger value can be set?
I made a PR to do that.
Could someone take a look?
https://github.com/kubernetes/kubernetes/pull/58279
I am confused about PLEG and the ProbeManager.
PLEG needs to keep the pods and containers on the node healthy.
The ProbeManager keeps the containers on the node healthy.
Why is this split into two modules?
When the ProbeManager detects that a container has stopped, it restarts the container; at the same time, if PLEG detects that the container has stopped, doesn't PLEG create an event telling the kubelet to do the same thing?
+1
Kubernetes v1.8.4
@celebdor After updating the CNI to a daemonized one, things stabilized without CNI hiccups.
+1
kubernetes v1.9.2
docker 17.03.2-ce
+1
kubernetes v1.9.2
docker 17.03.2-ce
Error entries from the kubelet log:
Feb 27 16:19:12 node-2 kubelet: E0227 16:19:12.839866 47544 remote_runtime.go:169] ListPodSandbox with filter nil from runtime service failed: rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
Feb 27 16:19:12 node-2 kubelet: E0227 16:19:12.839919 47544 kuberuntime_sandbox.go:192] ListPodSandbox failed: rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
Feb 27 16:19:12 node-2 kubelet: E0227 16:19:12.839937 47544 generic.go:197] GenericPLEG: Unable to retrieve pods: rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
kubelet uses dockerclient (httpClient) to call ContainerList (all statuses && io.kubernetes.docker.type == "podsandbox") with a 2-minute timeout.
docker ps -a --filter "label=io.kubernetes.docker.type=podsandbox"
Running the command directly when the node goes NotReady may help with debugging.
Below is the Do request code from dockerclient; this error appears to be a timeout:
if err, ok := err.(net.Error); ok {
if err.Timeout() {
return serverResp, ErrorConnectionFailed(cli.host)
}
if !err.Temporary() {
if strings.Contains(err.Error(), "connection refused") || strings.Contains(err.Error(), "dial unix") {
return serverResp, ErrorConnectionFailed(cli.host)
}
}
}
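For debugging, the same query can also be reproduced from a small Go program with an explicit deadline, mirroring the 2-minute timeout mentioned above. This is only a sketch and assumes the classic github.com/docker/docker client API (ContainerList with a label filter) is available in your vendored version:
package main

import (
	"context"
	"fmt"
	"log"
	"time"

	"github.com/docker/docker/api/types"
	"github.com/docker/docker/api/types/filters"
	"github.com/docker/docker/client"
)

func main() {
	// Talks to the local daemon (unix:///var/run/docker.sock by default).
	cli, err := client.NewClientWithOpts(client.FromEnv)
	if err != nil {
		log.Fatal(err)
	}

	// Same 2-minute deadline the kubelet applies to runtime requests.
	ctx, cancel := context.WithTimeout(context.Background(), 2*time.Minute)
	defer cancel()

	// Equivalent of: docker ps -a --filter "label=io.kubernetes.docker.type=podsandbox"
	f := filters.NewArgs()
	f.Add("label", "io.kubernetes.docker.type=podsandbox")

	containers, err := cli.ContainerList(ctx, types.ContainerListOptions{All: true, Filters: f})
	if err != nil {
		// A hang here surfaces as the same "context deadline exceeded" seen in the kubelet logs.
		log.Fatal(err)
	}
	for _, c := range containers {
		fmt.Println(c.ID[:12], c.State, c.Names)
	}
}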
+1
kube 1.8.4
docker 17.09.1-ce
Edit:
kube-aws 0.9.9
+1
Kubernetes v1.9.3
docker 17.12.0-ce (I know it is not officially supported)
weaveworks/weave-kube: 2.2.0
Ubuntu 16.04.3 LTS || kernel: 4.4.0-112
Installed via kubeadm with master + workers (only the workers show this Ready/NotReady behavior, not the master).
+1
Kubernetes: 1.8.8
Docker: 1.12.6-cs13
Cloud provider: GCE
OS: Ubuntu 16.04.3 LTS
Kernel: 4.13.0-1011-gcp
Install tool: kubeadm
Using flannel for networking
The issue in my environment was fixed by this commit:
https://github.com/moby/moby/pull/31273/commits/8e425ebc422876ddf2ffb3beaa5a0443a6097e46
This is a helpful link about the docker ps hang:
https://github.com/moby/moby/pull/31273
Update: I actually rolled back to docker 1.13.1, and the commit above is not in docker 1.13.1.
+1
Kubernetes: 1.8.9
Docker: 17.09.1-ce
Cloud provider: AWS
OS: CoreOS 1632.3.0
Kernel: 4.14.19-coreos
Install tool: kops
Calico 2.6.6 for networking
To work around this issue I am using an older CoreOS version (1520.9.0), which uses docker 1.12.6.
Since this change there has been no flapping issue.
+1
Kubernetes: 1.9.3
Docker: 17.09.1-ce
Cloud provider: AWS
OS: CoreOS 1632.3.0
Kernel: 4.14.19-coreos
Install tool: kops
Weave
+1
Kubernetes: 1.9.6
Docker: 17.12.0-ce
OS: Redhat 7.4
Kernel: 3.10.0-693.el7.x86_64
CNI: flannel
For reference, on the latest Kubernetes 1.10 the validated Docker versions are the same as for v1.9: 1.11.2 to 1.13.1 and 17.03.x.
In my case, rolling back to 1.12.6 did not help.
We observed the same issue:
Kubernetes: 1.9.6
Docker: 17.12.0-ce
OS: Ubuntu 16.04
CNI: Weave
What fixed it for us was downgrading to Docker 17.03.
We had the same issue, but it seems to have been fixed by upgrading to Debian Stretch. The cluster is deployed with kops and runs on AWS.
Kubernetes: 1.8.7
Docker: 1.13.1
OS: Debian Stretch
CNI: Calico
Kernel: 4.9.0-5-amd64
By default I believe Debian Jessie was being used, with kernel version 4.4, and it was not working properly.
This issue occurred in our ENV, and here is our analysis of it.
k8s version 1.7/1.8
The stack information below is from k8s 1.7.
Because of a network plugin bug, the environment had a large number of existing containers (more than 1k).
We restarted kubelet, and kubelet stayed unhealthy.
We traced the logs and the stacks.
When PLEG runs the relist operation for the first time (https://github.com/kubernetes/kubernetes/blob/master/pkg/kubelet/pleg/generic.go#L228), it retrieves a large number of events that need to be processed (there is an event for every container), and updating the cache takes many minutes (https://github.com/kubernetes/kubernetes/blob/master/pkg/kubelet/pleg/generic.go#L240).
When we print the stack, most of the time it looks like this:
k8s.io/kubernetes/vendor/google.golang.org/grpc/transport.(*Stream).Header(0xc42537aff0, 0x3b53b68, 0xc42204f060, 0x59ceee0)
/mnt/tess/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/vendor/google.golang.org/grpc/transport/transport.go:239 +0x146
k8s.io/kubernetes/vendor/google.golang.org/grpc.recvResponse(0x0, 0x0, 0x59c4c60, 0x5b0c6b0, 0x0, 0x0, 0x0, 0x0, 0x59a8620, 0xc4217f2460, ...)
/mnt/tess/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/vendor/google.golang.org/grpc/call.go:61 +0x9e
k8s.io/kubernetes/vendor/google.golang.org/grpc.invoke(0x7ff04e8b9800, 0xc424be3380, 0x3aa3c5e, 0x28, 0x374bb00, 0xc424ca0590, 0x374bbe0, 0xc421f428b0, 0xc421800240, 0x0, ...)
/mnt/tess/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/vendor/google.golang.org/grpc/call.go:208 +0x862
k8s.io/kubernetes/vendor/google.golang.org/grpc.Invoke(0x7ff04e8b9800, 0xc424be3380, 0x3aa3c5e, 0x28, 0x374bb00, 0xc424ca0590, 0x374bbe0, 0xc421f428b0, 0xc421800240, 0x0, ...)
/mnt/tess/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/vendor/google.golang.org/grpc/call.go:118 +0x19c
k8s.io/kubernetes/pkg/kubelet/apis/cri/v1alpha1/runtime.(*runtimeServiceClient).PodSandboxStatus(0xc4217f6038, 0x7ff04e8b9800, 0xc424be3380, 0xc424ca0590, 0x0, 0x0, 0x0, 0xc424d92870, 0xc42204f3e8, 0x28)
/mnt/tess/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/pkg/kubelet/apis/cri/v1alpha1/runtime/api.pb.go:3409 +0xd2
k8s.io/kubernetes/pkg/kubelet/remote.(*RemoteRuntimeService).PodSandboxStatus(0xc4217ec440, 0xc424c7a740, 0x40, 0x0, 0x0, 0x0)
/mnt/tess/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/pkg/kubelet/remote/remote_runtime.go:143 +0x113
k8s.io/kubernetes/pkg/kubelet/kuberuntime.instrumentedRuntimeService.PodSandboxStatus(0x59d86a0, 0xc4217ec440, 0xc424c7a740, 0x40, 0x0, 0x0, 0x0)
/mnt/tess/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/pkg/kubelet/kuberuntime/instrumented_services.go:192 +0xc4
k8s.io/kubernetes/pkg/kubelet/kuberuntime.(*instrumentedRuntimeService).PodSandboxStatus(0xc4217f41f0, 0xc424c7a740, 0x40, 0xc421f428a8, 0x1, 0x1)
<autogenerated>:1 +0x59
k8s.io/kubernetes/pkg/kubelet/kuberuntime.(*kubeGenericRuntimeManager).GetPodStatus(0xc421802340, 0xc421dfad80, 0x24, 0xc422358e00, 0x1c, 0xc42172aa17, 0x5, 0x50a3ac, 0x5ae88e0, 0xc400000000)
/mnt/tess/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/pkg/kubelet/kuberuntime/kuberuntime_manager.go:841 +0x373
k8s.io/kubernetes/pkg/kubelet/pleg.(*GenericPLEG).updateCache(0xc421027260, 0xc421f0e840, 0xc421dfad80, 0x24, 0xc423e86ea8, 0x1)
/mnt/tess/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/pkg/kubelet/pleg/generic.go:346 +0xcf
k8s.io/kubernetes/pkg/kubelet/pleg.(*GenericPLEG).relist(0xc421027260)
/mnt/tess/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/pkg/kubelet/pleg/generic.go:242 +0xbe1
k8s.io/kubernetes/pkg/kubelet/pleg.(*GenericPLEG).(k8s.io/kubernetes/pkg/kubelet/pleg.relist)-fm()
/mnt/tess/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/pkg/kubelet/pleg/generic.go:129 +0x2a
k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.JitterUntil.func1(0xc4217c81c0)
/mnt/tess/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:97 +0x5e
k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc4217c81c0, 0x3b9aca00, 0x0, 0x1, 0xc420084120)
/mnt/tess/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:98 +0xbd
k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.Until(0xc4217c81c0, 0x3b9aca00, 0xc420084120)
/mnt/tess/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:52 +0x4d
created by k8s.io/kubernetes/pkg/kubelet/pleg.(*GenericPLEG).Start
/mnt/tess/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/pkg/kubelet/pleg/generic.go:129 +0x8a
We printed the timestamp of each event; kubelet takes about 1 second to process each event.
Because of this, PLEG cannot finish relisting within 3 minutes. Then, since PLEG is no longer considered healthy, the PLEG event channel is no longer consumed by syncLoop (https://github.com/kubernetes/kubernetes/blob/master/pkg/kubelet/kubelet.go#L1862). However, PLEG keeps processing events and sending them to the plegChannel (https://github.com/kubernetes/kubernetes/blob/master/pkg/kubelet/pleg/generic.go#L261). Once the channel is full (the channel capacity is 1000, https://github.com/kubernetes/kubernetes/blob/master/pkg/kubelet/kubelet.go#L144), PLEG gets stuck and the pleg relist timestamp is never updated (https://github.com/kubernetes/kubernetes/blob/master/pkg/kubelet/pleg/generic.go#L201).
Stack info:
goroutine 422 [chan send, 3 minutes]:
k8s.io/kubernetes/pkg/kubelet/pleg.(*GenericPLEG).relist(0xc421027260)
/mnt/tess/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/pkg/kubelet/pleg/generic.go:263 +0x95a
k8s.io/kubernetes/pkg/kubelet/pleg.(*GenericPLEG).(k8s.io/kubernetes/pkg/kubelet/pleg.relist)-fm()
/mnt/tess/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/pkg/kubelet/pleg/generic.go:129 +0x2a
k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.JitterUntil.func1(0xc4217c81c0)
/mnt/tess/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:97 +0x5e
k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc4217c81c0, 0x3b9aca00, 0x0, 0x1, 0xc420084120)
/mnt/tess/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:98 +0xbd
k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.Until(0xc4217c81c0, 0x3b9aca00, 0xc420084120)
/mnt/tess/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:52 +0x4d
created by k8s.io/kubernetes/pkg/kubelet/pleg.(*GenericPLEG).Start
/mnt/tess/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/pkg/kubelet/pleg/generic.go:129 +0x8a
After deleting the exited containers and restarting kubelet, everything went back to normal.
So this can happen whenever a node has more than 1,000 containers.
A solution could be to update the pod cache in parallel (https://github.com/kubernetes/kubernetes/blob/master/pkg/kubelet/pleg/generic.go#L236), or to apply a timeout when processing events.
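To make that failure mode concrete, here is a tiny self-contained Go sketch (not kubelet code) of how a relist-style loop stalls once nobody drains a bounded event channel, and how a timeout-guarded send keeps it alive:
package main

import (
	"fmt"
	"time"
)

func main() {
	// A small buffer standing in for kubelet's 1000-slot PLEG event channel.
	events := make(chan int, 3)

	// Nobody reads from events (syncLoop has stopped consuming).
	for i := 0; i < 10; i++ {
		select {
		case events <- i:
			fmt.Println("queued event", i)
		case <-time.After(500 * time.Millisecond):
			// A plain "events <- i" would block forever once the buffer is
			// full, and the relist timestamp would never be updated again.
			fmt.Println("channel full, dropping event", i)
		}
	}
}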
@yingnanzhang666
When a node starts flapping between Ready/NotReady because of the PLEG issue, docker inspect always hangs on one of the exited gcr.io/google_containers/pause containers. Restarting the docker daemon fixes the issue.
Hi everyone, I can see the issue being reported with all kinds of combinations of CoreOS / Docker / Kubernetes binaries. In our case we are still on the same kubernetes stack (1.7.10 / CoreOS / kops / AWS). I don't think it resolves the issue, but we were finally able to reduce the side effects to almost zero by introducing tini (https://github.com/krallin/tini) as part of the Docker images we deploy to kubernetes. We have about 20 different containers (apps) deployed, and we deploy very frequently, which means a lot of shutdowns and spin-ups of new replicas; the more often you deploy, the more likely a node that is "ready" gets hit by PLEG. Once we had rolled tini out to the vast majority of our images, so that PIDs are reaped and killed as they should be, these side effects stopped occurring. I believe this is highly relevant to the issue, so I strongly recommend looking at tini, or at other Docker base images that handle reaping of subprocesses correctly. Hope this helps. Of course the core problem remains, so the issue is still valid.
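Purely as an illustration of the reaping behaviour tini provides (a hedged sketch, not tini itself), a PID-1 style process can reap exited descendants on SIGCHLD roughly like this:
package main

import (
	"os"
	"os/exec"
	"os/signal"
	"syscall"
)

func main() {
	// Subscribe to SIGCHLD so we learn when any child or orphaned descendant exits.
	sigs := make(chan os.Signal, 1)
	signal.Notify(sigs, syscall.SIGCHLD)

	// The real workload would be started here; "sleep 60" is a placeholder.
	cmd := exec.Command("/bin/sleep", "60")
	_ = cmd.Start()

	for range sigs {
		for {
			var ws syscall.WaitStatus
			// Reap everything that has exited so zombies do not accumulate.
			pid, err := syscall.Wait4(-1, &ws, syscall.WNOHANG, nil)
			if pid <= 0 || err != nil {
				break
			}
		}
	}
}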
Since this issue is still unresolved and affects our clusters semi-regularly, as part of a solution I would like to put some effort into developing a custom operator that can automatically remediate nodes affected by the PLEG is not healthy flapping. I configured a custom monitor using the Node Problem Detector that sets a PLEGNotHealthy node condition to true whenever PLEG is not healthy starts showing up in the kubelet logs; the idea of some kind of generic auto-remediation operator also comes from this open issue in the Node Problem Detector repository. The next step is a remediation system that automates checking the node conditions for nodes reporting abnormal conditions such as PLEGNotHealthy, and then cordoning, draining and restarting the docker daemon on that node (or whatever the given condition calls for). I am looking at the CoreOS Update Operator as a reference for the operator to develop. I would like to know whether anyone else is thinking about this, or has already put together an auto-remediation solution that could be applied to this issue. Apologies if this is not the right forum for this discussion.
In our case, we sometimes get kubelet output where PodSandboxStatus() is stuck for 2 minutes and then reports:
rpc error: code = 4 desc = context deadline exceeded
Kernel output:
unregister_netdevice: waiting for eth0 to become free. Usage count = 1
However, it happens only when specific pods (ones with a lot of network traffic) are deleted.
Also, the sandbox in the pod spec stops successfully, but stopping the pause sandbox fails (it runs forever). After that, status checks on the same sandbox ID always get stuck there.
As a result -> PLEG latency is high -> PLEG is unhealthy (the call is made twice, 2 min * 2 = 4 min > 3 min) -> NodeNotReady.
Related code in docker_sandbox.go:
func (ds *dockerService) PodSandboxStatus(podSandboxID string) (*runtimeapi.PodSandboxStatus, error) {
// Inspect the container.
// !!! maybe stuck here for 2 min !!!
r, err := ds.client.InspectContainer(podSandboxID)
if err != nil {
return nil, err
}
...
}
func (ds *dockerService) StopPodSandbox(podSandboxID string) error {
var namespace, name string
var checkpointErr, statusErr error
needNetworkTearDown := false
// Try to retrieve sandbox information from docker daemon or sandbox checkpoint
// !!! maybe stuck here !!!
status, statusErr := ds.PodSandboxStatus(podSandboxID)
...
According to our Prometheus monitoring, docker inspect latency is normal, but the inspect/stop operations on the kubelet side take a long time.
Docker version: 1.12.6
kubelet version: 1.7.12
Linux kernel version: 4.4.0-72-generic
CNI: flannel
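Relating to the 2 min * 2 = 4 min arithmetic above, here is a minimal Go sketch (not kubelet code) of how a context deadline turns any slow runtime call into the "context deadline exceeded" error from these logs, even if the daemon would eventually answer:
package main

import (
	"context"
	"errors"
	"fmt"
	"time"
)

// slowInspect stands in for a runtime call that takes longer than the deadline.
func slowInspect(ctx context.Context) error {
	select {
	case <-time.After(5 * time.Second): // pretend the daemon answers after 5s
		return nil
	case <-ctx.Done():
		return ctx.Err() // context.DeadlineExceeded
	}
}

func main() {
	// A short deadline so the example finishes quickly; the kubelet default
	// runtime request timeout is 2 minutes.
	ctx, cancel := context.WithTimeout(context.Background(), 1*time.Second)
	defer cancel()

	err := slowInspect(ctx)
	fmt.Println("error:", err, "| deadline exceeded:", errors.Is(err, context.DeadlineExceeded))
}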
As @yujuhong mentioned:
grpc http grpc
kubelet <----> dockershim <----> dockerd <----> containerd
When the situation occurs, I try to run docker ps. It works. With curl against /var/run/docker.sock I can also fetch the JSON of the stopped container. So could this be a problem with the grpc response between kubelet and dockershim?
curl --unix-socket /var/run/docker.sock http:/v1.24/containers/66755504b8dc3a5c17454e04e0b74676a8d45089a7e522230aad8041ab6f3a5a/json
When a node starts flapping between Ready/NotReady because of the PLEG issue, docker inspect always hangs on one of the exited gcr.io/google_containers/pause containers. Restarting the docker daemon fixes the issue.
Our case looks similar to what @erstaples described. Instead of restarting dockerd, I think it can be resolved simply by running docker stop and docker rm on the hanging stopped container.
Running dmesg on the node shows the unregister_netdevice: waiting for eth0 to become free. Usage count = 1 error. The system cannot release the network device, so it never goes away. In addition, journalctl -u kubelet shows the PodSandboxStatus of sandbox "XXX" for pod "YYY" error: rpc error: code = DeadlineExceeded desc = context deadline exceeded error.
Could this be related to the Kubernetes network plugin? Quite a few people in this thread seem to be using Calico. Maybe that is the culprit?
@deitch said something here about a CoreOS issue.
We are facing the same issue here, testing on bare-metal nodes with 768GB of RAM. More than 2k images are loaded (we are deleting some of them).
We are using k8s 1.7.15 and Docker 17.09. As mentioned in a few comments here, we are thinking about going back to Docker 1.13, but we don't know whether that will solve the problem.
We also had a couple of more specific problems, such as losing the connection to one of the bonded switches, but I don't know how that would relate to a CoreOS network issue.
Also, kubelet and docker are spending a lot of CPU time (more than anything else on the system).
Thanks!
I can confirm this with Kubernetes v1.8.7 and calico v2.8.6. In this case some pods get stuck in the Terminating state and the kubelet throws PLEG errors:
E0515 16:15:34.039735 1904 generic.go:241] PLEG: Ignoring events for pod myapp-5c7f7dbcf7-xvblm/production: rpc error: code = DeadlineExceeded desc = context deadline exceeded
I0515 16:16:34.560821 1904 kubelet.go:1779] skipping pod synchronization - [PLEG is not healthy: pleg was last seen active 3m0.529418824s ago; threshold is 3m0s]
I0515 16:16:39.561010 1904 kubelet.go:1779] skipping pod synchronization - [PLEG is not healthy: pleg was last seen active 3m5.529605547s ago; threshold is 3m0s]
I0515 16:16:41.857069 1904 kubelet_node_status.go:791] Node became not ready: {Type:Ready Status:False LastHeartbeatTime:2018-05-15 16:16:41.857046605 +0000 UTC LastTransitionTime:2018-05-15 16:16:41.857046605 +0000 UTC Reason:KubeletNotReady Message:PLEG is not healthy: pleg was last seen active 3m7.825663114s ago; threshold is 3m0s}
I0515 16:16:44.561281 1904 kubelet.go:1779] skipping pod synchronization - [PLEG is not healthy: pleg was last seen active 3m10.52986717s ago; threshold is 3m0s]
I0515 16:16:49.561499 1904 kubelet.go:1779] skipping pod synchronization - [PLEG is not healthy: pleg was last seen active 3m15.530093202s ago; threshold is 3m0s]
I0515 16:16:54.561740 1904 kubelet.go:1779] skipping pod synchronization - [PLEG is not healthy: pleg was last seen active 3m20.530326452s ago; threshold is 3m0s]
I0515 16:16:59.561943 1904 kubelet.go:1779] skipping pod synchronization - [PLEG is not healthy: pleg was last seen active 3m25.530538095s ago; threshold is 3m0s]
I0515 16:17:04.562205 1904 kubelet.go:1779] skipping pod synchronization - [PLEG is not healthy: pleg was last seen active 3m30.530802216s ago; threshold is 3m0s]
I0515 16:17:09.562432 1904 kubelet.go:1779] skipping pod synchronization - [PLEG is not healthy: pleg was last seen active 3m35.531029395s ago; threshold is 3m0s]
I0515 16:17:14.562644 1904 kubelet.go:1779] skipping pod synchronization - [PLEG is not healthy: pleg was last seen active 3m40.531229806s ago; threshold is 3m0s]
I0515 16:17:19.562899 1904 kubelet.go:1779] skipping pod synchronization - [PLEG is not healthy: pleg was last seen active 3m45.531492495s ago; threshold is 3m0s]
I0515 16:17:24.563168 1904 kubelet.go:1779] skipping pod synchronization - [PLEG is not healthy: pleg was last seen active 3m50.531746392s ago; threshold is 3m0s]
I0515 16:17:29.563422 1904 kubelet.go:1779] skipping pod synchronization - [PLEG is not healthy: pleg was last seen active 3m55.532013675s ago; threshold is 3m0s]
I0515 16:17:34.563740 1904 kubelet.go:1779] skipping pod synchronization - [PLEG is not healthy: pleg was last seen active 4m0.532327398s ago; threshold is 3m0s]
E0515 16:17:34.041174 1904 generic.go:271] PLEG: pod myapp-5c7f7dbcf7-xvblm/production failed reinspection: rpc error: code = DeadlineExceeded desc = context deadline exceeded
When I run docker ps, only the pause container of myapp-5c7f7dbcf7-xvblm is listed:
ip-10-72-160-222 core # docker ps | grep myapp-5c7f7dbcf7-xvblm
c6c34d9b1e86 gcr.io/google_containers/pause-amd64:3.0 "/pause" 9 hours ago Up 9 hours k8s_POD_myapp-5c7f7dbcf7-xvblm_production_baa0e029-5810-11e8-a9e8-0e88e0071844_0
After restarting kubelet, the zombie pause container (id c6c34d9b1e86) was removed. kubelet logs:
W0515 16:56:26.439306 79462 docker_sandbox.go:343] failed to read pod IP from plugin/docker: NetworkPlugin cni failed on the status hook for pod "myapp-5c7f7dbcf7-xvblm_production": CNI failed to retrieve network namespace path: Cannot find network namespace for the terminated container "c6c34d9b1e86be38b41bba5ba60e1b2765584f3d3877cd6184562707d0c2177b"
W0515 16:56:26.439962 79462 cni.go:265] CNI failed to retrieve network namespace path: Cannot find network namespace for the terminated container "c6c34d9b1e86be38b41bba5ba60e1b2765584f3d3877cd6184562707d0c2177b"
2018-05-15 16:56:26.428 [INFO][79799] calico-ipam.go 249: Releasing address using handleID handleID="k8s-pod-network.c6c34d9b1e86be38b41bba5ba60e1b2765584f3d3877cd6184562707d0c2177b" workloadID="production.myapp-5c7f7dbcf7-xvblm"
2018-05-15 16:56:26.428 [INFO][79799] ipam.go 738: Releasing all IPs with handle 'k8s-pod-network.c6c34d9b1e86be38b41bba5ba60e1b2765584f3d3877cd6184562707d0c2177b'
2018-05-15 16:56:26.739 [INFO][81206] ipam.go 738: Releasing all IPs with handle 'k8s-pod-network.c6c34d9b1e86be38b41bba5ba60e1b2765584f3d3877cd6184562707d0c2177b'
2018-05-15 16:56:26.742 [INFO][81206] ipam.go 738: Releasing all IPs with handle 'production.myapp-5c7f7dbcf7-xvblm'
2018-05-15 16:56:26.742 [INFO][81206] calico-ipam.go 261: Releasing address using workloadID handleID="k8s-pod-network.c6c34d9b1e86be38b41bba5ba60e1b2765584f3d3877cd6184562707d0c2177b" workloadID="production.myapp-5c7f7dbcf7-xvblm"
2018-05-15 16:56:26.742 [WARNING][81206] calico-ipam.go 255: Asked to release address but it doesn't exist. Ignoring handleID="k8s-pod-network.c6c34d9b1e86be38b41bba5ba60e1b2765584f3d3877cd6184562707d0c2177b" workloadID="production.myapp-5c7f7dbcf7-xvblm"
Calico CNI releasing IP address
2018-05-15 16:56:26.745 [INFO][80545] k8s.go 379: Teardown processing complete. Workload="production.myapp-5c7f7dbcf7-xvblm"
And from the kernel log:
[40473.123736] unregister_netdevice: waiting for eth0 to become free. Usage count = 1
[40483.187768] unregister_netdevice: waiting for eth0 to become free. Usage count = 1
[40493.235781] unregister_netdevice: waiting for eth0 to become free. Usage count = 1
I think we are hitting a similar ticket: https://github.com/moby/moby/issues/5618
That is a completely different case. There may be other reasons why nodes are flapping here.
This issue takes down nodes in our production cluster. Pods can neither be terminated nor created. Kubernetes 1.9.7 with CoreOS 1688.5.3 (Rhyolite) on Linux kernel 4.14.32 and Docker 17.12.1-ce. Our CNI is Calico.
The containerd logs show several errors about cgroups that had been asked to be deleted, but they do not appear directly around the time of the error.
May 21 17:35:00 ip-10-5-76-113.ap-southeast-1.compute.internal env[1282]: time="2018-05-21T17:35:00Z" level=error msg="stat cgroup bf717dbbf392b0ba7ef0452f7b90c4cfb4eca81e7329bfcd07fe020959b737df" error="cgroups: cgroup deleted"
May 21 17:44:32 ip-10-5-76-113.ap-southeast-1.compute.internal env[1282]: time="2018-05-21T17:44:32Z" level=error msg="stat cgroup a0887b496319a09b1f3870f1c523f65bf9dbfca19b45da73711a823917fdfa18" error="cgroups: cgroup deleted"
May 21 17:50:32 ip-10-5-76-113.ap-southeast-1.compute.internal env[1282]: time="2018-05-21T17:50:32Z" level=error msg="stat cgroup 2fbb4ba674050e67b2bf402c76137347c3b5f510b8934d6a97bc3b96069db8f8" error="cgroups: cgroup deleted"
May 21 17:56:22 ip-10-5-76-113.ap-southeast-1.compute.internal env[1282]: time="2018-05-21T17:56:22Z" level=error msg="stat cgroup f9501a4284257522917b6fae7e9f4766e5b8cf7e46989f48379b68876d953ef2" error="cgroups: cgroup deleted"
May 21 18:43:28 ip-10-5-76-113.ap-southeast-1.compute.internal env[1282]: time="2018-05-21T18:43:28Z" level=error msg="stat cgroup c37e7505019ae279941a7a78db1b7a6e7aab4006dfcdd83d479f1f973d4373d2" error="cgroups: cgroup deleted"
May 21 19:38:28 ip-10-5-76-113.ap-southeast-1.compute.internal env[1282]: time="2018-05-21T19:38:28Z" level=error msg="stat cgroup a327a775955d2b69cb01921beb747b4bba0df5ea79f637e0c9e59aeb7e670b43" error="cgroups: cgroup deleted"
May 21 19:50:26 ip-10-5-76-113.ap-southeast-1.compute.internal env[1282]: time="2018-05-21T19:50:26Z" level=error msg="stat cgroup 5d11f13d13b461fe2aa1396d947f1307a6c3a78e87fa23d4a1926a6d46794d58" error="cgroups: cgroup deleted"
May 21 19:52:26 ip-10-5-76-113.ap-southeast-1.compute.internal env[1282]: time="2018-05-21T19:52:26Z" level=error msg="stat cgroup fb7551cde0f9a640fbbb928d989ca84200909bce2821e03a550d5bfd293e786b" error="cgroups: cgroup deleted"
May 21 20:54:32 ip-10-5-76-113.ap-southeast-1.compute.internal env[1282]: time="2018-05-21T20:54:32Z" level=error msg="stat cgroup bcd1432a64b35fd644295e2ae75abd0a91cb38a9fa0d03f251c517c438318c53" error="cgroups: cgroup deleted"
May 21 21:56:28 ip-10-5-76-113.ap-southeast-1.compute.internal env[1282]: time="2018-05-21T21:56:28Z" level=error msg="stat cgroup 2a68f073a7152b4ceaf14d128f9d31fbb2d5c4b150806c87a640354673f11792" error="cgroups: cgroup deleted"
May 21 22:02:30 ip-10-5-76-113.ap-southeast-1.compute.internal env[1282]: time="2018-05-21T22:02:30Z" level=error msg="stat cgroup aa2224e7cfd0a6f44b52ff058a50a331056b0939d670de461b7ffc7d01bc4d59" error="cgroups: cgroup deleted"
May 21 22:18:32 ip-10-5-76-113.ap-southeast-1.compute.internal env[1282]: time="2018-05-21T22:18:32Z" level=error msg="stat cgroup 95e0c4f7607234ada85a1ab76b7ec2aa446a35e868ad8459a1cae6344bc85f4f" error="cgroups: cgroup deleted"
May 21 22:21:32 ip-10-5-76-113.ap-southeast-1.compute.internal env[1282]: time="2018-05-21T22:21:32Z" level=error msg="stat cgroup 76578ede18ba3bc1307d83c4b2ccd7e35659f6ff8c93bcd54860c9413f2f33d6" error="cgroups: cgroup deleted"
Kubelet shows some interesting lines about pod sandbox operations failing:
May 23 18:17:25 ip-10-5-76-113.ap-southeast-1.compute.internal kubelet[1513]: E0523 18:17:25.578306 1513 remote_runtime.go:115] StopPodSandbox "922f625ced6d6f6adf33fe67e5dd8378040cd2e5c8cacdde20779fc692574ca5" from runtime service failed: rpc error: code = DeadlineExceeded desc = context deadline exceeded
May 23 18:17:25 ip-10-5-76-113.ap-southeast-1.compute.internal kubelet[1513]: E0523 18:17:25.578354 1513 kuberuntime_manager.go:800] Failed to stop sandbox {"docker" "922f625ced6d6f6adf33fe67e5dd8378040cd2e5c8cacdde20779fc692574ca5"}
May 23 18:17:25 ip-10-5-76-113.ap-southeast-1.compute.internal kubelet[1513]: W0523 18:17:25.579095 1513 docker_sandbox.go:196] Both sandbox container and checkpoint for id "a893f57acec1f3779c35aed743f128408e491ff2f53a312895fe883e2c68d642" could not be found. Proceed without further sandbox information.
May 23 18:17:25 ip-10-5-76-113.ap-southeast-1.compute.internal kubelet[1513]: W0523 18:17:25.579426 1513 cni.go:242] CNI failed to retrieve network namespace path: Error: No such container: a893f57acec1f3779c35aed743f128408e491ff2f53a312895fe883e2c68d642
May 23 18:17:25 ip-10-5-76-113.ap-southeast-1.compute.internal kubelet[1513]: 2018-05-23 18:17:25.723 [INFO][33881] calico.go 338: Extracted identifiers ContainerID="a893f57acec1f3779c35aed743f128408e491ff2f53a312895fe883e2c68d642" Node="ip-10-5-76-113.ap-southeast-1.compute.internal" Orchestrator="cni" Workload="a89
May 23 18:17:25 ip-10-5-76-113.ap-southeast-1.compute.internal kubelet[1513]: 2018-05-23 18:17:25.723 [INFO][33881] utils.go 263: Configured environment: [CNI_COMMAND=DEL CNI_CONTAINERID=a893f57acec1f3779c35aed743f128408e491ff2f53a312895fe883e2c68d642 CNI_NETNS= CNI_ARGS=IgnoreUnknown=1;IgnoreUnknown=1;K8S_POD_NAMESP
May 23 18:17:25 ip-10-5-76-113.ap-southeast-1.compute.internal kubelet[1513]: 2018-05-23 18:17:25.723 [INFO][33881] client.go 202: Loading config from environment
May 23 18:17:25 ip-10-5-76-113.ap-southeast-1.compute.internal kubelet[1513]: Calico CNI releasing IP address
May 23 18:17:25 ip-10-5-76-113.ap-southeast-1.compute.internal kubelet[1513]: 2018-05-23 18:17:25.796 [INFO][33905] utils.go 263: Configured environment: [CNI_COMMAND=DEL CNI_CONTAINERID=a893f57acec1f3779c35aed743f128408e491ff2f53a312895fe883e2c68d642 CNI_NETNS= CNI_ARGS=IgnoreUnknown=1;IgnoreUnknown=1;K8S_POD_NAMESP
May 23 18:17:25 ip-10-5-76-113.ap-southeast-1.compute.internal kubelet[1513]: 2018-05-23 18:17:25.796 [INFO][33905] client.go 202: Loading config from environment
May 23 18:17:25 ip-10-5-76-113.ap-southeast-1.compute.internal kubelet[1513]: 2018-05-23 18:17:25.796 [INFO][33905] calico-ipam.go 249: Releasing address using handleID handleID="k8s-pod-network.a893f57acec1f3779c35aed743f128408e491ff2f53a312895fe883e2c68d642" workloadID="a893f57acec1f3779c35aed743f128408e491ff2f53a3
May 23 18:17:25 ip-10-5-76-113.ap-southeast-1.compute.internal kubelet[1513]: 2018-05-23 18:17:25.796 [INFO][33905] ipam.go 738: Releasing all IPs with handle 'k8s-pod-network.a893f57acec1f3779c35aed743f128408e491ff2f53a312895fe883e2c68d642'
May 23 18:17:25 ip-10-5-76-113.ap-southeast-1.compute.internal kubelet[1513]: 2018-05-23 18:17:25.805 [WARNING][33905] calico-ipam.go 255: Asked to release address but it doesn't exist. Ignoring handleID="k8s-pod-network.a893f57acec1f3779c35aed743f128408e491ff2f53a312895fe883e2c68d642" workloadID="a893f57acec1f3779c3
May 23 18:17:25 ip-10-5-76-113.ap-southeast-1.compute.internal kubelet[1513]: 2018-05-23 18:17:25.805 [INFO][33905] calico-ipam.go 261: Releasing address using workloadID handleID="k8s-pod-network.a893f57acec1f3779c35aed743f128408e491ff2f53a312895fe883e2c68d642" workloadID="a893f57acec1f3779c35aed743f128408e491ff2f53
May 23 18:17:25 ip-10-5-76-113.ap-southeast-1.compute.internal kubelet[1513]: 2018-05-23 18:17:25.805 [INFO][33905] ipam.go 738: Releasing all IPs with handle 'a893f57acec1f3779c35aed743f128408e491ff2f53a312895fe883e2c68d642'
May 23 18:17:25 ip-10-5-76-113.ap-southeast-1.compute.internal kubelet[1513]: 2018-05-23 18:17:25.822 [INFO][33881] calico.go 373: Endpoint object does not exist, no need to clean up. Workload="a893f57acec1f3779c35aed743f128408e491ff2f53a312895fe883e2c68d642" endpoint=api.WorkloadEndpointMetadata{ObjectMetadata:unver
May 23 18:17:25 ip-10-5-76-113.ap-southeast-1.compute.internal kubelet[1513]: E0523 18:17:25.824925 1513 kubelet.go:1527] error killing pod: failed to "KillPodSandbox" for "9c246b32-4f10-11e8-964a-0a7e4ae265be" with KillPodSandboxError: "rpc error: code = DeadlineExceeded desc = context deadline exceeded"
May 23 18:17:25 ip-10-5-76-113.ap-southeast-1.compute.internal kubelet[1513]: E0523 18:17:25.825025 1513 pod_workers.go:186] Error syncing pod 9c246b32-4f10-11e8-964a-0a7e4ae265be ("flntk8-fl01-j7lf4_splunk(9c246b32-4f10-11e8-964a-0a7e4ae265be)"), skipping: error killing pod: failed to "KillPodSandbo
May 23 18:17:25 ip-10-5-76-113.ap-southeast-1.compute.internal kubelet[1513]: E0523 18:17:25.969591 1513 kuberuntime_manager.go:860] PodSandboxStatus of sandbox "922f625ced6d6f6adf33fe67e5dd8378040cd2e5c8cacdde20779fc692574ca5" for pod "flntk8-fl01-j7lf4_splunk(9c246b32-4f10-11e8-964a-0a7e4ae265be)"
May 23 18:17:25 ip-10-5-76-113.ap-southeast-1.compute.internal kubelet[1513]: E0523 18:17:25.969640 1513 generic.go:241] PLEG: Ignoring events for pod flntk8-fl01-j7lf4/splunk: rpc error: code = DeadlineExceeded desc = context deadline exceeded
May 23 18:20:27 ip-10-5-76-113.ap-southeast-1.compute.internal kubelet[1513]: I0523 18:20:27.753523 1513 kubelet.go:1790] skipping pod synchronization - [PLEG is not healthy: pleg was last seen active 3m0.783603773s ago; threshold is 3m0s]
May 23 18:19:27 ip-10-5-76-113.ap-southeast-1.compute.internal kubelet[1513]: E0523 18:19:27.019252 1513 kuberuntime_manager.go:860] PodSandboxStatus of sandbox "922f625ced6d6f6adf33fe67e5dd8378040cd2e5c8cacdde20779fc692574ca5" for pod "flntk8-fl01-j7lf4_splunk(9c246b32-4f10-11e8-964a-0a7e4ae265be)"
May 23 18:19:27 ip-10-5-76-113.ap-southeast-1.compute.internal kubelet[1513]: E0523 18:19:27.019295 1513 generic.go:241] PLEG: Ignoring events for pod flntk8-fl01-j7lf4/splunk: rpc error: code = DeadlineExceeded desc = context deadline exceeded
The kernel shows eth0 waiting to become free, which looks related to https://github.com/moby/moby/issues/5618:
[1727395.220036] unregister_netdevice: waiting for eth0 to become free. Usage count = 1
[1727405.308152] unregister_netdevice: waiting for eth0 to become free. Usage count = 1
[1727415.404335] unregister_netdevice: waiting for eth0 to become free. Usage count = 1
[1727425.484491] unregister_netdevice: waiting for eth0 to become free. Usage count = 1
[1727435.524626] unregister_netdevice: waiting for eth0 to become free. Usage count = 1
[1727445.588785] unregister_netdevice: waiting for eth0 to become free. Usage count = 1
However, in this case the lo adapter is shown and the kernel did not crash. Further investigation points to https://github.com/projectcalico/calico/issues/1109, which concludes that this is a kernel race-condition bug that has not been fixed yet.
kubeletãåèµ·åãããšãããããçµäºããŠäœæãããã®ã«ååãªåé¡ãä¿®æ£ãããŸãããã waiting for eth0 to become free
ã¹ãã ãdmesgã§ç¶ç¶ããŸããã
An interesting read about this issue: https://medium.com/@bcdonadio/when-the-blue-whale-sinks-55c40807c2fc
@integrii
This also happens on the latest CentOS. I have not been able to reproduce it again so far.
Now, I would like to revise what I said before - the container runtime suddenly goes down and complains that
these pods are skipping synchronization - [PLEG is not healthy: ...].
docker keeps running fine the whole time. Meanwhile, restarting the kubelet makes PLEG healthy again and the node comes back up.
docker, kubelet and kube-proxy are all set to RT priority.
One more thing - restarting the kubelet, without restarting docker, produces the same behavior.
I tried curl against the Docker socket and it works fine.
+1
Kubernetes: 1.10.2
Docker: 1.12.6
OS: centos 7.4
Kernel: 3.10.0-693.el7.x86_64
CNI: Calico
+1
Kube: 1.7.16
Docker: 17.12.1-ce
OS: CoreOS 1688.5.3
Kernel: 4.14.32-coreos
CNI: Calico (v2.6.7)
Seen since v1.9.1.
Do you think increasing --runtime-request-timeout would help?
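(For anyone who wants to experiment with that flag: it is a real kubelet flag with a 2m default, but the value below is only an illustrative guess, and a longer timeout will not unblock a CNI call that never returns.)
kubelet --runtime-request-timeout=10m ...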
I'm seeing this issue with CRI-O on one of my nodes. Kubernetes 1.10.1, CRI-O 1.10.1, Fedora 27, kernel 4.16.7-200.fc27, using Flannel.
runc list and crictl pods are both fast, but crictl ps takes several minutes to run.
+1
Kubernetes: v1.8.7+coreos.0
Docker: 17.05.0-ce
OS: Redhat 7.x
CNI: Calico
Kubespray 2.4
This issue happens frequently. It goes away when I restart docker and kubelet.
On the latest stable CoreOS, 1745.7.0, this issue stopped occurring for us.
@komljen are you watching whether it stays fixed after the update? For us it takes a while before it shows up.
I had this issue every few days on one large CI environment and I think I tried everything without success. Changing the OS to the CoreOS version above did the trick, and I haven't had the issue for a month.
I haven't seen this issue for more than a month. Since nothing else changed, I'd go so far as to declare the patient healthy :-)
@komljen we run centos7, and one of our nodes went down just today.
I haven't seen this issue for more than a month. Since nothing else changed, I'd go so far as to declare the patient healthy :-)
@oivindoh I didn't have time to check what changed in that particular kernel version, but in my case it fixed the issue.
We found the cause of this issue in our cluster. In short, the bug is triggered by a CNI command (calico) that never exits, which blocks a dockershim server handler forever. As a result, the RPC PodSandboxStatus() for the bad pod always times out, and PLEG becomes unhealthy.
Impact of the bug: a pod gets stuck in the Terminating state forever. When this happens, the node shows the following:
Jul 13 23:52:15 E0713 23:52:15.461144 1740 kuberuntime_manager.go:860] PodSandboxStatus of sandbox "01d8b790bc9ede72959ddf0669e540dfb1f84bfd252fb364770a31702d9e7eeb" for pod "pod-name" error: rpc error: code = DeadlineExceeded desc = context deadline exceeded
Jul 13 23:52:15 E0713 23:52:15.461215 1740 generic.go:241] PLEG: Ignoring events for pod pod-name: rpc error: code = DeadlineExceeded desc = context deadline exceeded
Jul 13 23:52:16 E0713 23:52:16.682555 1740 pod_workers.go:186] Error syncing pod 7f3fd634-7e57-11e8-9ddb-0acecd2e6e42 ("pod-name"), skipping: rpc error: code = DeadlineExceeded desc = context deadline exceeded
Jul 13 23:53:15 I0713 23:53:15.682254 1740 kubelet.go:1790] skipping pod synchronization - [PLEG is not healthy: pleg was last seen active 3m0.267402933s ago; threshold is 3m0s]
$ curl -s http://localhost:10255/metrics | grep 'quantile="0.5"' | grep "e+08"
kubelet_pleg_relist_interval_microseconds{quantile="0.5"} 2.41047643e+08
kubelet_pleg_relist_latency_microseconds{quantile="0.5"} 2.40047461e+08
kubelet_runtime_operations_latency_microseconds{operation_type="podsandbox_status",quantile="0.5"} 1.2000027e+08
$ ps -A -o pid,ppid,start_time,comm | grep 1740
1740 1 Jun15 kubelet
5428 1740 Jul04 calico
The stack trace of the stuck dockershim server handler is:
PodSandboxStatus() :: pkg/kubelet/dockershim/docker_sandbox.go
... -> GetPodNetworkStatus() :: pkg/kubelet/network/plugins.go
^^^^^ this function stuck on pm.podLock(fullPodName).Lock()
To fix this issue, the kubelet needs to use timeouts for CNI library function calls (such as DelNetwork()) and any other external library calls that may never return.
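A minimal Go sketch of the kind of guard being described (not the kubelet's actual code; runPlugin is a hypothetical helper): the external binary is run through exec.CommandContext, so a hung plugin is killed once the deadline passes instead of wedging the caller forever.

package main

import (
	"context"
	"fmt"
	"os/exec"
	"time"
)

// runPlugin invokes an external CNI-style binary and gives up after the
// supplied timeout instead of blocking forever on a misbehaving plugin.
func runPlugin(binary string, args []string, timeout time.Duration) ([]byte, error) {
	ctx, cancel := context.WithTimeout(context.Background(), timeout)
	defer cancel()

	// exec.CommandContext kills the child process when ctx expires.
	out, err := exec.CommandContext(ctx, binary, args...).CombinedOutput()
	if ctx.Err() == context.DeadlineExceeded {
		return out, fmt.Errorf("plugin %s timed out after %s", binary, timeout)
	}
	return out, err
}

func main() {
	// Example: a "plugin" that sleeps longer than its 2-second budget.
	if _, err := runPlugin("sleep", []string{"30"}, 2*time.Second); err != nil {
		fmt.Println("error:", err) // prints a timeout error after ~2s
	}
}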
@mechpen it looks like you found the answer somewhere. I don't think it applies here (at least this cluster uses weave, not calico; we do use calico elsewhere), and I don't see similar error messages.
However, if it does apply, you state:
"To fix this issue, the kubelet needs to use timeouts for CNI library function calls (such as DelNetwork()) or other external library calls that may never return."
Is that configurable? Or does it require a kubelet change?
@deitch this error could also happen if a weave CNI command never exits (it could be caused by some low-level bug shared by all these systems).
The fix requires changes to kubelet code.
@mechpen could this issue also occur on clusters running flannel? Is the fix the same?
@komljen I just saw this issue on 1745.7.0.
Currently hitting this issue with calico on k8s 1.9.
There is a pod stuck in Terminating on this exact node. Let me force-kill it and see whether the problem stops.
@mechpen did you open a k8s issue as suggested?
@mechpen too?
@sstarcher not yet, I haven't filed a ticket. I'm still trying to find out why calico hangs forever.
I also see kernel messages like this:
[2797545.570844] unregister_netdevice: waiting for eth0 to become free. Usage count = 2
This error has plagued linux/containers for years.
@mechpen
@sstarcher
@deitch
Yes, this issue happened to us a month ago.
And I filed these:
I'm trying to fix this issue in kubelet, but it needs to be fixed in cni first.
So I'll fix cni first, then kubelet.
Thanks
#65743
https://github.com/containernetworking/cni/issues/567
https://github.com/containernetworking/cni/pull/568
@sstarcher @mechpen the calico ticket related to this issue:
https://github.com/projectcalico/calico/issues/1109
@mechpen for that issue, see https://github.com/moby/moby/issues/5618.
This happened again on our production cluster.
Kubernetes: 1.11.0
coreos: 1520.9.0
docker: 1.12.6
cni: flannel
I just restarted kubelet and dockerd on the node and it seems fine for now.
The only difference between the notready node and the ready nodes is that cronjob pods were being started and stopped, often force-killed, on the notready node.
@mechpen
I'm not sure whether I'm hitting the same issue.
Jul 30 17:52:15 cloud-blade-31 kubelet[24734]: I0730 17:52:15.585102 24734 kubelet_node_status.go:431] Recording NodeNotReady event message for node cloud-blade-31
Jul 30 17:52:15 cloud-blade-31 kubelet[24734]: I0730 17:52:15.585137 24734 kubelet_node_status.go:792] Node became not ready: {Type:Ready Status:False LastHeartbeatTime:2018-07-30 17:52:15.585076295 -0700 PDT m=+13352844.638760537 LastTransitionTime:2018-07-30 17:52:15.585076295 -0700 PDT m=+13352844.638760537 Reason:KubeletNotReady Message:PLEG is not healthy: pleg was last seen active 3m0.948768335s ago; threshold is 3m0s}
Jul 30 17:52:25 cloud-blade-31 kubelet[24734]: I0730 17:52:25.608101 24734 kubelet_node_status.go:443] Using node IP: "10.11.3.31"
Jul 30 17:52:35 cloud-blade-31 kubelet[24734]: I0730 17:52:35.640422 24734 kubelet_node_status.go:443] Using node IP: "10.11.3.31"
Jul 30 17:52:36 cloud-blade-31 kubelet[24734]: E0730 17:52:36.556409 24734 remote_runtime.go:169] ListPodSandbox with filter nil from runtime service failed: rpc error: code = DeadlineExceeded desc = context deadline exceeded
Jul 30 17:52:36 cloud-blade-31 kubelet[24734]: E0730 17:52:36.556474 24734 kuberuntime_sandbox.go:192] ListPodSandbox failed: rpc error: code = DeadlineExceeded desc = context deadline exceeded
Jul 30 17:52:36 cloud-blade-31 kubelet[24734]: W0730 17:52:36.556492 24734 image_gc_manager.go:173] [imageGCManager] Failed to monitor images: rpc error: code = DeadlineExceeded desc = context deadline exceeded
Jul 30 17:52:45 cloud-blade-31 kubelet[24734]: I0730 17:52:45.667169 24734 kubelet_node_status.go:443] Using node IP: "10.11.3.31"
Jul 30 17:52:55 cloud-blade-31 kubelet[24734]: I0730 17:52:55.692889 24734 kubelet_node_status.go:443] Using node IP: "10.11.3.31"
Jul 30 17:53:05 cloud-blade-31 kubelet[24734]: I0730 17:53:05.729182 24734 kubelet_node_status.go:443] Using node IP: "10.11.3.31"
Jul 30 17:53:15 cloud-blade-31 kubelet[24734]: E0730 17:53:15.265668 24734 remote_runtime.go:169] ListPodSandbox with filter &PodSandboxFilter{Id:,State:&PodSandboxStateValue{State:SANDBOX_READY,},LabelSelector:map[string]string{},} from runtime service failed: rpc error: code = DeadlineExceeded desc = context deadline exceeded
When the Docker daemon stops responding to health checks, the node goes NotReady. On the machine itself, docker ps hangs, but docker version returns. Getting the node back to Ready requires restarting the docker daemon. I can't tell whether any pods are stuck, because I can't list the containers.
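A rough way to tell those two failure modes apart from the node itself (the 10-second budget is an arbitrary example):
timeout 10 docker version >/dev/null || echo "docker API not responding"
timeout 10 docker ps -q >/dev/null || echo "docker ps hung or slow"
The second call is essentially what the PLEG relist depends on.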
Kubernetes: 1.9.2
Docker: 17.03.1-ce commit c6d412e
OS: Ubuntu 16.04
Kernel: Linux 4.13.0-31-generic #34~16.04.1-Ubuntu SMP Fri Jan 19 17:11:01 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux
Same issue here. It happens so frequently that the nodes can barely withstand pod scheduling for 5 minutes.
The error occurs on both our main cluster (flannel) and a test cluster (calico).
I tried varying the kubernetes version (1.9.x / 1.11.1), distribution (debian, ubuntu), cloud provider (ec2, hetzner cloud) and docker version (17.3.2, 17.06.2). No variation of any single variable gave completely sound behavior.
My workload is very simple (a single container, no volumes, default networking, pods scheduled in bulk roughly every 30 minutes).
The clusters are freshly set up with kubeadm without customization (except the first test, which used flannel).
The errors appear within a few minutes: docker ps not returning / hanging, pods stuck terminating, and so on.
I'm wondering: is there currently a known configuration (with debian or ubuntu) that does not trigger this error?
Is anyone unaffected by this bug able to share the combination of overlay network and other versions that gives them stable nodes?
This happens for us on Openshift with bare-metal nodes.
In this particular occurrence of the PLEG issue, the problem appeared when a large number of pods (via a runaway cron job) were started at once on OpenShift nodes configured with a large number of vCPUs. The nodes became overloaded, reaching the maximum of 250 pods per node.
The solution was to reduce the number of vCPUs assigned to the OpenShift node VMs, for example down to 8, which means the maximum number of schedulable pods becomes 80 (the default limit is 10 pods per CPU) instead of 250. We recommend using more appropriately sized nodes instead of unusually large ones.
We have nodes with 224 CPUs. Kubernetes version 1.7.1 - Redhat 7.4.
I think I have a similar issue. My pods hang until terminated, and the logs report an unhealthy PLEG. However, in my situation it never comes back to healthy until I manually kill the kubelet process. A simple sudo systemctl restart kubelet fixes the issue, but having to do that on roughly a quarter of our machines every time we do a rollout is not great.
I don't know exactly what is going on here, but seeing the bridge command being run by the kubelet process, it is probably CNI-related as mentioned earlier in this thread. I've attached a large set of logs from two separate instances of this today, in the hope that someone can help debug this with me.
Of course, all the machines with this issue also spit out the classic unregister_netdevice: waiting for eth0 to become free. Usage count = 2. There are two different kubelet logs in logs.tar.gz, captured by sending SIGABRT to get the running goroutines. Hopefully the logs help. I'll call out a few goroutines that look related here:
Aug 13 22:57:30 worker-4bm5 kubelet[1563]: goroutine 2895825 [semacquire, 17 minutes]:
Aug 13 22:57:30 worker-4bm5 kubelet[1563]: sync.runtime_SemacquireMutex(0xc422082d4c)
Aug 13 22:57:30 worker-4bm5 kubelet[1563]: /usr/local/go/src/runtime/sema.go:62 +0x34
Aug 13 22:57:30 worker-4bm5 kubelet[1563]: sync.(*Mutex).Lock(0xc422082d48)
Aug 13 22:57:30 worker-4bm5 kubelet[1563]: /usr/local/go/src/sync/mutex.go:87 +0x9d
Aug 13 22:57:30 worker-4bm5 kubelet[1563]: k8s.io/kubernetes/pkg/kubelet/network.(*PluginManager).GetPodNetworkStatus(0xc420ddbbc0, 0xc421e36f76, 0x17, 0xc421e36f69, 0xc, 0x36791df, 0x6, 0xc4223f6180, 0x40, 0x0, ...)
Aug 13 22:57:30 worker-4bm5 kubelet[1563]: /workspace/anago-v1.8.7-beta.0.34+b30876a5539f09/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/pkg/kubelet/network/plugins.go:376 +0xe6
Aug 13 22:57:30 worker-4bm5 kubelet[1563]: goroutine 2895819 [syscall, 17 minutes]:
Aug 13 22:57:30 worker-4bm5 kubelet[1563]: syscall.Syscall6(0xf7, 0x1, 0x25d7, 0xc422c96d70, 0x1000004, 0x0, 0x0, 0x7f7dc6909e10, 0x0, 0xc4217e9980)
Aug 13 22:57:30 worker-4bm5 kubelet[1563]: /usr/local/go/src/syscall/asm_linux_amd64.s:44 +0x5
Aug 13 22:57:30 worker-4bm5 kubelet[1563]: os.(*Process).blockUntilWaitable(0xc42216af90, 0xc421328c60, 0xc4217e99e0, 0x1)
Aug 13 22:57:30 worker-4bm5 kubelet[1563]: /usr/local/go/src/os/wait_waitid.go:28 +0xa5
Aug 13 22:57:30 worker-4bm5 kubelet[1563]: os.(*Process).wait(0xc42216af90, 0x411952, 0xc4222554c0, 0xc422255480)
Aug 13 22:57:30 worker-4bm5 kubelet[1563]: /usr/local/go/src/os/exec_unix.go:22 +0x4d
Aug 13 22:57:30 worker-4bm5 kubelet[1563]: os.(*Process).Wait(0xc42216af90, 0x0, 0x0, 0x379bbc8)
Aug 13 22:57:30 worker-4bm5 kubelet[1563]: /usr/local/go/src/os/exec.go:115 +0x2b
Aug 13 22:57:30 worker-4bm5 kubelet[1563]: os/exec.(*Cmd).Wait(0xc421328c60, 0x0, 0x0)
Aug 13 22:57:30 worker-4bm5 kubelet[1563]: /usr/local/go/src/os/exec/exec.go:435 +0x62
Aug 13 22:57:30 worker-4bm5 kubelet[1563]: os/exec.(*Cmd).Run(0xc421328c60, 0xc422255480, 0x0)
Aug 13 22:57:30 worker-4bm5 kubelet[1563]: /usr/local/go/src/os/exec/exec.go:280 +0x5c
Aug 13 22:57:30 worker-4bm5 kubelet[1563]: k8s.io/kubernetes/vendor/github.com/containernetworking/cni/pkg/invoke.(*RawExec).ExecPlugin(0x5208390, 0xc4217e98a0, 0x1b, 0xc4212e66e0, 0x156, 0x160, 0xc422b7fd40, 0xf, 0x12, 0x4121a8, ...)
Aug 13 22:57:30 worker-4bm5 kubelet[1563]: /workspace/anago-v1.8.7-beta.0.34+b30876a5539f09/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/containernetworking/cni/pkg/invoke/raw_exec.go:42 +0x215
Kubernetes 1.8.7 on GCE with Container-Optimized OS on kernel 4.14.33+, using kubenet.
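For reference, a goroutine dump like the one above can be captured without extra tooling: sending SIGABRT to the kubelet makes the Go runtime print every goroutine's stack before exiting (systemd then restarts the service), and journald keeps the output. A rough example, assuming the kubelet runs as a systemd unit named kubelet:
kill -ABRT $(pidof kubelet)
journalctl -u kubelet --since "10 min ago" > kubelet-goroutines.log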
@jcperezamin
I'm getting this on bare metal too, using a fresh install of Ubuntu 18.04 configured with kubeadm (single master node). I ran into:
Warning ContainerGCFailed 8m (x438 over 8h) ... rpc error: code = ResourceExhausted desc = grpc: trying to send message larger than max (8400302 vs. 8388608)
The node had accumulated up to 11,500 stopped containers. Manually clearing some of the containers fixed GC, but right after that the node went NotReady due to PLEG.
We use a fairly bare-bones k8s configuration with just flannel for networking. The affected node is an older Xeon E5-2670 based machine with 6x 10k SAS drives in hardware RAID6.
The PLEG issue either resolved itself within an hour, or restarting the kubelet fixed it immediately.
It seems to happen whenever the machine is put under heavy load, and the node never recovers on its own. When I log in over SSH, the node's CPU and other resources are idle. There aren't that many Docker containers, images or volumes, and listing those resources is fast. And simply restarting the kubelet always fixes the issue immediately.
I'm using the following versions:
Having this issue on a bare-metal Kubernetes 1.11.1 deployment :(
The nodes experiencing this frequently are quite powerful and far from fully utilized.
Same issue...
Environment:
Cloud provider or hardware configuration: bare metal
OS (e.g. from /etc/os-release): Ubuntu 16.04
Kernel (e.g. uname -a): 4.4.0-109-generic
Kubernetes: 1.10.5
Docker: 1.12.3-0~xenial
We hit the same issue after migrating to kubernetes 1.10.3.
Client Version: version.Info{Major:"1", Minor:"10", GitVersion:"v1.10.5"
Server Version: version.Info{Major:"1", Minor:"10", GitVersion:"v1.10.3"
Same issue on a bare-metal environment:
Environment:
Cloud provider or hardware configuration: bare metal
OS (e.g. from /etc/os-release): CoreOS 1688.5.3
Kernel (e.g. uname -a): 4.14.32
Kubernetes: 1.10.4
Docker: 17.12.1
It would be interesting to know the node's IOWAIT value at the time the issue hits.
Seeing the same issue repeatedly in another bare-metal environment. Latest versions of everything:
The cause is known.
The upstream fix is being worked on here:
https://github.com/containernetworking/cni/pull/568
The next step is to update the cni vendored by kubernetes, in case anyone wants to jump in and prepare that PR. Please coordinate with @liucimin or me so we don't step on each other's toes.
On Fri, Sep 14, 2018, 11:38 AM Harvey Coalson [email protected]
wrote:
Seeing the same issue repeatedly in another bare-metal environment. Versions of
everything latest:
- OS: Ubuntu 16.04.5 LTS
- Kernel: Linux 4.4.0-134-generic
- Kubernetes:
- Flapping host: v1.10.3
- Masters: v1.10.5 and v1.10.2
- Docker on flapping host: 18.03.1-ce (compiled with go1.9.5)
@deitch
Hi, I ran into the same kind of error: Error syncing pod *********** ("pod-name"), skipping: rpc error: code = DeadlineExceeded desc = context deadline exceeded
I requested the container info from dockerd, but the request blocked and no result came back: curl -XGET --unix-socket /var/run/docker.sock http://localhost/containers/******("yourcontainerid")/json
So I think this is probably a Docker error.
This has to do with the Docker daemon blocking while persisting logs to disk.
There has been work in docker to deal with this, but it didn't make the cut until 18.06 (which is not yet a validated docker version for use with k8s):
https://docs.docker.com/config/containers/logging/configure/#configure-the-delivery-mode-of-log-messages-from-container-to-log-driver
The docker daemon blocks on logging by default, and until we can take advantage of that workaround there isn't much we can do.
This also correlates with iowait being high when the issue occurs.
Containers that use exec health checks and produce a lot of logs, and other patterns that stress the logging mechanism, make it worse.
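For what it's worth, on engines new enough to support it, the delivery mode from that docs link can be switched in /etc/docker/daemon.json; the buffer size below is just a placeholder, and in non-blocking mode log lines can be dropped when the buffer fills:
{
  "log-driver": "json-file",
  "log-opts": {
    "mode": "non-blocking",
    "max-buffer-size": "4m"
  }
}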
Just my 2c: I've never experienced high iowait on the machines running into this. (CoreOS, Kube 1.10, Docker 17.03)
@mauilion could you point me to the issue or MR describing that logging problem?
We had two Kubernetes nodes flapping between Ready and NotReady with the same issue. Believe it or not, the solution was to delete terminated Docker containers and the pods associated with them:
d4e5d7ef1b5c gcr.io/google_containers/pause-amd64:3.0 Exited (137) 3 days ago
After that, without any other intervention, the cluster became stable again.
Additionally, this is the log message we found in syslog:
E1015 07:48:49.386113 1323 remote_runtime.go:115] StopPodSandbox "d4e5d7ef1b5c3d13a4e537abbc7c4324e735d455969f7563287bcfc3f97b
085f" from runtime service failed: rpc error: code = DeadlineExceeded desc = context deadline exceeded
Currently facing this issue:
OS: Oracle Linux 7.5
Kernel: 4.17.5-1.el7.elrepo.x86_64
Kubernetes: 1.11.3
Flapping host: v1.11.3
Docker on flapping host: 18.03.1-ce (compiled with go1.9.5)
https://github.com/containernetworking/cni/pull/568 has been merged into CNI.
IIUC, with a new CNI release containing the above fix, this should be fixable in k8s.
Needs coordination - @bboreham @liucimin. Posting to sig-network.
Which version of kubernetes-cni includes the fix? Thanks!
The more focused issue about the timeout is #65743.
As mentioned there, the next step is on the Kubernetes side, e.g. creating a vendor bump, and verifying that the change actually fixes the issue. No release is needed to verify this; just build with the latest libCNI code.
/sig network
If you are hitting this in connection with OOM kills triggered by exec liveness probes, with pods stuck and docker ps not responding, see #72294. When a pod's infra container is killed and restarted, that triggers re-initialization by cni, which can then trigger the timeout/lock issue described above.
Seeing something similar here - PLEG constantly flapping between Ready/NotReady - and restarting the kubelet seems to fix it. I noticed the kubelet has a large number of goroutines (currently over 15,000) stuck on the following stack:
goroutine 29624527 [semacquire, 2766 minutes]:
sync.runtime_SemacquireMutex(0xc428facb3c, 0xc4216cca00)
/usr/local/go/src/runtime/sema.go:71 +0x3d
sync.(*Mutex).Lock(0xc428facb38)
/usr/local/go/src/sync/mutex.go:134 +0xee
k8s.io/kubernetes/pkg/kubelet/network.(*PluginManager).GetPodNetworkStatus(0xc420820980, 0xc429076242, 0xc, 0xc429076209, 0x38, 0x4dcdd86, 0x6, 0xc4297fa040, 0x40, 0x0, ...)
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/pkg/kubelet/network/plugins.go:395 +0x13d
k8s.io/kubernetes/pkg/kubelet/dockershim.(*dockerService).getIPFromPlugin(0xc4217c4500, 0xc429e21050, 0x40, 0xed3bf0000, 0x1af5b22d, 0xed3bf0bc6)
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/pkg/kubelet/dockershim/docker_sandbox.go:304 +0x1c6
k8s.io/kubernetes/pkg/kubelet/dockershim.(*dockerService).getIP(0xc4217c4500, 0xc4240d9dc0, 0x40, 0xc429e21050, 0xe55ef53, 0xed3bf0bc7)
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/pkg/kubelet/dockershim/docker_sandbox.go:333 +0xc4
k8s.io/kubernetes/pkg/kubelet/dockershim.(*dockerService).PodSandboxStatus(0xc4217c4500, 0xb38ad20, 0xc429e20ed0, 0xc4216214c0, 0xc4217c4500, 0x1, 0x0)
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/pkg/kubelet/dockershim/docker_sandbox.go:398 +0x291
k8s.io/kubernetes/pkg/kubelet/apis/cri/runtime/v1alpha2._RuntimeService_PodSandboxStatus_Handler(0x4d789e0, 0xc4217c4500, 0xb38ad20, 0xc429e20ed0, 0xc425afaf00, 0x0, 0x0, 0x0, 0x0, 0x2)
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/pkg/kubelet/apis/cri/runtime/v1alpha2/api.pb.go:4146 +0x276
k8s.io/kubernetes/vendor/google.golang.org/grpc.(*Server).processUnaryRPC(0xc420294640, 0xb399760, 0xc421940000, 0xc4264d8900, 0xc420d894d0, 0xb335000, 0x0, 0x0, 0x0)
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/google.golang.org/grpc/server.go:843 +0xab4
k8s.io/kubernetes/vendor/google.golang.org/grpc.(*Server).handleStream(0xc420294640, 0xb399760, 0xc421940000, 0xc4264d8900, 0x0)
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/google.golang.org/grpc/server.go:1040 +0x1528
k8s.io/kubernetes/vendor/google.golang.org/grpc.(*Server).serveStreams.func1.1(0xc42191c020, 0xc420294640, 0xb399760, 0xc421940000, 0xc4264d8900)
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/google.golang.org/grpc/server.go:589 +0x9f
created by k8s.io/kubernetes/vendor/google.golang.org/grpc.(*Server).serveStreams.func1
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/google.golang.org/grpc/server.go:587 +0xa1
I noticed that the number of goroutines stuck on this stack increases steadily over time (roughly one more every 2 minutes).
On the nodes where this happens there are usually pods stuck in Terminating; restarting the kubelet moves the Terminating pods along and the PLEG issue stops.
@pnovotnak if this sounds like the same issue that should be fixed by the change adding a timeout to CNI, or by something else, do you have any ideas? It looks like similar symptoms in the networking area.
I have the same question:
Which version of kubernetes-cni includes the fix? Thanks!
@warmchang the kubernetes-cni plugins package is irrelevant. The required change is in libcni, which is vendored from https://github.com/containernetworking/cni (copied into this repository).
The change gets merged in by copying; no release is required (although it might make you feel better).
@bboreham thanks for the reply.
You mean the CNI code (libcni) in the vendor directory, not the CNI plugins (flannel/calico etc.).
And I found this PR, https://github.com/kubernetes/kubernetes/pull/71653, waiting for approval.
/milestone v1.14
I hit this issue. My environment:
docker 18.06
os: centos 7.4
kernel: 4.19
kubelet: 1.12.3
My node was flapping between Ready and NotReady.
Before this, I deleted some pods with --force --grace-period=0. Those pods seem to remain in Terminating status even after being deleted.
After that, I found some logs in kubelet:
kubelet[10937]: I0306 19:23:32.474487 10937 handlers.go:62] Exec lifecycle hook ([/home/work/webserver/loadnginx.sh stop]) for Container "odp-saas" in Pod "saas-56bd6d8588-xlknh(15ebc67d-3bed-11e9-ba81-246e96352590)" failed - error: command '/home/work/webserver/loadnginx.sh stop' exited with 126: , message: "unable to find user work: no matching entries in passwd file\r\n"
It seems the deployment uses a preStop command in the lifecycle section:
lifecycle:
  preStop:
    exec:
      # SIGTERM triggers a quick exit; gracefully terminate instead
      command: ["/home/work/webserver/loadnginx.sh", "stop"]
And the other logs show:
kubelet[17119]: E0306 19:35:11.223925 17119 remote_runtime.go:282] ContainerStatus "cbc957993825885269935a343e899b807ea9a49cb9c7f94e68240846af3e701d" from runti
kubelet[17119]: E0306 19:35:11.223970 17119 kuberuntime_container.go:393] ContainerStatus for cbc957993825885269935a343e899b807ea9a49cb9c7f94e68240846af3e701d
kubelet[17119]: E0306 19:35:11.223978 17119 kuberuntime_manager.go:866] getPodContainerStatuses for pod "gz-saas-56bd6d8588-sk88t_storeic(1303430e-3ffa-11e9-ba8
kubelet[17119]: E0306 19:35:11.223994 17119 generic.go:241] PLEG: Ignoring events for pod saas-56bd6d8588-sk88t/storeic: rpc error: code = DeadlineExceeded d
kubelet[17119]: E0306 19:35:11.224123 17119 pod_workers.go:186] Error syncing pod 1303430e-3ffa-11e9-ba81-246e96352590 ("gz-saas-56bd6d8588-sk88t_storeic(130343
kubelet[17119]: E0306 19:35:12.509163 17119 remote_runtime.go:282] ContainerStatus "4ff7ff8e1eb18ede5eecbb03b60bdb0fd7f7831d8d7e81f59bc69d166d422fb6" from runti
kubelet[17119]: E0306 19:35:12.509163 17119 remote_runtime.go:282] ContainerStatus "cbc957993825885269935a343e899b807ea9a49cb9c7f94e68240846af3e701d" from runti
kubelet[17119]: E0306 19:35:12.509220 17119 kubelet_pods.go:1086] Failed killing the pod "saas-56bd6d8588-rsfh5": failed to "KillContainer" for "saas" wi
kubelet[17119]: E0306 19:35:12.509230 17119 kubelet_pods.go:1086] Failed killing the pod "saas-56bd6d8588-sk88t": failed to "KillContainer" for "saas" wi
kubelet[17119]: I0306 19:35:12.788887 17119 kubelet.go:1821] skipping pod synchronization - [PLEG is not healthy: pleg was last seen active 4m1.597223765s ago
k8s could not stop the containers, and those containers got stuck. That made PLEG unhealthy.
Finally, after removing the broken containers and restarting the docker daemon, the node recovered to Ready.
I don't know why the containers could not be stopped!!! Any leads would be appreciated!
/milestone v1.15
+1
k8s v1.10.5
docker 17.09.0-ce
+1
k8s v1.12.3
docker 06.18.2-ce
+1
k8s v1.13.4
docker-1.13.1-94.gitb2f74b2.el7.x86_64
@kubernetes/sig-network-bugs @thockin @spiffxp: friendly ping, this seems to have stalled again.
@calder (repeating the mention to trigger notifications)
@kubernetes/sig-network-bugs
In response to this:
@kubernetes/sig-network-bugs @thockin @spiffxp: friendly ping, this seems to have stalled again.
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
Hi,
This issue was found on one of our platforms. The only difference from our other clusters is that it has only one master node. In fact we recreated the cluster with three masters, and so far (a few days later) we haven't noticed the issue.
So my question is: has anyone noticed this issue on a multi-master (>= 3) cluster?
@Kanshiroron yes, we have a 3-master cluster and hit this issue on one worker node yesterday. Draining and restarting the node brought it back to normal. The platform is Docker EE with k8s v1.11.8 and Docker Enterprise 18.09.2-ee.
We have a 3-master cluster (3-node etcd cluster). There are 18 worker nodes, each running on average 50-100 Docker containers (not pods, total containers).
We see a clear positive correlation between spin-up events for our pods and needing to restart nodes due to PLEG errors. Sometimes a spin-up creates over 100 containers across the infrastructure; when that happens, it almost always results in PLEG errors.
Do you understand what causes this, at the node or cluster level?
I'm a bit removed from this now - do we know what's going on? @bboreham, do you have a fix (since you seemed to know what was happening)? Is there a PR?
This symptom can probably occur for many different reasons, so I don't feel the need to follow up on most of the "I have the same issue" comments here.
One of those ways is described in detail in https://github.com/kubernetes/kubernetes/issues/45419#issuecomment-405168344 and similarly in https://github.com/kubernetes/kubernetes/issues/45419#issuecomment-456081337 - when a call into CNI never returns, the kubelet breaks and can hang forever. Issue #65743 notes that a timeout needs to be added.
To address this, a Context was plumbed into libcni so that cancellation could be implemented with exec.CommandContext(). PR #71653 adds a timeout on the CRI side of that API.
(To be clear, none of this involves changes to CNI plugins; it is a change to the code that executes the plugins.)
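As a rough illustration of what plumbing a Context through buys the caller (this is not the actual kubelet or libcni code; delNetwork is a hypothetical stand-in for a plugin call that hangs):

package main

import (
	"context"
	"errors"
	"fmt"
	"time"
)

// delNetwork stands in for a plugin invocation that never comes back on its
// own; it only returns early because it watches the caller's context.
func delNetwork(ctx context.Context) error {
	select {
	case <-time.After(time.Hour): // simulate a plugin that never returns
		return nil
	case <-ctx.Done():
		return ctx.Err() // context.DeadlineExceeded once the timeout fires
	}
}

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), 2*time.Second)
	defer cancel()

	err := delNetwork(ctx)
	fmt.Println(errors.Is(err, context.DeadlineExceeded)) // prints true after ~2s
}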
Well, I got a chance to debug during a PLEG storm (that's what I'm calling it these days) and found some correlation between the PLEG errors reported by K8s and entries in the docker.service logs.
On two servers I found this.
From the script monitoring for the error:
Sat May 11 03:27:19 PDT 2019 - SERVER-A
Found: Ready False Sat, 11 May 2019 03:27:10 -0700 Sat, 11 May 2019 03:13:16 -0700 KubeletNotReady PLEG is not healthy: pleg was last seen active 16m53.660513472s ago; threshold is 3m0s
Matching entries from 'journalctl -u docker.service' output on SERVER-A:
May 11 03:10:20 SERVER-A dockerd[1133]: time="2019-05-11T03:10:20.641064617-07:00" level=error msg="stream copy error: reading from a closed fifo"
May 11 03:10:20 SERVER-A dockerd[1133]: time="2019-05-11T03:10:20.641083454-07:00" level=error msg="stream copy error: reading from a closed fifo"
May 11 03:10:20 SERVER-A dockerd[1133]: time="2019-05-11T03:10:20.740845910-07:00" level=error msg="Error running exec a9fe257c0fca6ff3bb05a7582015406e2f7f6a7db534b76ef1b87d297fb3dcb9 in container: OCI runtime exec failed: exec failed: container_linux.go:344: starting container process caused \"process_linux.go:113: writing config to pipe caused \\\"write init-p: broken pipe\\\"\": unknown"
May 11 03:10:20 SERVER-A dockerd[1133]: time="2019-05-11T03:10:20.767528843-07:00" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
27 lines of this^^ repeated
Then, on a different server, from my script:
Sat May 11 03:38:25 PDT 2019 - SERVER-B
Found: Ready False Sat, 11 May 2019 03:38:16 -0700 Sat, 11 May 2019 03:38:16 -0700 KubeletNotReady PLEG is not healthy: pleg was last seen active 3m6.168050703s ago; threshold is 3m0s
And from the Docker journal:
May 11 03:35:25 SERVER-B dockerd[1102]: time="2019-05-11T03:35:25.745124988-07:00" level=error msg="stream copy error: reading from a closed fifo"
May 11 03:35:25 SERVER-B dockerd[1102]: time="2019-05-11T03:35:25.745139806-07:00" level=error msg="stream copy error: reading from a closed fifo"
May 11 03:35:25 SERVER-B dockerd[1102]: time="2019-05-11T03:35:25.803182460-07:00" level=error msg="1a5dbb24b27cd516373473d34717edccc095e712238717ef051ce65022e10258 cleanup: failed to delete container from containerd: no such container"
May 11 03:35:25 SERVER-B dockerd[1102]: time="2019-05-11T03:35:25.803267414-07:00" level=error msg="Handler for POST /v1.38/containers/1a5dbb24b27cd516373473d34717edccc095e712238717ef051ce65022e10258/start returned error: OCI runtime create failed: container_linux.go:344: starting container process caused \"process_linux.go:297: getting the final child's pid from pipe caused \\\"EOF\\\"\": unknown"
May 11 03:35:25 SERVER-B dockerd[1102]: time="2019-05-11T03:35:25.876522066-07:00" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
May 11 03:35:25 SERVER-B dockerd[1102]: time="2019-05-11T03:35:25.964447832-07:00" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Unfortunately, when I check otherwise healthy nodes, I see these entries occurring there as well.
I'll try to correlate this with other variables, but searching for these error messages leads to some interesting discussions:
Kubernetes: increase the maximum number of pods per node #23349
That last link has a particularly interesting comment from @dElogics:
"Just adding some valuable information - if you run a large number of pods per node, you may hit #45419. As a fix, delete the docker directory and restart docker and kubelet together."
In my case, using K8s v1.10.2 and docker-ce v18.03.1, I found kubelet logs like these on a node flapping between Ready/NotReady:
E0512 09:17:56.721343 4065 pod_workers.go:186] Error syncing pod e5b8f48a-72c2-11e9-b8bf-005056871a33 ("uac-ddfb6d878-f6ph2_default(e5b8f48a-72c2-11e9-b8bf-005056871a33)"), skipping: rpc error: code = DeadlineExceeded desc = context deadline exceeded
E0512 09:17:17.154676 4065 kuberuntime_manager.go:859] PodSandboxStatus of sandbox "a34943dabe556924a2793f1be2f7181aede3621e2c61daef0838cf3fc61b7d1b" for pod "uac-ddfb6d878-f6ph2_default(e5b8f48a-72c2-11e9-b8bf-005056871a33)" error: rpc error: code = DeadlineExceeded desc = context deadline exceeded
And since I knew that the pod uac-ddfb6d878-f6ph2_default was Terminating, the workaround was to force-delete the pod and remove all of that pod's containers on the node. After that, the node works fine.
$ kubectl delete pod uac-ddfb6d878-f6ph2 --force --grace-period=0
$ docker ps -a | grep uac-ddfb6d878-f6ph2_default
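If you need to find candidates for that workaround on a particular node, something like this works (<node-name> is a placeholder; note that --force --grace-period=0 only removes the API object, the leftover containers still have to be cleaned up as above):
$ kubectl get pods --all-namespaces -o wide --field-selector spec.nodeName=<node-name> | grep Terminating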
Hello! Bug Freeze for 1.15 has started. Is this issue still planned to land in 1.15?
Hello,
We were hitting the same issue on an OKD cluster.
After investigating the flapping nodes and digging a bit deeper, we found what we believe is the problem.
When investigating the node flapping, we noticed that the load average on the flapping nodes was abnormally high; one of the nodes (16 cores, 32 threads, 96GB memory) had a load average of 850 at its peak.
Rook Ceph is running on three of the nodes.
Prometheus was using Rook Ceph block storage, and we discovered it was flooding the block device with reads/writes.
At the same time, ElasticSearch was also using Rook Ceph block storage. While Prometheus was trashing the block device, the ElasticSearch processes would attempt disk I/O and end up in uninterruptible sleep waiting for the I/O to complete.
Then another ES process would try the same.
Then another.
And another.
Eventually every CPU on the node had a thread reserved by an ES process, stuck in uninterruptible sleep waiting for the Ceph block device that Prometheus was trashing to free up.
Even when CPU load was not at 100%, the threads were tied up.
This left all other processes waiting for CPU time, Docker operations failing, PLEG timing out, and the node flapping.
Our fix was to restart the offending Prometheus pod.
OKD / K8s versions:
$ oc version
oc v3.11.0+0cbc58b
kubernetes v1.11.0+d4cacc0
features: Basic-Auth GSSAPI Kerberos SPNEGO
Server https://okd.example.net:8443
openshift v3.11.0+d0f1080-153
kubernetes v1.11.0+d4cacc0
Docker version on the nodes:
$ docker version
Client:
Version: 1.13.1
API version: 1.26
Package version: docker-1.13.1-88.git07f3374.el7.centos.x86_64
Go version: go1.9.4
Git commit: 07f3374/1.13.1
Built: Fri Dec 7 16:13:51 2018
OS/Arch: linux/amd64
Server:
Version: 1.13.1
API version: 1.26 (minimum version 1.12)
Package version: docker-1.13.1-88.git07f3374.el7.centos.x86_64
Go version: go1.9.4
Git commit: 07f3374/1.13.1
Built: Fri Dec 7 16:13:51 2018
OS/Arch: linux/amd64
Experimental: false
Edit:
To summarize, I don't think this is a K8s/OKD issue. I think the problem is that something on the node hogs a resource, processes pile up waiting for CPU time, and everything gets blocked.
/milestone v1.16
@bboreham @soggiest hello! I'm a bug triage shadow for the 1.16 release cycle, and considering this issue is tagged for 1.16 but hasn't been updated in a long time, I'd like to check its status. Code freeze starts on August 29 (about 1.5 weeks from now), which means PRs need to be ready (and merged) by then.
Is this issue still planned to be fixed in 1.16?
@makoscafee I can confirm that on 1.13.6 (and later versions) and docker 18.06.3-ce this has stopped happening for us.
For us, this seemed somehow related to timeouts when calling CNI or external integrations.
We ran into it recently in another scenario, when part of an NFS server used by the cluster crashed (and froze all I/O from the nodes), and the kubelet started reporting PLEG issues related to not being able to start new containers because of the I/O timeouts.
So the fact that we haven't seen it again on the cluster that had the networking-related problem with CNI and CRI may indicate that part has been resolved.
@makoscafee looking at the code, I don't think the kubelet has been updated to use CNI's new ability to cancel via context.
For example, calling CNI here: https://
That PR adds the timeout (#71653), but it is still open.
So I'm not sure what caused @rikatz's experience.
Indeed, I've done a lot of Calico upgrades since then, so maybe something changed there (rather than in Kubernetes code). Docker (which may have been the problem at the time) was also upgraded a few times, though I don't have the exact timing.
Sorry for not taking notes back when the issue was happening, and it's embarrassing that I can't tell you exactly what changed things today.
Hi everyone,
Just wanted to share our experience with this error.
We hit it on a newly deployed cluster running Docker EE 19.03.1 and k8s v1.14.3.
For us, the issue seemed to be caused by the logging driver. The Docker engines were set up to use the fluentd logging driver, but after the fresh deploy of the cluster, fluentd was not yet deployed. When we tried to schedule pods on the workers at that point, the same behavior described above occurred (PLEG errors randomly reported by the worker nodes' kubelets).
However, once fluentd was deployed and docker could connect to it, all issues went away. So not being able to communicate with fluentd appears to have been the root cause.
Hope this helps. Cheers
This is a long-standing issue (k8s 1.6!) that has plagued quite a number of k8s users.
Apart from overloaded nodes (maxed CPU / io / interrupts), PLEG issues can also be caused by subtle problems between kubelet, docker, logging, networking and so on, and the fix can sometimes be brutal (e.g. restarting all nodes).
As far as the original post is concerned, https://github.com/kubernetes/kubernetes/pull/71653 was finally merged and updates the kubelet so that CNI requests time out, cancelling the context when the limit is exceeded.
The fix is included in Kubernetes 1.16.
The PR was also cherry-picked back to 1.14 and 1.15, as those carry a CNI version with the new timeout capability (>= 0.7.0). 1.13 has an older CNI version without it.
Therefore this can finally be closed.
/close
@nikopen: Closing this issue.
In response to this:
This is a long-standing issue (k8s 1.6!) that has plagued quite a number of k8s users.
There are various things that can cause PLEG issues, generally complications between kubelet, docker, logging, networking and so on, and the fix can sometimes be brutal (e.g. restarting all nodes).
As far as the original post is concerned, https://github.com/kubernetes/kubernetes/pull/71653 was finally merged and updates the kubelet so that CNI requests time out, cancelling the context when the limit is exceeded.
The fix is included in Kubernetes 1.16.
The PR was also cherry-picked back to 1.14 and 1.15, as those carry a CNI version with the new timeout capability (>= 0.7.0). 1.13 has an older CNI version without it. Therefore this can finally be closed.
/close
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
From personal experience since 1.6 in production, PLEG issues usually occur when a node is drowning.
Result => the Docker daemon cannot respond.
From personal experience since 1.6 in production, PLEG issues usually occur when a node is drowning:
- CPU load extremely high
- disk I/O maxed out (logging?)
- global overload (CPU + disk + network) => the CPU is constantly interrupted
Result => the Docker daemon cannot respond.
I agree with this. We are using Kubernetes version 1.14.5 and have the same issue.
v1.13.10
Same issue here, running with calico networking.
/reopen
@nikopen: is that PR for 1.17? I can't find the PR number in the 1.16.1 changelog.
This issue isn't listed in the 1.14 changelog either. Does that mean it was not (yet) cherry-picked? Will it be?
To recover from "PLEG is not healthy":
systemctl disable docker && systemctl disable kubelet
reboot
rm -rf /var/lib/kubelet/pods/
rm -rf /var/lib/docker
systemctl start docker && systemctl enable docker
systemctl status docker
docker load -i xxx.tar
systemctl start kubelet && systemctl enable kubelet
systemctl status kubelet
@jackie-qiu to make sure the problem never comes back, I'd also recommend blowing the server up with a grenade and burying it ten meters underground...
Same issue with v1.15.6 running flannel networking.
Everything seems to have already been written here, so I won't add much about the cause. We are on an old server version, 1.10.13. We'd like to upgrade, but it's not that easy.
For us this happens mostly in one production environment and very rarely in the dev environment. In the production environment, which is constantly replicated, it only happens during rolling updates and only for two specific pods (no other pods are deleted during the rolling update). In our dev environment it also happens for other pods.
Here is what shows up in the logs.
When it succeeds:
Nov 27 11:34:45 ip-172-31-174-8 kubelet[8024]: 2019-11-27 11:34:45.453 [INFO][1946] client.go 202: Loading config from environment
Nov 27 11:34:45 ip-172-31-174-8 kubelet[8024]: 2019-11-27 11:34:45.454 [INFO][1946] calico-ipam.go 249: Releasing address using handleID handleID="k8s-pod-network.e923743c5dc4833e606bf16f388c564c20c4c1373b18881d8ea1c8eb617f6e62" workloadID="default.good-pod-name-557644b486-7rxw5"
Nov 27 11:34:45 ip-172-31-174-8 kubelet[8024]: 2019-11-27 11:34:45.454 [INFO][1946] ipam.go 738: Releasing all IPs with handle 'k8s-pod-network.e923743c5dc4833e606bf16f388c564c20c4c1373b18881d8ea1c8eb617f6e62'
Nov 27 11:34:45 ip-172-31-174-8 kubelet[8024]: 2019-11-27 11:34:45.498 [INFO][1946] ipam.go 877: Decrementing handle 'k8s-pod-network.e923743c5dc4833e606bf16f388c564c20c4c1373b18881d8ea1c8eb617f6e62' by 1
Nov 27 11:34:45 ip-172-31-174-8 kubelet[8024]: 2019-11-27 11:34:45.498 [INFO][1946] calico-ipam.go 257: Released address using handleID handleID="k8s-pod-network.e923743c5dc4833e606bf16f388c564c20c4c1373b18881d8ea1c8eb617f6e62" workloadID="default.good-pod-name-557644b486-7rxw5"
Nov 27 11:34:45 ip-172-31-174-8 kubelet[8024]: 2019-11-27 11:34:45.498 [INFO][1946] calico-ipam.go 261: Releasing address using workloadID handleID="k8s-pod-network.e923743c5dc4833e606bf16f388c564c20c4c1373b18881d8ea1c8eb617f6e62" workloadID="default.good-pod-name-557644b486-7rxw5"
Nov 27 11:34:45 ip-172-31-174-8 kubelet[8024]: 2019-11-27 11:34:45.498 [INFO][1946] ipam.go 738: Releasing all IPs with handle 'default.good-pod-name-557644b486-7rxw5'
Nov 27 11:34:45 ip-172-31-174-8 kubelet[8024]: Calico CNI deleting device in netns /proc/6337/ns/net
Nov 27 11:34:45 ip-172-31-174-8 kubelet[8024]: 2019-11-27 11:34:45.590 [INFO][1929] k8s.go 379: Teardown processing complete. Workload="default.good-pod-name-557644b486-7rxw5"
When it fails:
Nov 27 11:46:49 ip-172-31-174-8 kubelet[8024]: 2019-11-27 11:46:49.681 [INFO][5496] client.go 202: Loading config from environment
Nov 27 11:46:49 ip-172-31-174-8 kubelet[8024]: 2019-11-27 11:46:49.681 [INFO][5496] calico-ipam.go 249: Releasing address using handleID handleID="k8s-pod-network.3afc7f2064dc056cca5bb8c8ff20c81aaf6ee8b45a1346386c239b92527b945b" workloadID="default.bad-pod-name-5fc88df4b-rkw7m"
Nov 27 11:46:49 ip-172-31-174-8 kubelet[8024]: 2019-11-27 11:46:49.681 [INFO][5496] ipam.go 738: Releasing all IPs with handle 'k8s-pod-network.3afc7f2064dc056cca5bb8c8ff20c81aaf6ee8b45a1346386c239b92527b945b'
Nov 27 11:46:49 ip-172-31-174-8 kubelet[8024]: 2019-11-27 11:46:49.716 [INFO][5496] ipam.go 877: Decrementing handle 'k8s-pod-network.3afc7f2064dc056cca5bb8c8ff20c81aaf6ee8b45a1346386c239b92527b945b' by 1
Nov 27 11:46:49 ip-172-31-174-8 kubelet[8024]: 2019-11-27 11:46:49.716 [INFO][5496] calico-ipam.go 257: Released address using handleID handleID="k8s-pod-network.3afc7f2064dc056cca5bb8c8ff20c81aaf6ee8b45a1346386c239b92527b945b" workloadID="default.bad-pod-name-5fc88df4b-rkw7m"
Nov 27 11:46:49 ip-172-31-174-8 kubelet[8024]: 2019-11-27 11:46:49.716 [INFO][5496] calico-ipam.go 261: Releasing address using workloadID handleID="k8s-pod-network.3afc7f2064dc056cca5bb8c8ff20c81aaf6ee8b45a1346386c239b92527b945b" workloadID="default.bad-pod-name-5fc88df4b-rkw7m"
Nov 27 11:46:49 ip-172-31-174-8 kubelet[8024]: 2019-11-27 11:46:49.716 [INFO][5496] ipam.go 738: Releasing all IPs with handle 'default.bad-pod-name-5fc88df4b-rkw7m'
Nov 27 11:46:49 ip-172-31-174-8 kubelet[8024]: Calico CNI deleting device in netns /proc/7376/ns/net
Nov 27 11:46:51 ip-172-31-174-8 ntpd[8188]: Deleting interface #1232 cali8e016aaff48, fe80::ecee:eeff:feee:eeee%816#123, interface stats: received=0, sent=0, dropped=0, active_time=242773 secs
Nov 27 11:46:59 ip-172-31-174-8 kernel: [11155281.312094] unregister_netdevice: waiting for eth0 to become free. Usage count = 1
Has anyone upgraded to v1.16? Can anyone confirm whether it is fixed and you no longer hit PLEG issues? This happens frequently in our production environment and the only option is to restart the node.
I have a question about the fix.
Say we install a new version that includes the timeout fix. I understand it lets the kubelet free itself up and allows pods stuck in Terminating to go down, but does it free eth0? Will new pods be able to run on that node, or will it stay flapping Ready/NotReady?
In my case, Docker 19.03.4 fixed both the pods stuck in Terminating and the node flapping between Ready/NotReady due to PLEG.
The Kubernetes version hasn't changed from 1.15.6; the only change in the cluster was the newer Docker.
I upgraded the Ubuntu 16.04 kernel from 4.4 to 4.15. It took three days for the error to recur.
I'll check whether Docker can be upgraded from 17 to 19 on ubuntu 16.04 as hakman suggested.
I don't want to upgrade the Ubuntu version.
There is no way to upgrade docker to 19 on k8s 1.10. I'd have to upgrade to 1.15 first, but since there's no way to upgrade straight to 1.15 it will take some time. I have to upgrade one version at a time: 1.10 -> 1.11 -> 1.12, and so on.
The PLEG health check does very little. On each relist it calls docker ps to detect container state changes, and then docker ps and inspect to get the details of those containers.
After finishing each relist, it updates a timestamp. If the timestamp hasn't been updated for a while (i.e. 3 minutes), the health check fails. Unless you pack a huge number of pods onto the node so that PLEG cannot finish all of this within 3 minutes (which shouldn't happen), the most likely cause is that Docker is slow. You may only observe a slow docker ps occasionally, but that doesn't mean it isn't happening. Not surfacing the unhealthy status would hide more problems from users and could cause many more issues; for example, the kubelet would not react to changes in a timely manner and could silently get confused.
Suggestions on how to make this easier to debug are welcome...
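A minimal, self-contained sketch of the health check described above (not the real PLEG code; only the 3-minute threshold is taken from this thread): a relist loop stamps a time after each completed pass, and the health check only compares that stamp against the threshold, so a runtime call that never returns simply lets the stamp go stale.

package main

import (
	"fmt"
	"sync/atomic"
	"time"
)

type pleg struct {
	lastRelist atomic.Value // time.Time of the last completed relist
}

// relist would call the runtime (docker ps / inspect) and emit pod events;
// it is stubbed out here so the sketch stays self-contained.
func (p *pleg) relist(runtimeCall func()) {
	runtimeCall()                  // if this blocks, the timestamp never refreshes
	p.lastRelist.Store(time.Now()) // only updated after a full pass
}

// healthy mirrors the check behind "PLEG is not healthy": it only looks at
// how stale the last successful relist is.
func (p *pleg) healthy(threshold time.Duration) error {
	last := p.lastRelist.Load().(time.Time)
	if age := time.Since(last); age > threshold {
		return fmt.Errorf("pleg was last seen active %v ago; threshold is %v", age, threshold)
	}
	return nil
}

func main() {
	p := &pleg{}
	p.relist(func() {}) // a fast runtime keeps the node Ready
	fmt.Println(p.healthy(3 * time.Minute))

	// simulate a runtime call that took "too long" by back-dating the stamp
	p.lastRelist.Store(time.Now().Add(-4 * time.Minute))
	fmt.Println(p.healthy(3 * time.Minute))
}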
This is a long-standing issue (k8s 1.6!) that has plagued quite a number of k8s users.
Apart from overloaded nodes (maxed CPU / io / interrupts), PLEG issues can also be caused by subtle problems between kubelet, docker, logging, networking and so on, and the fix can sometimes be brutal (e.g. restarting all nodes).
As far as the original post is concerned, #71653 was finally merged and updates the kubelet so that CNI requests time out, cancelling the context when the limit is exceeded.
The fix is included in Kubernetes 1.16.
The PR was also cherry-picked back to 1.14 and 1.15, as those carry a CNI version with the new timeout capability (>= 0.7.0). 1.13 has an older CNI version without it. Therefore this can finally be closed.
/close
I'm confused... if this can be caused by a slow docker daemon, why would it be fixed just by adding a timeout to the cni calls?
I'm using containerd + kubernetes 1.16, and this happens easily when there are 191 containers per node. Should I lower that number? Or is there a better solution? @yujuhong
@haosdent check whether the fix is merged into the Kubernetes version you are using. If you're on 1.16, it needs to be the latest release. Or upgrade to 1.17 and you'll have it for sure.
I had the same question as @haosdent.
So it looks like v1.16.7 or v1.17.0 is the minimum k8s release needed to get this fix.
I'm running v1.16.7 on AWS, provisioned by kops, with minimal load, using cilium v1.6.5 and a kops debian image with the kernel upgraded to 4.19.
:man_shrugging: still hitting it :/
So this probably needs further investigation.
Sidenote: it also happens on ubuntu with v1.16.4 provisioned by kubespray.
For now, restarting the node resolves it for a short while.
It only happens on c5.large ec2 nodes.
Docker is 18.04 in both cases, so I'll try upgrading docker to 19.03.4 as mentioned above.
This issue may be caused by an old version of systemd; please try upgrading systemd.
References:
https://my.oschina.net/yunqi/blog/3041189 (Chinese only)
https://github.com/lnykryn/systemd-rhel/pull/322
We also see this issue on 1.16.8 + docker 18.06.2:
# docker info
Containers: 186
Running: 155
Paused: 0
Stopped: 31
Images: 48
Server Version: 18.06.2-ce
Storage Driver: overlay2
Backing Filesystem: extfs
Supports d_type: true
Native Overlay Diff: true
Logging Driver: json-file
Cgroup Driver: cgroupfs
Plugins:
Volume: local
Network: bridge host macvlan null overlay
Log: awslogs fluentd gcplogs gelf journald json-file logentries splunk syslog
Swarm: inactive
Runtimes: nvidia runc
Default Runtime: nvidia
Init Binary: docker-init
containerd version: 468a545b9edcd5932818eb9de8e72413e616e86e
runc version: 6635b4f0c6af3810594d2770f662f34ddc15b40d-dirty (expected: 69663f0bd4b60df09991c08812a60108003fa340)
init version: fec3683
Security Options:
apparmor
seccomp
Profile: default
Kernel Version: 5.0.0-1027-aws
Operating System: Ubuntu 18.04.4 LTS
OSType: linux
Architecture: x86_64
CPUs: 48
Total Memory: 373.8GiB
Name: node-cmp-test-kubecluster-2-0a03fdfa
ID: E74R:BMMI:XOFX:BK4X:53AT:JQLZ:CDF6:M6X7:J56G:2DTZ:OTRK:5OJB
Docker Root Dir: /mnt/docker
Debug Mode (client): false
Debug Mode (server): false
Registry: https://index.docker.io/v1/
Labels:
Experimental: false
Insecure Registries:
127.0.0.0/8
Live Restore Enabled: true
WARNING: No swap limit support
Note that before a node starts flapping because PLEG is not healthy, you may be hitting timeouts writing to the docker socket file. In that case the kernel can kill the stuck process and the node can recover. But in many other cases the node may not recover, and you may not even be able to SSH in; it can be a combination of many different problems.
One of the biggest pain points, as a platform provider, is that Docker can go sideways well before PLEG reports "unhealthy", so users always end up reporting the error to us instead of us proactively detecting the problem and cleaning it up before it confuses them. Two interesting manifestations in the metrics when the problem occurs.
I'm looking into Docker metrics to see whether alerts can be set.
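Assuming Prometheus already scrapes the kubelet /metrics endpoint (the same metrics queried with curl earlier in this thread), one hedged starting point is to alert on the relist latency long before it approaches the 3m0s threshold; the 10-second figure below is an arbitrary example, not a tested value:
# PromQL, illustrative only: median PLEG relist latency above 10 seconds
kubelet_pleg_relist_latency_microseconds{quantile="0.5"} > 10 * 1000 * 1000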
May 8 16:32:25 node-k8s-use1-prod-shared-001-kubecluster-3-0a0c414b dockerd[1747]: time="2020-05-08T16:32:25Z" level=info msg="shim reaped" id=522fbf813ab6c63b17f517a070a5ebc82df7c8f303927653e466b2d12974cf45
--
May 8 16:32:25 node-k8s-use1-prod-shared-001-kubecluster-3-0a0c414b dockerd[1747]: time="2020-05-08T16:32:25.557712045Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
May 8 16:32:26 node-k8s-use1-prod-shared-001-kubecluster-3-0a0c414b dockerd[1747]: time="2020-05-08T16:32:26.204921094Z" level=warning msg="Your kernel does not support swap limit capabilities,or the cgroup is not mounted. Memory limited without swap."
May 8 16:32:26 node-k8s-use1-prod-shared-001-kubecluster-3-0a0c414b dockerd[1747]: time="2020-05-08T16:32:26Z" level=info msg="shim docker-containerd-shim started" address="/containerd-shim/moby/679b08e796acdd04b40802f2feff8086d7ba7f96182dcf874bb652fa9d9a7aec/shim.sock" debug=false pid=6592
May 8 16:32:26 node-k8s-use1-prod-shared-001-kubecluster-3-0a0c414b dockerd[1747]: time="2020-05-08T16:32:26Z" level=info msg="shim docker-containerd-shim started" address="/containerd-shim/moby/2ef0c4109b9cd128ae717d5c55bbd59810f88f3d8809424b620793729ab304c3/shim.sock" debug=false pid=6691
May 8 16:32:26 node-k8s-use1-prod-shared-001-kubecluster-3-0a0c414b dockerd[1747]: time="2020-05-08T16:32:26.871411364Z" level=warning msg="Your kernel does not support swap limit capabilities,or the cgroup is not mounted. Memory limited without swap."
May 8 16:32:26 node-k8s-use1-prod-shared-001-kubecluster-3-0a0c414b dockerd[1747]: time="2020-05-08T16:32:26Z" level=info msg="shim docker-containerd-shim started" address="/containerd-shim/moby/905b3c35be073388e3c037da65fe55bdb4f4b236b86dcf1e1698d6987dfce28c/shim.sock" debug=false pid=6790
May 8 16:32:27 node-k8s-use1-prod-shared-001-kubecluster-3-0a0c414b dockerd[1747]: time="2020-05-08T16:32:27Z" level=info msg="shim docker-containerd-shim started" address="/containerd-shim/moby/b4e6991f9837bf82533569d83a942fd8f3ae9fa869d5a0e760a967126f567a05/shim.sock" debug=false pid=6884
May 8 16:32:42 node-k8s-use1-prod-shared-001-kubecluster-3-0a0c414b dockerd[1747]: time="2020-05-08T16:32:42.409620423Z" level=warning msg="Your kernel does not support swap limit capabilities,or the cgroup is not mounted. Memory limited without swap."
May 8 16:37:28 node-k8s-use1-prod-shared-001-kubecluster-3-0a0c414b dockerd[1747]: time="2020-05-08T16:37:27Z" level=info msg="shim reaped" id=2ef0c4109b9cd128ae717d5c55bbd59810f88f3d8809424b620793729ab304c3
May 8 16:37:28 node-k8s-use1-prod-shared-001-kubecluster-3-0a0c414b dockerd[1747]: time="2020-05-08T16:37:28.400830650Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
May 8 16:37:30 node-k8s-use1-prod-shared-001-kubecluster-3-0a0c414b dockerd[1747]: time="2020-05-08T16:37:29Z" level=info msg="shim reaped" id=905b3c35be073388e3c037da65fe55bdb4f4b236b86dcf1e1698d6987dfce28c
May 8 16:37:30 node-k8s-use1-prod-shared-001-kubecluster-3-0a0c414b dockerd[1747]: time="2020-05-08T16:37:30.316345816Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
May 8 16:37:30 node-k8s-use1-prod-shared-001-kubecluster-3-0a0c414b dockerd[1747]: time="2020-05-08T16:37:30Z" level=info msg="shim reaped" id=b4e6991f9837bf82533569d83a942fd8f3ae9fa869d5a0e760a967126f567a05
May 8 16:37:30 node-k8s-use1-prod-shared-001-kubecluster-3-0a0c414b dockerd[1747]: time="2020-05-08T16:37:30.931134481Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
May 8 16:37:35 node-k8s-use1-prod-shared-001-kubecluster-3-0a0c414b dockerd[1747]: time="2020-05-08T16:37:35Z" level=info msg="shim reaped" id=679b08e796acdd04b40802f2feff8086d7ba7f96182dcf874bb652fa9d9a7aec
May 8 16:37:36 node-k8s-use1-prod-shared-001-kubecluster-3-0a0c414b dockerd[1747]: time="2020-05-08T16:37:36.747358875Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
May 8 16:39:31 node-k8s-use1-prod-shared-001-kubecluster-3-0a0c414b kernel: [63281.723692] mybr0: port 2(veth3f150f6c) entered disabled state
May 8 16:39:31 node-k8s-use1-prod-shared-001-kubecluster-3-0a0c414b kernel: [63281.752694] device veth3f150f6c left promiscuous mode
May 8 16:39:31 node-k8s-use1-prod-shared-001-kubecluster-3-0a0c414b kernel: [63281.756449] mybr0: port 2(veth3f150f6c) entered disabled state
May 8 16:39:35 node-k8s-use1-prod-shared-001-kubecluster-3-0a0c414b dockerd[1747]: time="2020-05-08T16:39:34Z" level=info msg="shim reaped" id=fa731d8d33f9d5a8aef457e5dab43170c1aedb529ce9221fd6d916a4dba07ff1
May 8 16:39:35 node-k8s-use1-prod-shared-001-kubecluster-3-0a0c414b dockerd[1747]: time="2020-05-08T16:39:35.106265137Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
May 8 16:42:12 node-k8s-use1-prod-shared-001-kubecluster-3-0a0c414b kernel: [63442.505842] INFO: task dockerd:7970 blocked for more than 120 seconds.
May 8 16:42:12 node-k8s-use1-prod-shared-001-kubecluster-3-0a0c414b kernel: [63442.510931] Not tainted 5.0.0-1019-aws #21~18.04.1-Ubuntu
May 8 16:42:12 node-k8s-use1-prod-shared-001-kubecluster-3-0a0c414b kernel: [63442.515010] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
May 8 16:42:12 node-k8s-use1-prod-shared-001-kubecluster-3-0a0c414b kernel: [63442.521419] dockerd D 0 7970 1 0x00000080
May 8 16:42:12 node-k8s-use1-prod-shared-001-kubecluster-3-0a0c414b kernel: [63442.525333] Call Trace:
May 8 16:42:12 node-k8s-use1-prod-shared-001-kubecluster-3-0a0c414b kernel: [63442.528060] __schedule+0x2c0/0x870
May 8 16:42:12 node-k8s-use1-prod-shared-001-kubecluster-3-0a0c414b kernel: [63442.531107] schedule+0x2c/0x70
May 8 16:42:12 node-k8s-use1-prod-shared-001-kubecluster-3-0a0c414b kernel: [63442.534027] rwsem_down_write_failed+0x157/0x350
May 8 16:42:12 node-k8s-use1-prod-shared-001-kubecluster-3-0a0c414b kernel: [63442.537630] ? blk_finish_plug+0x2c/0x40
May 8 16:42:12 node-k8s-use1-prod-shared-001-kubecluster-3-0a0c414b kernel: [63442.540890] ? generic_writepages+0x68/0x90
May 8 16:42:12 node-k8s-use1-prod-shared-001-kubecluster-3-0a0c414b kernel: [63442.544296] call_rwsem_down_write_failed+0x17/0x30
May 8 16:42:12 node-k8s-use1-prod-shared-001-kubecluster-3-0a0c414b kernel: [63442.547999] ? call_rwsem_down_write_failed+0x17/0x30
May 8 16:42:12 node-k8s-use1-prod-shared-001-kubecluster-3-0a0c414b kernel: [63442.551674] down_write+0x2d/0x40
May 8 16:42:12 node-k8s-use1-prod-shared-001-kubecluster-3-0a0c414b kernel: [63442.554612] sync_inodes_sb+0xb9/0x2c0
May 8 16:42:12 node-k8s-use1-prod-shared-001-kubecluster-3-0a0c414b kernel: [63442.557762] ? __filemap_fdatawrite_range+0xcd/0x100
May 8 16:42:12 node-k8s-use1-prod-shared-001-kubecluster-3-0a0c414b kernel: [63442.561468] __sync_filesystem+0x1b/0x60
May 8 16:42:12 node-k8s-use1-prod-shared-001-kubecluster-3-0a0c414b kernel: [63442.564697] sync_filesystem+0x3c/0x50
May 8 16:42:12 node-k8s-use1-prod-shared-001-kubecluster-3-0a0c414b kernel: [63442.568544] ovl_sync_fs+0x3f/0x60 [overlay]
May 8 16:42:12 node-k8s-use1-prod-shared-001-kubecluster-3-0a0c414b kernel: [63442.572831] __sync_filesystem+0x33/0x60
May 8 16:42:12 node-k8s-use1-prod-shared-001-kubecluster-3-0a0c414b kernel: [63442.576767] sync_filesystem+0x3c/0x50
May 8 16:42:12 node-k8s-use1-prod-shared-001-kubecluster-3-0a0c414b kernel: [63442.580565] generic_shutdown_super+0x27/0x120
May 8 16:42:12 node-k8s-use1-prod-shared-001-kubecluster-3-0a0c414b kernel: [63442.584632] kill_anon_super+0x12/0x30
May 8 16:42:12 node-k8s-use1-prod-shared-001-kubecluster-3-0a0c414b kernel: [63442.587958] deactivate_locked_super+0x48/0x80
May 8 16:42:12 node-k8s-use1-prod-shared-001-kubecluster-3-0a0c414b kernel: [63442.591696] deactivate_super+0x40/0x60
May 8 16:42:12 node-k8s-use1-prod-shared-001-kubecluster-3-0a0c414b kernel: [63442.594998] cleanup_mnt+0x3f/0x90
May 8 16:42:12 node-k8s-use1-prod-shared-001-kubecluster-3-0a0c414b kernel: [63442.598081] __cleanup_mnt+0x12/0x20
May 8 16:42:12 node-k8s-use1-prod-shared-001-kubecluster-3-0a0c414b kernel: [63442.601194] task_work_run+0x9d/0xc0
May 8 16:42:12 node-k8s-use1-prod-shared-001-kubecluster-3-0a0c414b kernel: [63442.604388] exit_to_usermode_loop+0xf2/0x100
May 8 16:42:12 node-k8s-use1-prod-shared-001-kubecluster-3-0a0c414b kernel: [63442.607843] do_syscall_64+0x107/0x120
May 8 16:42:12 node-k8s-use1-prod-shared-001-kubecluster-3-0a0c414b kernel: [63442.611173] entry_SYSCALL_64_after_hwframe+0x44/0xa9
May 8 16:42:12 node-k8s-use1-prod-shared-001-kubecluster-3-0a0c414b kernel: [63442.615128] RIP: 0033:0x556561f280e0
May 8 16:42:12 node-k8s-use1-prod-shared-001-kubecluster-3-0a0c414b kernel: [63442.618303] Code: Bad RIP value.
May 8 16:42:12 node-k8s-use1-prod-shared-001-kubecluster-3-0a0c414b kernel: [63442.621256] RSP: 002b:000000c428ec51c0 EFLAGS: 00000206 ORIG_RAX: 00000000000000a6
May 8 16:42:12 node-k8s-use1-prod-shared-001-kubecluster-3-0a0c414b kernel: [63442.627790] RAX: 0000000000000000 RBX: 0000000000000000 RCX: 0000556561f280e0
May 8 16:42:12 node-k8s-use1-prod-shared-001-kubecluster-3-0a0c414b kernel: [63442.632469] RDX: 0000000000000000 RSI: 0000000000000002 RDI: 000000c4268a0d20
May 8 16:42:12 node-k8s-use1-prod-shared-001-kubecluster-3-0a0c414b kernel: [63442.637203] RBP: 000000c428ec5220 R08: 0000000000000000 R09: 0000000000000000
May 8 16:42:12 node-k8s-use1-prod-shared-001-kubecluster-3-0a0c414b kernel: [63442.641900] R10: 0000000000000000 R11: 0000000000000206 R12: ffffffffffffffff
May 8 16:42:12 node-k8s-use1-prod-shared-001-kubecluster-3-0a0c414b kernel: [63442.646535] R13: 0000000000000024 R14: 0000000000000023 R15: 0000000000000055
May 8 16:42:12 node-k8s-use1-prod-shared-001-kubecluster-3-0a0c414b kernel: [63442.651404] INFO: task dockerd:33393 blocked for more than 120 seconds.
May 8 16:42:12 node-k8s-use1-prod-shared-001-kubecluster-3-0a0c414b kernel: [63442.655956] Not tainted 5.0.0-1019-aws #21~18.04.1-Ubuntu
May 8 16:42:12 node-k8s-use1-prod-shared-001-kubecluster-3-0a0c414b kernel: [63442.660155] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
May 8 16:42:12 node-k8s-use1-prod-shared-001-kubecluster-3-0a0c414b kernel: [63442.666562] dockerd D 0 33393 1 0x00000080
May 8 16:42:12 node-k8s-use1-prod-shared-001-kubecluster-3-0a0c414b kernel: [63442.670561] Call Trace:
May 8 16:42:12 node-k8s-use1-prod-shared-001-kubecluster-3-0a0c414b kernel: [63442.673299] __schedule+0x2c0/0x870
May 8 16:42:12 node-k8s-use1-prod-shared-001-kubecluster-3-0a0c414b kernel: [63442.676435] schedule+0x2c/0x70
May 8 16:42:12 node-k8s-use1-prod-shared-001-kubecluster-3-0a0c414b kernel: [63442.679556] rwsem_down_write_failed+0x157/0x350
May 8 16:42:12 node-k8s-use1-prod-shared-001-kubecluster-3-0a0c414b kernel: [63442.683276] ? blk_finish_plug+0x2c/0x40
May 8 16:42:12 node-k8s-use1-prod-shared-001-kubecluster-3-0a0c414b kernel: [63442.686744] ? generic_writepages+0x68/0x90
May 8 16:42:12 node-k8s-use1-prod-shared-001-kubecluster-3-0a0c414b kernel: [63442.690442] call_rwsem_down_write_failed+0x17/0x30
May 8 16:42:12 node-k8s-use1-prod-shared-001-kubecluster-3-0a0c414b kernel: [63442.694243] ? call_rwsem_down_write_failed+0x17/0x30
May 8 16:42:12 node-k8s-use1-prod-shared-001-kubecluster-3-0a0c414b kernel: [63442.698019] down_write+0x2d/0x40
May 8 16:42:12 node-k8s-use1-prod-shared-001-kubecluster-3-0a0c414b kernel: [63442.700996] sync_inodes_sb+0xb9/0x2c0
May 8 16:42:12 node-k8s-use1-prod-shared-001-kubecluster-3-0a0c414b kernel: [63442.704283] ? __filemap_fdatawrite_range+0xcd/0x100
May 8 16:42:12 node-k8s-use1-prod-shared-001-kubecluster-3-0a0c414b kernel: [63442.708127] __sync_filesystem+0x1b/0x60
May 8 16:42:12 node-k8s-use1-prod-shared-001-kubecluster-3-0a0c414b kernel: [63442.711511] sync_filesystem+0x3c/0x50
May 8 16:42:12 node-k8s-use1-prod-shared-001-kubecluster-3-0a0c414b kernel: [63442.714806] ovl_sync_fs+0x3f/0x60 [overlay]
May 8 16:42:12 node-k8s-use1-prod-shared-001-kubecluster-3-0a0c414b kernel: [63442.718349] __sync_filesystem+0x33/0x60
May 8 16:42:12 node-k8s-use1-prod-shared-001-kubecluster-3-0a0c414b kernel: [63442.721665] sync_filesystem+0x3c/0x50
May 8 16:42:12 node-k8s-use1-prod-shared-001-kubecluster-3-0a0c414b kernel: [63442.724860] generic_shutdown_super+0x27/0x120
May 8 16:42:12 node-k8s-use1-prod-shared-001-kubecluster-3-0a0c414b kernel: [63442.728449] kill_anon_super+0x12/0x30
May 8 16:42:12 node-k8s-use1-prod-shared-001-kubecluster-3-0a0c414b kernel: [63442.731817] deactivate_locked_super+0x48/0x80
May 8 16:42:12 node-k8s-use1-prod-shared-001-kubecluster-3-0a0c414b kernel: [63442.735511] deactivate_super+0x40/0x60
May 8 16:42:12 node-k8s-use1-prod-shared-001-kubecluster-3-0a0c414b kernel: [63442.738899] cleanup_mnt+0x3f/0x90
May 8 16:42:12 node-k8s-use1-prod-shared-001-kubecluster-3-0a0c414b kernel: [63442.742023] __cleanup_mnt+0x12/0x20
May 8 16:42:12 node-k8s-use1-prod-shared-001-kubecluster-3-0a0c414b kernel: [63442.745142] task_work_run+0x9d/0xc0
May 8 16:42:12 node-k8s-use1-prod-shared-001-kubecluster-3-0a0c414b kernel: [63442.748337] exit_to_usermode_loop+0xf2/0x100
May 8 16:42:12 node-k8s-use1-prod-shared-001-kubecluster-3-0a0c414b kernel: [63442.751830] do_syscall_64+0x107/0x120
May 8 16:42:12 node-k8s-use1-prod-shared-001-kubecluster-3-0a0c414b kernel: [63442.755145] entry_SYSCALL_64_after_hwframe+0x44/0xa9
May 8 16:42:12 node-k8s-use1-prod-shared-001-kubecluster-3-0a0c414b kernel: [63442.759111] RIP: 0033:0x556561f280e0
May 8 16:42:12 node-k8s-use1-prod-shared-001-kubecluster-3-0a0c414b kernel: [63442.762292] Code: Bad RIP value.
May 8 16:42:12 node-k8s-use1-prod-shared-001-kubecluster-3-0a0c414b kernel: [63442.765237] RSP: 002b:000000c4289c51c0 EFLAGS: 00000206 ORIG_RAX: 00000000000000a6
May 8 16:42:12 node-k8s-use1-prod-shared-001-kubecluster-3-0a0c414b kernel: [63442.771715] RAX: 0000000000000000 RBX: 0000000000000000 RCX: 0000556561f280e0
May 8 16:42:12 node-k8s-use1-prod-shared-001-kubecluster-3-0a0c414b kernel: [63442.776351] RDX: 0000000000000000 RSI: 0000000000000002 RDI: 000000c4252e5e60
May 8 16:42:12 node-k8s-use1-prod-shared-001-kubecluster-3-0a0c414b kernel: [63442.781025] RBP: 000000c4289c5220 R08: 0000000000000000 R09: 0000000000000000
May 8 16:42:12 node-k8s-use1-prod-shared-001-kubecluster-3-0a0c414b kernel: [63442.785705] R10: 0000000000000000 R11: 0000000000000206 R12: ffffffffffffffff
May 8 16:42:12 node-k8s-use1-prod-shared-001-kubecluster-3-0a0c414b kernel: [63442.790445] R13: 0000000000000052 R14: 0000000000000051 R15: 0000000000000055
May 8 16:43:40 node-k8s-use1-prod-shared-001-kubecluster-3-0a0c414b dockerd[1747]: time="2020-05-08T16:43:40.153619029Z" level=error msg="Handler for GET /containers/679b08e796acdd04b40802f2feff8086d7ba7f96182dcf874bb652fa9d9a7aec/json returned error: write unix /var/run/docker.sock->@: write: broken pipe"
May 8 16:43:40 node-k8s-use1-prod-shared-001-kubecluster-3-0a0c414b dockerd[1747]: http: multiple response.WriteHeader calls
May 8 16:44:15 node-k8s-use1-prod-shared-001-kubecluster-3-0a0c414b dockerd[1747]: time="2020-05-08T16:44:15.461023232Z" level=error msg="Handler for GET /containers/fa731d8d33f9d5a8aef457e5dab43170c1aedb529ce9221fd6d916a4dba07ff1/json returned error: write unix /var/run/docker.sock->@: write: broken pipe"
May 8 16:44:15 node-k8s-use1-prod-shared-001-kubecluster-3-0a0c414b dockerd[1747]: time="2020-05-08T16:44:15.461331976Z" level=error msg="Handler for GET /containers/fa731d8d33f9d5a8aef457e5dab43170c1aedb529ce9221fd6d916a4dba07ff1/json returned error: write unix /var/run/docker.sock->@: write: broken pipe"
May 8 16:44:15 node-k8s-use1-prod-shared-001-kubecluster-3-0a0c414b dockerd[1747]: http: multiple response.WriteHeader calls
May 8 16:44:15 node-k8s-use1-prod-shared-001-kubecluster-3-0a0c414b dockerd[1747]: http: multiple response.WriteHeader calls
May 8 16:59:55 node-k8s-use1-prod-shared-001-kubecluster-3-0a0c414b dockerd[1747]: time="2020-05-08T16:59:55.489826112Z" level=info msg="No non-localhost DNS nameservers are left in resolv.conf. Using default external servers: [nameserver 8.8.8.8 nameserver 8.8.4.4]"
May 8 16:59:55 node-k8s-use1-prod-shared-001-kubecluster-3-0a0c414b dockerd[1747]: time="2020-05-08T16:59:55.489858794Z" level=info msg="IPv6 enabled; Adding default IPv6 external servers: [nameserver 2001:4860:4860::8888 nameserver 2001:4860:4860::8844]"
May 8 16:59:55 node-k8s-use1-prod-shared-001-kubecluster-3-0a0c414b dockerd[1747]: time="2020-05-08T16:59:55Z" level=info msg="shim docker-containerd-shim started" address="/containerd-shim/moby/5b85357b1e7b41f230a05d65fc97e6bdcf10537045db2e97ecbe66a346e40644/shim.sock" debug=false pid=5285
May 8 16:59:57 node-k8s-use1-prod-shared-001-kubecluster-3-0a0c414b dockerd[1747]: time="2020-05-08T16:59:57Z" level=info msg="shim docker-containerd-shim started" address="/containerd-shim/moby/89c6e4f2480992f94e3dbefb1cbe0084a8e5637588296a1bb40df0dcca662cf0/shim.sock" debug=false pid=6776
May 8 16:59:58 node-k8s-use1-prod-shared-001-kubecluster-3-0a0c414b dockerd[1747]: time="2020-05-08T16:59:58Z" level=info msg="shim reaped" id=89c6e4f2480992f94e3dbefb1cbe0084a8e5637588296a1bb40df0dcca662cf0
Just sharing what caused this for us.
We were running a container that, over up to three days, spawned so many processes that it hit the maximum number of processes. Because no new processes could be spawned, the system froze completely (and the PLEG warnings followed).
So for us this was an unrelated problem. Thanks for all the help! (+1)
I had two problems, possibly related:
- I think the host ran out of something, but I haven't recreated enough of the cluster to say so with complete confidence. I don't think I changed anything _directly_ that would have caused it.
- The odder issue of containers not being able to connect to anything.
Suspiciously, all of the PLEG problems occurred at the same time as the Weave network problems.
Bryan @ weaveworks pointed me at a coreos issue. CoreOS has a rather aggressive tendency to try to manage bridges and basically everything. Stopping CoreOS from doing that for anything other than `lo` and the actual physical interfaces on the host left all of those problems behind. Are people still having problems running coreos?
@deitch do you remember the changes you made?
I found this: https:
It may be related to what @deitch suggested. But I'd like to know whether there is a proper or more elegant solution, such as creating a unit for veth* and leaving those interfaces unmanaged.
I think I've found the root cause of the problem we saw here.
docker can get confused between `docker ps` and `docker inspect`. While a container is being torn down, `docker ps` can still show cached information about containers, including ones whose shim has already been reaped:
time="2020-06-01T23:39:03Z" level=info msg="shim docker-containerd-shim started" address="/containerd-shim/moby/b7ae929025205a7ea9eeaec24bc0526bf642052edff6c7849bc5cc7b9afb9121/shim.sock" debug=false pid=11377
Jun 02 03:23:06 node-k8s-use1-prod-shared-001-kubecluster-3-0a0c5d69 dockerd[1731]: time="2020-06-02T03:23:06Z" level=info msg="shim reaped" id=b7ae929025205a7ea9eeaec24bc0526bf642052edff6c7849bc5cc7b9afb9121
Jun 02 03:23:36 node-k8s-use1-prod-shared-001-kubecluster-3-0a0c5d69 dockerd[1731]: time="2020-06-02T03:23:36.433087181Z" level=info msg="Container b7ae929025205a7ea9eeaec24bc0526bf642052edff6c7849bc5cc7b9afb9121 failed to exit within 30 seconds of signal 15 - using the force"
`ps` cannot find any process for the container ID:
# ps auxww | grep b7ae92902520
root 21510 0.0 0.0 14852 1000 pts/0 S+ 03:44 0:00 grep --color=auto b7ae92902520
`docker ps` shows it as still running:
# docker ps -a | grep b7ae92902520
b7ae92902520 450280d6866c "/srv/envoy-discoverâŠ" 4 hours ago Up 4 hours k8s_xxxxxx
In this situation, dialing the docker sock for `docker inspect` gets stuck or hits a client-side timeout. This is presumably because `docker ps` serves cached data, whereas `docker inspect` dials through to the reaped shim to get the latest information from containerd.
# strace docker inspect b7ae92902520
......
newfstatat(AT_FDCWD, "/etc/.docker/config.json", {st_mode=S_IFREG|0644, st_size=124, ...}, 0) = 0
openat(AT_FDCWD, "/etc/.docker/config.json", O_RDONLY|O_CLOEXEC) = 3
epoll_ctl(4, EPOLL_CTL_ADD, 3, {EPOLLIN|EPOLLOUT|EPOLLRDHUP|EPOLLET, {u32=2124234496, u64=139889209065216}}) = -1 EPERM (Operation not permitted)
epoll_ctl(4, EPOLL_CTL_DEL, 3, 0xc420689884) = -1 EPERM (Operation not permitted)
read(3, "{\n \"credsStore\": \"ecr-login\","..., 512) = 124
close(3) = 0
futex(0xc420650948, FUTEX_WAKE, 1) = 1
socket(AF_UNIX, SOCK_STREAM|SOCK_CLOEXEC|SOCK_NONBLOCK, 0) = 3
setsockopt(3, SOL_SOCKET, SO_BROADCAST, [1], 4) = 0
connect(3, {sa_family=AF_UNIX, sun_path="/var/run/docker.sock"}, 23) = 0
epoll_ctl(4, EPOLL_CTL_ADD, 3, {EPOLLIN|EPOLLOUT|EPOLLRDHUP|EPOLLET, {u32=2124234496, u64=139889209065216}}) = 0
getsockname(3, {sa_family=AF_UNIX}, [112->2]) = 0
getpeername(3, {sa_family=AF_UNIX, sun_path="/var/run/docker.sock"}, [112->23]) = 0
futex(0xc420644548, FUTEX_WAKE, 1) = 1
read(3, 0xc4202c2000, 4096) = -1 EAGAIN (Resource temporarily unavailable)
write(3, "GET /_ping HTTP/1.1\r\nHost: docke"..., 83) = 83
futex(0xc420128548, FUTEX_WAKE, 1) = 1
futex(0x25390a8, FUTEX_WAIT, 0, NULL) = 0
futex(0x25390a8, FUTEX_WAIT, 0, NULL) = 0
futex(0x25390a8, FUTEX_WAIT, 0, NULL) = -1 EAGAIN (Resource temporarily unavailable)
futex(0x25390a8, FUTEX_WAIT, 0, NULL^C) = ? ERESTARTSYS (To be restarted if SA_RESTART is set)
strace: Process 13301 detached
Since the pod relist includes a docker inspect of every container of every pod, a timeout like this makes the entire PLEG relist take a very long time:
Jun 2 04:37:03 node-k8s-use1-prod-shared-001-kubecluster-3-0a0c5d69 kubelet[28263]: I0602 04:37:03.523247 28263 generic.go:189] GenericPLEG: Relisting
Jun 2 04:37:03 node-k8s-use1-prod-shared-001-kubecluster-3-0a0c5d69 kubelet[28263]: I0602 04:37:03.541890 28263 generic.go:153] GenericPLEG: f0118c7e-82cb-4825-a01b-3014fe500e1f/51f959aa0c4cbcbc318c3fad7f90e5e967537e0acc8c727b813df17c50493af3: non-existent -> exited
Jun 2 04:37:03 node-k8s-use1-prod-shared-001-kubecluster-3-0a0c5d69 kubelet[28263]: I0602 04:37:03.541905 28263 generic.go:153] GenericPLEG: f0118c7e-82cb-4825-a01b-3014fe500e1f/6c221cd2fb602fdf4ae5288f2ce80d010cf252a9144d676c8ce11cc61170a4cf: non-existent -> exited
Jun 2 04:37:03 node-k8s-use1-prod-shared-001-kubecluster-3-0a0c5d69 kubelet[28263]: I0602 04:37:03.541909 28263 generic.go:153] GenericPLEG: f0118c7e-82cb-4825-a01b-3014fe500e1f/47bb03e0b56d55841e0592f94635eb67d5432edb82424fc23894cdffd755e652: non-existent -> exited
Jun 2 04:37:03 node-k8s-use1-prod-shared-001-kubecluster-3-0a0c5d69 kubelet[28263]: I0602 04:37:03.541913 28263 generic.go:153] GenericPLEG: f0118c7e-82cb-4825-a01b-3014fe500e1f/ee861fac313fad5e0c69455a807e13c67c3c211032bc499ca44898cde7368960: non-existent -> exited
Jun 2 04:37:03 node-k8s-use1-prod-shared-001-kubecluster-3-0a0c5d69 kubelet[28263]: I0602 04:37:03.541917 28263 generic.go:153] GenericPLEG: f0118c7e-82cb-4825-a01b-3014fe500e1f/b7ae929025205a7ea9eeaec24bc0526bf642052edff6c7849bc5cc7b9afb9121: non-existent -> running
Jun 2 04:37:03 node-k8s-use1-prod-shared-001-kubecluster-3-0a0c5d69 kubelet[28263]: I0602 04:37:03.541922 28263 generic.go:153] GenericPLEG: f0118c7e-82cb-4825-a01b-3014fe500e1f/dd3f5c03f7309d0a3feb2f9e9f682b4c30ac4105a245f7f40b44afd7096193a0: non-existent -> exited
Jun 2 04:37:03 node-k8s-use1-prod-shared-001-kubecluster-3-0a0c5d69 kubelet[28263]: I0602 04:37:03.541925 28263 generic.go:153] GenericPLEG: f0118c7e-82cb-4825-a01b-3014fe500e1f/57960fe13240af78381785cc66c6946f78b8978985bc847a1f77f8af8aef0f54: non-existent -> exited
Jun 2 04:37:03 node-k8s-use1-prod-shared-001-kubecluster-3-0a0c5d69 kubelet[28263]: I0602 04:37:03.541929 28263 generic.go:153] GenericPLEG: f0118c7e-82cb-4825-a01b-3014fe500e1f/8ebaeed71f6ce99191a2d839a07d3573119472da221aeb4c7f646f25e6e9dd1b: non-existent -> exited
Jun 2 04:37:03 node-k8s-use1-prod-shared-001-kubecluster-3-0a0c5d69 kubelet[28263]: I0602 04:37:03.541932 28263 generic.go:153] GenericPLEG: f0118c7e-82cb-4825-a01b-3014fe500e1f/b04da653f52e0badc54cc839b485dcc7ec5e2f6a8df326d03bcf3e5c8a14a3e3: non-existent -> exited
Jun 2 04:37:03 node-k8s-use1-prod-shared-001-kubecluster-3-0a0c5d69 kubelet[28263]: I0602 04:37:03.541936 28263 generic.go:153] GenericPLEG: f0118c7e-82cb-4825-a01b-3014fe500e1f/a23912e38613fd455b26061c4ab002da294f18437b21bc1874e65a82ee1fba05: non-existent -> exited
Jun 2 04:37:03 node-k8s-use1-prod-shared-001-kubecluster-3-0a0c5d69 kubelet[28263]: I0602 04:37:03.541939 28263 generic.go:153] GenericPLEG: f0118c7e-82cb-4825-a01b-3014fe500e1f/7f928360f1ba8890194ed795cfa22c5930c0d3ce5f6f2bc6d0592f4a3c1b579f: non-existent -> exited
Jun 2 04:37:03 node-k8s-use1-prod-shared-001-kubecluster-3-0a0c5d69 kubelet[28263]: I0602 04:37:03.541943 28263 generic.go:153] GenericPLEG: f0118c7e-82cb-4825-a01b-3014fe500e1f/c3bdab1ed8896399263672ca45365e3d74c4ddc3958f82e3c7549fe12bc6c74b: non-existent -> exited
Jun 2 04:37:05 node-k8s-use1-prod-shared-001-kubecluster-3-0a0c5d69 kubelet[28263]: E0602 04:37:05.580912 28263 pod_workers.go:191] Error syncing pod f0118c7e-82cb-4825-a01b-3014fe500e1f ("optimus-pr-b-6bgc3_jenkins(f0118c7e-82cb-4825-a01b-3014fe500e1f)"), skipping: rpc error: code = DeadlineExceeded desc = context deadline exceeded
Jun 2 04:37:05 node-k8s-use1-prod-shared-001-kubecluster-3-0a0c5d69 kubelet[28263]: I0602 04:37:05.580983 28263 event.go:274] Event(v1.ObjectReference{Kind:"Pod", Namespace:"jenkins", Name:"optimus-pr-b-6bgc3", UID:"f0118c7e-82cb-4825-a01b-3014fe500e1f", APIVersion:"v1", ResourceVersion:"4311315533", FieldPath:""}): type: 'Warning' reason: 'FailedSync' error determining status: rpc error: code = DeadlineExceeded desc = context deadline exceeded
Jun 2 04:37:18 node-k8s-use1-prod-shared-001-kubecluster-3-0a0c5d69 kubelet[28263]: E0602 04:37:18.277091 28263 pod_workers.go:191] Error syncing pod f0118c7e-82cb-4825-a01b-3014fe500e1f ("optimus-pr-b-6bgc3_jenkins(f0118c7e-82cb-4825-a01b-3014fe500e1f)"), skipping: rpc error: code = DeadlineExceeded desc = context deadline exceeded
Jun 2 04:37:18 node-k8s-use1-prod-shared-001-kubecluster-3-0a0c5d69 kubelet[28263]: I0602 04:37:18.277187 28263 event.go:274] Event(v1.ObjectReference{Kind:"Pod", Namespace:"jenkins", Name:"optimus-pr-b-6bgc3", UID:"f0118c7e-82cb-4825-a01b-3014fe500e1f", APIVersion:"v1", ResourceVersion:"4311315533", FieldPath:""}): type: 'Warning' reason: 'FailedSync' error determining status: rpc error: code = DeadlineExceeded desc = context deadline exceeded
Jun 2 04:37:29 node-k8s-use1-prod-shared-001-kubecluster-3-0a0c5d69 kubelet[28263]: E0602 04:37:29.276942 28263 pod_workers.go:191] Error syncing pod f0118c7e-82cb-4825-a01b-3014fe500e1f ("optimus-pr-b-6bgc3_jenkins(f0118c7e-82cb-4825-a01b-3014fe500e1f)"), skipping: rpc error: code = DeadlineExceeded desc = context deadline exceeded
Jun 2 04:37:29 node-k8s-use1-prod-shared-001-kubecluster-3-0a0c5d69 kubelet[28263]: I0602 04:37:29.276994 28263 event.go:274] Event(v1.ObjectReference{Kind:"Pod", Namespace:"jenkins", Name:"optimus-pr-b-6bgc3", UID:"f0118c7e-82cb-4825-a01b-3014fe500e1f", APIVersion:"v1", ResourceVersion:"4311315533", FieldPath:""}): type: 'Warning' reason: 'FailedSync' error determining status: rpc error: code = DeadlineExceeded desc = context deadline exceeded
Jun 2 04:37:44 node-k8s-use1-prod-shared-001-kubecluster-3-0a0c5d69 kubelet[28263]: E0602 04:37:44.276919 28263 pod_workers.go:191] Error syncing pod f0118c7e-82cb-4825-a01b-3014fe500e1f ("optimus-pr-b-6bgc3_jenkins(f0118c7e-82cb-4825-a01b-3014fe500e1f)"), skipping: rpc error: code = DeadlineExceeded desc = context deadline exceeded
Jun 2 04:37:44 node-k8s-use1-prod-shared-001-kubecluster-3-0a0c5d69 kubelet[28263]: I0602 04:37:44.276964 28263 event.go:274] Event(v1.ObjectReference{Kind:"Pod", Namespace:"jenkins", Name:"optimus-pr-b-6bgc3", UID:"f0118c7e-82cb-4825-a01b-3014fe500e1f", APIVersion:"v1", ResourceVersion:"4311315533", FieldPath:""}): type: 'Warning' reason: 'FailedSync' error determining status: rpc error: code = DeadlineExceeded desc = context deadline exceeded
Jun 2 04:37:56 node-k8s-use1-prod-shared-001-kubecluster-3-0a0c5d69 kubelet[28263]: E0602 04:37:56.277039 28263 pod_workers.go:191] Error syncing pod f0118c7e-82cb-4825-a01b-3014fe500e1f ("optimus-pr-b-6bgc3_jenkins(f0118c7e-82cb-4825-a01b-3014fe500e1f)"), skipping: rpc error: code = DeadlineExceeded desc = context deadline exceeded
Jun 2 04:37:56 node-k8s-use1-prod-shared-001-kubecluster-3-0a0c5d69 kubelet[28263]: I0602 04:37:56.277116 28263 event.go:274] Event(v1.ObjectReference{Kind:"Pod", Namespace:"jenkins", Name:"optimus-pr-b-6bgc3", UID:"f0118c7e-82cb-4825-a01b-3014fe500e1f", APIVersion:"v1", ResourceVersion:"4311315533", FieldPath:""}): type: 'Warning' reason: 'FailedSync' error determining status: rpc error: code = DeadlineExceeded desc = context deadline exceeded
Jun 2 04:38:08 node-k8s-use1-prod-shared-001-kubecluster-3-0a0c5d69 kubelet[28263]: E0602 04:38:08.276838 28263 pod_workers.go:191] Error syncing pod f0118c7e-82cb-4825-a01b-3014fe500e1f ("optimus-pr-b-6bgc3_jenkins(f0118c7e-82cb-4825-a01b-3014fe500e1f)"), skipping: rpc error: code = DeadlineExceeded desc = context deadline exceeded
Jun 2 04:38:08 node-k8s-use1-prod-shared-001-kubecluster-3-0a0c5d69 kubelet[28263]: I0602 04:38:08.276913 28263 event.go:274] Event(v1.ObjectReference{Kind:"Pod", Namespace:"jenkins", Name:"optimus-pr-b-6bgc3", UID:"f0118c7e-82cb-4825-a01b-3014fe500e1f", APIVersion:"v1", ResourceVersion:"4311315533", FieldPath:""}): type: 'Warning' reason: 'FailedSync' error determining status: rpc error: code = DeadlineExceeded desc = context deadline exceeded
Jun 2 04:38:22 node-k8s-use1-prod-shared-001-kubecluster-3-0a0c5d69 kubelet[28263]: E0602 04:38:22.277107 28263 pod_workers.go:191] Error syncing pod f0118c7e-82cb-4825-a01b-3014fe500e1f ("optimus-pr-b-6bgc3_jenkins(f0118c7e-82cb-4825-a01b-3014fe500e1f)"), skipping: rpc error: code = DeadlineExceeded desc = context deadline exceeded
Jun 2 04:38:22 node-k8s-use1-prod-shared-001-kubecluster-3-0a0c5d69 kubelet[28263]: I0602 04:38:22.277151 28263 event.go:274] Event(v1.ObjectReference{Kind:"Pod", Namespace:"jenkins", Name:"optimus-pr-b-6bgc3", UID:"f0118c7e-82cb-4825-a01b-3014fe500e1f", APIVersion:"v1", ResourceVersion:"4311315533", FieldPath:""}): type: 'Warning' reason: 'FailedSync' error determining status: rpc error: code = DeadlineExceeded desc = context deadline exceeded
Jun 2 04:38:37 node-k8s-use1-prod-shared-001-kubecluster-3-0a0c5d69 kubelet[28263]: E0602 04:38:37.277123 28263 pod_workers.go:191] Error syncing pod f0118c7e-82cb-4825-a01b-3014fe500e1f ("optimus-pr-b-6bgc3_jenkins(f0118c7e-82cb-4825-a01b-3014fe500e1f)"), skipping: rpc error: code = DeadlineExceeded desc = context deadline exceeded
Jun 2 04:38:37 node-k8s-use1-prod-shared-001-kubecluster-3-0a0c5d69 kubelet[28263]: I0602 04:38:37.277189 28263 event.go:274] Event(v1.ObjectReference{Kind:"Pod", Namespace:"jenkins", Name:"optimus-pr-b-6bgc3", UID:"f0118c7e-82cb-4825-a01b-3014fe500e1f", APIVersion:"v1", ResourceVersion:"4311315533", FieldPath:""}): type: 'Warning' reason: 'FailedSync' error determining status: rpc error: code = DeadlineExceeded desc = context deadline exceeded
Jun 2 04:38:51 node-k8s-use1-prod-shared-001-kubecluster-3-0a0c5d69 kubelet[28263]: E0602 04:38:51.277059 28263 pod_workers.go:191] Error syncing pod f0118c7e-82cb-4825-a01b-3014fe500e1f ("optimus-pr-b-6bgc3_jenkins(f0118c7e-82cb-4825-a01b-3014fe500e1f)"), skipping: rpc error: code = DeadlineExceeded desc = context deadline exceeded
Jun 2 04:38:51 node-k8s-use1-prod-shared-001-kubecluster-3-0a0c5d69 kubelet[28263]: I0602 04:38:51.277101 28263 event.go:274] Event(v1.ObjectReference{Kind:"Pod", Namespace:"jenkins", Name:"optimus-pr-b-6bgc3", UID:"f0118c7e-82cb-4825-a01b-3014fe500e1f", APIVersion:"v1", ResourceVersion:"4311315533", FieldPath:""}): type: 'Warning' reason: 'FailedSync' error determining status: rpc error: code = DeadlineExceeded desc = context deadline exceeded
Jun 2 04:39:02 node-k8s-use1-prod-shared-001-kubecluster-3-0a0c5d69 kubelet[28263]: E0602 04:39:02.276836 28263 pod_workers.go:191] Error syncing pod f0118c7e-82cb-4825-a01b-3014fe500e1f ("optimus-pr-b-6bgc3_jenkins(f0118c7e-82cb-4825-a01b-3014fe500e1f)"), skipping: rpc error: code = DeadlineExceeded desc = context deadline exceeded
Jun 2 04:39:02 node-k8s-use1-prod-shared-001-kubecluster-3-0a0c5d69 kubelet[28263]: I0602 04:39:02.276908 28263 event.go:274] Event(v1.ObjectReference{Kind:"Pod", Namespace:"jenkins", Name:"optimus-pr-b-6bgc3", UID:"f0118c7e-82cb-4825-a01b-3014fe500e1f", APIVersion:"v1", ResourceVersion:"4311315533", FieldPath:""}): type: 'Warning' reason: 'FailedSync' error determining status: rpc error: code = DeadlineExceeded desc = context deadline exceeded
Jun 2 04:39:03 node-k8s-use1-prod-shared-001-kubecluster-3-0a0c5d69 kubelet[28263]: E0602 04:39:03.554207 28263 remote_runtime.go:295] ContainerStatus "b7ae929025205a7ea9eeaec24bc0526bf642052edff6c7849bc5cc7b9afb9121" from runtime service failed: rpc error: code = DeadlineExceeded desc = context deadline exceeded
Jun 2 04:39:03 node-k8s-use1-prod-shared-001-kubecluster-3-0a0c5d69 kubelet[28263]: I0602 04:39:03.554252 28263 kuberuntime_container.go:403] ContainerStatus for b7ae929025205a7ea9eeaec24bc0526bf642052edff6c7849bc5cc7b9afb9121 error: rpc error: code = DeadlineExceeded desc = context deadline exceeded
Jun 2 04:39:03 node-k8s-use1-prod-shared-001-kubecluster-3-0a0c5d69 kubelet[28263]: E0602 04:39:03.554265 28263 kuberuntime_manager.go:1122] getPodContainerStatuses for pod "optimus-pr-b-6bgc3_jenkins(f0118c7e-82cb-4825-a01b-3014fe500e1f)" failed: rpc error: code = DeadlineExceeded desc = context deadline exceeded
Jun 2 04:39:03 node-k8s-use1-prod-shared-001-kubecluster-3-0a0c5d69 kubelet[28263]: I0602 04:39:03.554272 28263 generic.go:397] PLEG: Write status for optimus-pr-b-6bgc3/jenkins: (*container.PodStatus)(nil) (err: rpc error: code = DeadlineExceeded desc = context deadline exceeded)
Jun 2 04:39:03 node-k8s-use1-prod-shared-001-kubecluster-3-0a0c5d69 kubelet[28263]: I0602 04:39:03.554285 28263 generic.go:252] PLEG: Ignoring events for pod optimus-pr-b-6bgc3/jenkins: rpc error: code = DeadlineExceeded desc = context deadline exceeded
Jun 2 04:39:03 node-k8s-use1-prod-shared-001-kubecluster-3-0a0c5d69 kubelet[28263]: I0602 04:39:03.554294 28263 generic.go:284] GenericPLEG: Reinspecting pods that previously failed inspection
Jun 2 04:39:17 node-k8s-use1-prod-shared-001-kubecluster-3-0a0c5d69 kubelet[28263]: E0602 04:39:17.277086 28263 pod_workers.go:191] Error syncing pod f0118c7e-82cb-4825-a01b-3014fe500e1f ("optimus-pr-b-6bgc3_jenkins(f0118c7e-82cb-4825-a01b-3014fe500e1f)"), skipping: rpc error: code = DeadlineExceeded desc = context deadline exceeded
Jun 2 04:39:17 node-k8s-use1-prod-shared-001-kubecluster-3-0a0c5d69 kubelet[28263]: I0602 04:39:17.277137 28263 event.go:274] Event(v1.ObjectReference{Kind:"Pod", Namespace:"jenkins", Name:"optimus-pr-b-6bgc3", UID:"f0118c7e-82cb-4825-a01b-3014fe500e1f", APIVersion:"v1", ResourceVersion:"4311315533", FieldPath:""}): type: 'Warning' reason: 'FailedSync' error determining status: rpc error: code = DeadlineExceeded desc = context deadline exceeded
Jun 2 04:39:28 node-k8s-use1-prod-shared-001-kubecluster-3-0a0c5d69 kubelet[28263]: E0602 04:39:28.276905 28263 pod_workers.go:191] Error syncing pod f0118c7e-82cb-4825-a01b-3014fe500e1f ("optimus-pr-b-6bgc3_jenkins(f0118c7e-82cb-4825-a01b-3014fe500e1f)"), skipping: rpc error: code = DeadlineExceeded desc = context deadline exceeded
Jun 2 04:39:28 node-k8s-use1-prod-shared-001-kubecluster-3-0a0c5d69 kubelet[28263]: I0602 04:39:28.276976 28263 event.go:274] Event(v1.ObjectReference{Kind:"Pod", Namespace:"jenkins", Name:"optimus-pr-b-6bgc3", UID:"f0118c7e-82cb-4825-a01b-3014fe500e1f", APIVersion:"v1", ResourceVersion:"4311315533", FieldPath:""}): type: 'Warning' reason: 'FailedSync' error determining status: rpc error: code = DeadlineExceeded desc = context deadline exceeded
Jun 2 04:39:40 node-k8s-use1-prod-shared-001-kubecluster-3-0a0c5d69 kubelet[28263]: E0602 04:39:40.276815 28263 pod_workers.go:191] Error syncing pod f0118c7e-82cb-4825-a01b-3014fe500e1f ("optimus-pr-b-6bgc3_jenkins(f0118c7e-82cb-4825-a01b-3014fe500e1f)"), skipping: rpc error: code = DeadlineExceeded desc = context deadline exceeded
Jun 2 04:39:40 node-k8s-use1-prod-shared-001-kubecluster-3-0a0c5d69 kubelet[28263]: I0602 04:39:40.276858 28263 event.go:274] Event(v1.ObjectReference{Kind:"Pod", Namespace:"jenkins", Name:"optimus-pr-b-6bgc3", UID:"f0118c7e-82cb-4825-a01b-3014fe500e1f", APIVersion:"v1", ResourceVersion:"4311315533", FieldPath:""}): type: 'Warning' reason: 'FailedSync' error determining status: rpc error: code = DeadlineExceeded desc = context deadline exceeded
Jun 2 04:39:51 node-k8s-use1-prod-shared-001-kubecluster-3-0a0c5d69 kubelet[28263]: E0602 04:39:51.276950 28263 pod_workers.go:191] Error syncing pod f0118c7e-82cb-4825-a01b-3014fe500e1f ("optimus-pr-b-6bgc3_jenkins(f0118c7e-82cb-4825-a01b-3014fe500e1f)"), skipping: rpc error: code = DeadlineExceeded desc = context deadline exceeded
Jun 2 04:39:51 node-k8s-use1-prod-shared-001-kubecluster-3-0a0c5d69 kubelet[28263]: I0602 04:39:51.277015 28263 event.go:274] Event(v1.ObjectReference{Kind:"Pod", Namespace:"jenkins", Name:"optimus-pr-b-6bgc3", UID:"f0118c7e-82cb-4825-a01b-3014fe500e1f", APIVersion:"v1", ResourceVersion:"4311315533", FieldPath:""}): type: 'Warning' reason: 'FailedSync' error determining status: rpc error: code = DeadlineExceeded desc = context deadline exceeded
Jun 2 04:40:04 node-k8s-use1-prod-shared-001-kubecluster-3-0a0c5d69 kubelet[28263]: E0602 04:40:04.276869 28263 pod_workers.go:191] Error syncing pod f0118c7e-82cb-4825-a01b-3014fe500e1f ("optimus-pr-b-6bgc3_jenkins(f0118c7e-82cb-4825-a01b-3014fe500e1f)"), skipping: rpc error: code = DeadlineExceeded desc = context deadline exceeded
Jun 2 04:40:04 node-k8s-use1-prod-shared-001-kubecluster-3-0a0c5d69 kubelet[28263]: I0602 04:40:04.276939 28263 event.go:274] Event(v1.ObjectReference{Kind:"Pod", Namespace:"jenkins", Name:"optimus-pr-b-6bgc3", UID:"f0118c7e-82cb-4825-a01b-3014fe500e1f", APIVersion:"v1", ResourceVersion:"4311315533", FieldPath:""}): type: 'Warning' reason: 'FailedSync' error determining status: rpc error: code = DeadlineExceeded desc = context deadline exceeded
Jun 2 04:41:03 node-k8s-use1-prod-shared-001-kubecluster-3-0a0c5d69 kubelet[28263]: E0602 04:41:03.566494 28263 remote_runtime.go:295] ContainerStatus "b7ae929025205a7ea9eeaec24bc0526bf642052edff6c7849bc5cc7b9afb9121" from runtime service failed: rpc error: code = DeadlineExceeded desc = context deadline exceeded
Jun 2 04:41:03 node-k8s-use1-prod-shared-001-kubecluster-3-0a0c5d69 kubelet[28263]: I0602 04:41:03.566543 28263 kuberuntime_container.go:403] ContainerStatus for b7ae929025205a7ea9eeaec24bc0526bf642052edff6c7849bc5cc7b9afb9121 error: rpc error: code = DeadlineExceeded desc = context deadline exceeded
Jun 2 04:41:03 node-k8s-use1-prod-shared-001-kubecluster-3-0a0c5d69 kubelet[28263]: E0602 04:41:03.566554 28263 kuberuntime_manager.go:1122] getPodContainerStatuses for pod "optimus-pr-b-6bgc3_jenkins(f0118c7e-82cb-4825-a01b-3014fe500e1f)" failed: rpc error: code = DeadlineExceeded desc = context deadline exceeded
Jun 2 04:41:03 node-k8s-use1-prod-shared-001-kubecluster-3-0a0c5d69 kubelet[28263]: I0602 04:41:03.566561 28263 generic.go:397] PLEG: Write status for optimus-pr-b-6bgc3/jenkins: (*container.PodStatus)(nil) (err: rpc error: code = DeadlineExceeded desc = context deadline exceeded)
Jun 2 04:41:03 node-k8s-use1-prod-shared-001-kubecluster-3-0a0c5d69 kubelet[28263]: I0602 04:41:03.566575 28263 generic.go:288] PLEG: pod optimus-pr-b-6bgc3/jenkins failed reinspection: rpc error: code = DeadlineExceeded desc = context deadline exceeded
Jun 2 04:41:03 node-k8s-use1-prod-shared-001-kubecluster-3-0a0c5d69 kubelet[28263]: I0602 04:41:03.566604 28263 generic.go:189] GenericPLEG: Relisting
The current PLEG healthy threshold is 3 minutes, so whenever a PLEG relist takes more than 3 minutes, which is quite easy to hit in this situation, PLEG is reported as unhealthy.
We also had a chance to see whether such a state gets fixed without simply running `docker rm` on the container; for example, docker released its own lock after being stuck for about 40 minutes:
[root@node-k8s-use1-prod-shared-001-kubecluster-3-0a0c5d69:/home/hzhang]# journalctl -u docker | grep b7ae92902520
Jun 01 23:39:03 node-k8s-use1-prod-shared-001-kubecluster-3-0a0c5d69 dockerd[1731]: time="2020-06-01T23:39:03Z" level=info msg="shim docker-containerd-shim started" address="/containerd-shim/moby/b7ae929025205a7ea9eeaec24bc0526bf642052edff6c7849bc5cc7b9afb9121/shim.sock" debug=false pid=11377
Jun 02 03:23:06 node-k8s-use1-prod-shared-001-kubecluster-3-0a0c5d69 dockerd[1731]: time="2020-06-02T03:23:06Z" level=info msg="shim reaped" id=b7ae929025205a7ea9eeaec24bc0526bf642052edff6c7849bc5cc7b9afb9121
Jun 02 03:23:36 node-k8s-use1-prod-shared-001-kubecluster-3-0a0c5d69 dockerd[1731]: time="2020-06-02T03:23:36.433087181Z" level=info msg="Container b7ae929025205a7ea9eeaec24bc0526bf642052edff6c7849bc5cc7b9afb9121 failed to exit within 30 seconds of signal 15 - using the force"
Jun 02 04:41:45 node-k8s-use1-prod-shared-001-kubecluster-3-0a0c5d69 dockerd[1731]: time="2020-06-02T04:41:45.435460391Z" level=warning msg="Container b7ae929025205a7ea9eeaec24bc0526bf642052edff6c7849bc5cc7b9afb9121 is not running"
Jun 02 04:41:45 node-k8s-use1-prod-shared-001-kubecluster-3-0a0c5d69 dockerd[1731]: time="2020-06-02T04:41:45.435684282Z" level=error msg="Handler for GET /containers/b7ae929025205a7ea9eeaec24bc0526bf642052edff6c7849bc5cc7b9afb9121/json returned error: write unix /var/run/docker.sock->@: write: broken pipe"
Jun 02 04:41:45 node-k8s-use1-prod-shared-001-kubecluster-3-0a0c5d69 dockerd[1731]: time="2020-06-02T04:41:45.435955786Z" level=error msg="Handler for GET /containers/b7ae929025205a7ea9eeaec24bc0526bf642052edff6c7849bc5cc7b9afb9121/json returned error: write unix /var/run/docker.sock->@: write: broken pipe"
Jun 02 04:41:45 node-k8s-use1-prod-shared-001-kubecluster-3-0a0c5d69 dockerd[1731]: time="2020-06-02T04:41:45.436078347Z" level=error msg="Handler for GET /containers/b7ae929025205a7ea9eeaec24bc0526bf642052edff6c7849bc5cc7b9afb9121/json returned error: write unix /var/run/docker.sock->@: write: broken pipe"
Jun 02 04:41:45 node-k8s-use1-prod-shared-001-kubecluster-3-0a0c5d69 dockerd[1731]: time="2020-06-02T04:41:45.436341875Z" level=error msg="Handler for GET /containers/b7ae929025205a7ea9eeaec24bc0526bf642052edff6c7849bc5cc7b9afb9121/json returned error: write unix /var/run/docker.sock->@: write: broken pipe"
Jun 02 04:41:45 node-k8s-use1-prod-shared-001-kubecluster-3-0a0c5d69 dockerd[1731]: time="2020-06-02T04:41:45.436570634Z" level=error msg="Handler for GET /containers/b7ae929025205a7ea9eeaec24bc0526bf642052edff6c7849bc5cc7b9afb9121/json returned error: write unix /var/run/docker.sock->@: write: broken pipe"
Jun 02 04:41:45 node-k8s-use1-prod-shared-001-kubecluster-3-0a0c5d69 dockerd[1731]: time="2020-06-02T04:41:45.436770587Z" level=error msg="Handler for GET /containers/b7ae929025205a7ea9eeaec24bc0526bf642052edff6c7849bc5cc7b9afb9121/json returned error: write unix /var/run/docker.sock->@: write: broken pipe"
Jun 02 04:41:45 node-k8s-use1-prod-shared-001-kubecluster-3-0a0c5d69 dockerd[1731]: time="2020-06-02T04:41:45.436905470Z" level=error msg="Handler for GET /containers/b7ae929025205a7ea9eeaec24bc0526bf642052edff6c7849bc5cc7b9afb9121/json returned error: write unix /var/run/docker.sock->@: write: broken pipe"
......
There are various issues out there about similar symptoms:
https://github.com/docker/for-linux/issues/397
https://github.com/docker/for-linux/issues/543
https://github.com/moby/moby/issues/41054
However, it is claimed to still show up in docker 19.03, i.e. https://github.com/docker/for-linux/issues/397#issuecomment-515425324
Our fix is a watchdog that compares `docker ps` with `ps ax`, scrapes out containers that have no shim process, force-kills them or releases their lock, and uses `docker rm` to remove the containers.
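A minimal sketch of that kind of watchdog (an assumption-laden illustration, not the script from this thread): it relies on the shim's command line containing the full container ID, as in the /containerd-shim/moby/<id>/shim.sock paths shown in the logs above.
#!/usr/bin/env bash
# Hypothetical watchdog sketch: flag containers that `docker ps` still reports
# as running but that no longer have a containerd-shim process, then remove them.
set -u
for id in $(docker ps -q --no-trunc); do
  if ! pgrep -f "$id" > /dev/null; then
    echo "container $id is listed by docker ps but has no shim process"
    docker rm -f "$id" || echo "docker rm -f $id failed (daemon may be wedged)"
  fi
done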
Following up on the investigation above: judging from the thread dump, while Docker is hanging it is waiting on containerd, so the problem may be on the containerd side (see the thread dump below). So what we did in our production environment was to check for inconsistencies between `ps` and `docker ps` and pick out the affected containers. In our case, every container whose operations were stuck had already had its shim reaped. /cc @jmf0526 @haosdent @liucimin @yujuhong @thockin
Here is the thread from the dump that appears relevant for further investigation:
goroutine 1707386 [select, 22 minutes]:
--
github.com/docker/docker/vendor/google.golang.org/grpc/transport.(*Stream).waitOnHeader(0xc420609680, 0x10, 0xc420f60fd8)
/go/src/github.com/docker/docker/vendor/google.golang.org/grpc/transport/transport.go:222 +0x101
github.com/docker/docker/vendor/google.golang.org/grpc/transport.(*Stream).RecvCompress(0xc420609680, 0x555ab63e0730, 0xc420f61098)
/go/src/github.com/docker/docker/vendor/google.golang.org/grpc/transport/transport.go:233 +0x2d
github.com/docker/docker/vendor/google.golang.org/grpc.(*csAttempt).recvMsg(0xc4267ef1e0, 0x555ab624f000, 0xc4288fd410, 0x0, 0x0)
/go/src/github.com/docker/docker/vendor/google.golang.org/grpc/stream.go:515 +0x63b
github.com/docker/docker/vendor/google.golang.org/grpc.(*clientStream).RecvMsg(0xc4204fa800, 0x555ab624f000, 0xc4288fd410, 0x0, 0x0)
/go/src/github.com/docker/docker/vendor/google.golang.org/grpc/stream.go:395 +0x45
github.com/docker/docker/vendor/google.golang.org/grpc.invoke(0x555ab6415260, 0xc4288fd4a0, 0x555ab581d40c, 0x2a, 0x555ab6249c00, 0xc428c04450, 0x555ab624f000, 0xc4288fd410, 0xc4202d4600, 0xc4202cdc40, ...)
/go/src/github.com/docker/docker/vendor/google.golang.org/grpc/call.go:83 +0x185
github.com/docker/docker/vendor/github.com/containerd/containerd.namespaceInterceptor.unary(0x555ab57c9d91, 0x4, 0x555ab64151e0, 0xc420128040, 0x555ab581d40c, 0x2a, 0x555ab6249c00, 0xc428c04450, 0x555ab624f000, 0xc4288fd410, ...)
/go/src/github.com/docker/docker/vendor/github.com/containerd/containerd/grpc.go:35 +0xf6
github.com/docker/docker/vendor/github.com/containerd/containerd.(namespaceInterceptor).(github.com/docker/docker/vendor/github.com/containerd/containerd.unary)-fm(0x555ab64151e0, 0xc420128040, 0x555ab581d40c, 0x2a, 0x555ab6249c00, 0xc428c04450, 0x555ab624f000, 0xc4288fd410, 0xc4202d4600, 0x555ab63e07a0, ...)
/go/src/github.com/docker/docker/vendor/github.com/containerd/containerd/grpc.go:51 +0xf6
github.com/docker/docker/vendor/google.golang.org/grpc.(*ClientConn).Invoke(0xc4202d4600, 0x555ab64151e0, 0xc420128040, 0x555ab581d40c, 0x2a, 0x555ab6249c00, 0xc428c04450, 0x555ab624f000, 0xc4288fd410, 0x0, ...)
/go/src/github.com/docker/docker/vendor/google.golang.org/grpc/call.go:35 +0x10b
github.com/docker/docker/vendor/google.golang.org/grpc.Invoke(0x555ab64151e0, 0xc420128040, 0x555ab581d40c, 0x2a, 0x555ab6249c00, 0xc428c04450, 0x555ab624f000, 0xc4288fd410, 0xc4202d4600, 0x0, ...)
/go/src/github.com/docker/docker/vendor/google.golang.org/grpc/call.go:60 +0xc3
github.com/docker/docker/vendor/github.com/containerd/containerd/api/services/tasks/v1.(*tasksClient).Delete(0xc422c96128, 0x555ab64151e0, 0xc420128040, 0xc428c04450, 0x0, 0x0, 0x0, 0xed66bcd50, 0x0, 0x0)
/go/src/github.com/docker/docker/vendor/github.com/containerd/containerd/api/services/tasks/v1/tasks.pb.go:430 +0xd4
github.com/docker/docker/vendor/github.com/containerd/containerd.(*task).Delete(0xc42463e8d0, 0x555ab64151e0, 0xc420128040, 0x0, 0x0, 0x0, 0xc42463e8d0, 0x0, 0x0)
/go/src/github.com/docker/docker/vendor/github.com/containerd/containerd/task.go:292 +0x24a
github.com/docker/docker/libcontainerd.(*client).DeleteTask(0xc4203d4e00, 0x555ab64151e0, 0xc420128040, 0xc421763740, 0x40, 0x0, 0x20, 0x20, 0x555ab5fc6920, 0x555ab4269945, ...)
/go/src/github.com/docker/docker/libcontainerd/client_daemon.go:504 +0xe2
github.com/docker/docker/daemon.(*Daemon).ProcessEvent(0xc4202c61c0, 0xc4216469c0, 0x40, 0x555ab57c9b55, 0x4, 0xc4216469c0, 0x40, 0xc421646a80, 0x40, 0x8f0000069c, ...)
/go/src/github.com/docker/docker/daemon/monitor.go:54 +0x23c
github.com/docker/docker/libcontainerd.(*client).processEvent.func1()
/go/src/github.com/docker/docker/libcontainerd/client_daemon.go:694 +0x130
github.com/docker/docker/libcontainerd.(*queue).append.func1(0xc421646900, 0x0, 0xc42a24e380, 0xc420300420, 0xc4203d4e58, 0xc4216469c0, 0x40)
/go/src/github.com/docker/docker/libcontainerd/queue.go:26 +0x3a
created by github.com/docker/docker/libcontainerd.(*queue).append
/go/src/github.com/docker/docker/libcontainerd/queue.go:22 +0x196
We're hitting a very similar issue (e.g. `docker ps` works, but `docker inspect` gets stuck). We're running kubernetes v1.17.6 with docker 19.3.8 on Fedora CoreOS.
We also identify the problem by running `docker inspect` over every container listed by `docker ps` and seeing which one hangs:
docker ps -a | tail -n +2 | tr -s " " | cut -d " " -f1 | xargs -Iarg sh -c 'echo arg; docker inspect arg > /dev/null'
In our case we noticed that the affected containers were stuck in `runc init`. We had trouble attaching to or tracing the main thread of `runc init`; signals did not seem to be delivered. As far as we can tell, the process is stuck in the kernel and never transitions back to user space. I'm not really a Linux kernel debugging expert, but as far as I can tell this looks like a kernel problem related to mount cleanup. Here is an example stack trace of what the `runc init` process is running in kernel land:
[<0>] kmem_cache_alloc+0x162/0x1c0
[<0>] kmem_zone_alloc+0x61/0xe0 [xfs]
[<0>] xfs_buf_item_init+0x31/0x160 [xfs]
[<0>] _xfs_trans_bjoin+0x1e/0x50 [xfs]
[<0>] xfs_trans_read_buf_map+0x104/0x340 [xfs]
[<0>] xfs_imap_to_bp+0x67/0xd0 [xfs]
[<0>] xfs_iunlink_remove+0x16b/0x430 [xfs]
[<0>] xfs_ifree+0x42/0x140 [xfs]
[<0>] xfs_inactive_ifree+0x9e/0x1c0 [xfs]
[<0>] xfs_inactive+0x9e/0x140 [xfs]
[<0>] xfs_fs_destroy_inode+0xa8/0x1c0 [xfs]
[<0>] __dentry_kill+0xd5/0x170
[<0>] dentry_kill+0x4d/0x190
[<0>] dput.part.31+0xcb/0x110
[<0>] ovl_destroy_inode+0x15/0x60 [overlay]
[<0>] __dentry_kill+0xd5/0x170
[<0>] shrink_dentry_list+0x94/0x1b0
[<0>] shrink_dcache_parent+0x88/0x90
[<0>] do_one_tree+0xe/0x40
[<0>] shrink_dcache_for_umount+0x28/0x80
[<0>] generic_shutdown_super+0x1a/0x100
[<0>] kill_anon_super+0x14/0x30
[<0>] deactivate_locked_super+0x34/0x70
[<0>] cleanup_mnt+0x3b/0x70
[<0>] task_work_run+0x8a/0xb0
[<0>] exit_to_usermode_loop+0xeb/0xf0
[<0>] do_syscall_64+0x182/0x1b0
[<0>] entry_SYSCALL_64_after_hwframe+0x65/0xca
[<0>] 0xffffffffffffffff
Restarting Docker was enough to remove the container from Docker and resolve the PLEG-unhealthy problem. Note that the stuck `runc init` process is not removed, though.
Edit: versions, for those interested:
Docker 19.03.8
runc 1.0.0-rc10
Linux: 4.18.0-147.el8.x86_64
CentOS: 8.1.1911
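For reference, a hedged sketch of how a kernel stack like the one above can be collected, assuming the stuck process is matched by pgrep and /proc/<pid>/stack is readable (run as root):
# Find `runc init` processes stuck in uninterruptible sleep (D state)
# and dump their kernel stacks.
for pid in $(pgrep -f 'runc init'); do
  state=$(ps -o stat= -p "$pid" | cut -c1)
  if [ "$state" = "D" ]; then
    echo "== pid $pid =="
    cat "/proc/$pid/stack"
  fi
done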
Has this issue been resolved?
We hit PLEG problems in our cluster and found this open issue.
Is there any workaround for it?
We also hit the PLEG problem, on a cluster that had been running for a few days.
Setup
EKS cluster on K8s v1.15.11-eks-af3caf
Docker version 18.09.9-ce
Instance type is m5ad.4xlarge
Issue
Jul 08 04:12:36 ip-56-0-1-191.us-west-2.compute.internal kubelet[5354]: I0708 04:12:36.051162 5354 setters.go:533] Node became not ready: {Type:Ready Status:False LastHeartbeatTime:2020-07-08 04:12:36.051127368 +0000 UTC m=+4279967.056220983 LastTransitionTime:2020-07-08 04:12:36.051127368 +0000 UTC m=+4279967.056220983 Reason:KubeletNotReady Message:PLEG is not healthy
Recovery
Restarting the kubelet recovered the node.
Is there a fix for this? Does upgrading the Docker version help?
This is probably a problem with the docker containers themselves; for example, when there are many zombie processes inside a container, `docker ps` / `inspect` become very slow.
Running `systemctl restart docker` on all workers fixed the issue for us.
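A quick, non-authoritative way to test the zombie-process hypothesis on a node is to count defunct processes and look at their parent PIDs to see which containers they belong to, for example:
# How many zombie (Z state) processes exist on the node:
ps -eo stat,ppid,pid,comm | awk '$1 ~ /^Z/' | wc -l
# A sample of them, with their parent PIDs:
ps -eo stat,ppid,pid,comm | awk '$1 ~ /^Z/' | head -20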
@jetersen do you have "live-restore" enabled in Docker?
By default, restarting Docker restarts all containers, which is a pretty big hammer to fix this with.
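As a sketch, one way to check whether a daemon already has this enabled; if it reports false, setting "live-restore": true in /etc/docker/daemon.json lets containers keep running across a dockerd restart:
# Prints true if live-restore is enabled on this daemon.
docker info --format '{{.LiveRestoreEnabled}}'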
@bboreham not as big a hammer as destroying and recreating the cluster ð
We're seeing this issue with Kubernetes 1.15.3, 1.16.3, and 1.17.9, on docker versions 18.6.3 (Container Linux) and 19.3.12 (Flatcar Linux).
Each node has roughly 50 pods.
> (quoting the earlier comment above about containers stuck in `runc init`, its kernel stack trace, and the version list)
Has this issue been resolved? In which version?
Nope.
Facing the issue again on EKS with kubernetes version = v1.16.8-eks-e16311 and docker://19.3.6.
Restarting docker and the kubelet brought the node back.
@mak-454 We hit this issue on EKS today as well. Could you share the region/AZ your nodes were running in, along with the time of the issue? I'd like to find out whether there was an underlying infrastructure problem.
@JacobHenner my nodes were running in the eu-central-1 region.
We're seeing this issue on EKS (ca-central-1) with Kubernetes version "1.15.12" and docker version "19.03.6-ce".
After restarting docker / kubelet, the following line shows up in the node events:
Warning SystemOOM 14s (x3 over 14s) kubelet, ip-10-1-2-3.ca-central-1.compute.internal System OOM encountered
Most helpful comment
The PLEG health check does very little. In every relist it calls `docker ps` to detect container state changes, and then calls `docker ps` and `inspect` to get the details of those containers. After each relist finishes, it updates a timestamp. If the timestamp hasn't been updated for a while (i.e. 3 minutes), the health check fails.
Unless the node is loaded with a huge number of pods so that PLEG cannot finish all of this within 3 minutes (which shouldn't happen), the most likely cause is that Docker is slow. You may not observe that in an occasional `docker ps` call, but that doesn't mean it isn't there. Hiding the unhealthy status would conceal more problems from users and could cause even more issues, since the kubelet would be confused by not reacting to changes in a timely manner.
Suggestions on how to make this more debuggable are welcome...
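As a rough way to judge whether relisting could ever finish inside that 3-minute window on a given node, one can approximate a relist by hand. This is only a sketch: the kubelet actually goes through dockershim/CRI rather than the docker CLI, but a slow pass here usually means a slow relist too.
# Time a manual "relist": list every container and inspect each one.
time {
  for id in $(docker ps -aq); do
    timeout 60 docker inspect "$id" > /dev/null || echo "inspect slow or hung: $id"
  done
}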