This form is for bug reports and feature requests ONLY! If you need help, please check [Stack Overflow](https://stackoverflow.com/questions/tagged/kubernetes) and the [troubleshooting guide](https://kubernetes.io/docs/tasks/debug-application-cluster/troubleshooting/).
Is this a BUG REPORT or FEATURE REQUEST?:
/kind bug
What happened:
Pods stay Terminating for a long time
What you expected to happen:
Pods get terminated
How to reproduce it (as minimally and precisely as possible):
Anything else we need to know?:
Kubernetes pods have been stuck as Terminating for hours after being deleted.
Logs:
kubectl describe pod my-pod-3854038851-r1hc3
Name: my-pod-3854038851-r1hc3
Namespace: container-4-production
Node: ip-172-16-30-204.ec2.internal/172.16.30.204
Start Time: Fri, 01 Sep 2017 11:58:24 -0300
Labels: pod-template-hash=3854038851
release=stable
run=my-pod-3
Annotations: kubernetes.io/created-by={"kind":"SerializedReference","apiVersion":"v1","reference":{"kind":"ReplicaSet","namespace":"container-4-production","name":"my-pod-3-3854038851","uid":"5816c...
prometheus.io/scrape=true
Status: Terminating (expires Fri, 01 Sep 2017 14:17:53 -0300)
Termination Grace Period: 30s
IP:
Created By: ReplicaSet/my-pod-3-3854038851
Controlled By: ReplicaSet/my-pod-3-3854038851
Init Containers:
ensure-network:
Container ID: docker://guid-1
Image: XXXXX
Image ID: docker-pullable://repo/ensure-network@sha256:guid-0
Port: <none>
State: Terminated
Exit Code: 0
Started: Mon, 01 Jan 0001 00:00:00 +0000
Finished: Mon, 01 Jan 0001 00:00:00 +0000
Ready: True
Restart Count: 0
Environment: <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from default-token-xxxxx (ro)
Containers:
container-1:
Container ID: docker://container-id-guid-1
Image: XXXXX
Image ID: docker-pullable://repo/container-1@sha256:guid-2
Port: <none>
State: Terminated
Exit Code: 0
Started: Mon, 01 Jan 0001 00:00:00 +0000
Finished: Mon, 01 Jan 0001 00:00:00 +0000
Ready: False
Restart Count: 0
Limits:
cpu: 100m
memory: 1G
Requests:
cpu: 100m
memory: 1G
Environment:
XXXX
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from default-token-xxxxx (ro)
container-2:
Container ID: docker://container-id-guid-2
Image: alpine:3.4
Image ID: docker-pullable://alpine@sha256:alpine-container-id-1
Port: <none>
Command:
X
State: Terminated
Exit Code: 0
Started: Mon, 01 Jan 0001 00:00:00 +0000
Finished: Mon, 01 Jan 0001 00:00:00 +0000
Ready: False
Restart Count: 0
Limits:
cpu: 20m
memory: 40M
Requests:
cpu: 10m
memory: 20M
Environment: <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from default-token-xxxxx (ro)
container-3:
Container ID: docker://container-id-guid-3
Image: XXXXX
Image ID: docker-pullable://repo/container-3@sha256:guid-3
Port: <none>
State: Terminated
Exit Code: 0
Started: Mon, 01 Jan 0001 00:00:00 +0000
Finished: Mon, 01 Jan 0001 00:00:00 +0000
Ready: False
Restart Count: 0
Limits:
cpu: 100m
memory: 200M
Requests:
cpu: 100m
memory: 100M
Readiness: exec [nc -zv localhost 80] delay=1s timeout=1s period=5s #success=1 #failure=3
Environment:
XXXX
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from default-token-xxxxx (ro)
container-4:
Container ID: docker://container-id-guid-4
Image: XXXX
Image ID: docker-pullable://repo/container-4@sha256:guid-4
Port: 9102/TCP
State: Terminated
Exit Code: 0
Started: Mon, 01 Jan 0001 00:00:00 +0000
Finished: Mon, 01 Jan 0001 00:00:00 +0000
Ready: False
Restart Count: 0
Limits:
cpu: 600m
memory: 1500M
Requests:
cpu: 600m
memory: 1500M
Readiness: http-get http://:8080/healthy delay=1s timeout=1s period=10s #success=1 #failure=3
Environment:
XXXX
Mounts:
/app/config/external from volume-2 (ro)
/data/volume-1 from volume-1 (ro)
/var/run/secrets/kubernetes.io/serviceaccount from default-token-xxxxx (ro)
Conditions:
Type Status
Initialized True
Ready False
PodScheduled True
Volumes:
volume-1:
Type: Secret (a volume populated by a Secret)
SecretName: volume-1
Optional: false
volume-2:
Type: ConfigMap (a volume populated by a ConfigMap)
Name: external
Optional: false
default-token-xxxxx:
Type: Secret (a volume populated by a Secret)
SecretName: default-token-xxxxx
Optional: false
QoS Class: Burstable
Node-Selectors: <none>
sudo journalctl -u kubelet | grep "my-pod"
[...]
Sep 01 17:17:56 ip-172-16-30-204 kubelet[9619]: time="2017-09-01T17:17:56Z" level=info msg="Releasing address using workloadID" Workload=my-pod-3854038851-r1hc3
Sep 01 17:17:56 ip-172-16-30-204 kubelet[9619]: time="2017-09-01T17:17:56Z" level=info msg="Releasing all IPs with handle 'my-pod-3854038851-r1hc3'"
Sep 01 17:17:56 ip-172-16-30-204 kubelet[9619]: time="2017-09-01T17:17:56Z" level=warning msg="Asked to release address but it doesn't exist. Ignoring" Workload=my-pod-3854038851-r1hc3 workloadId=my-pod-3854038851-r1hc3
Sep 01 17:17:56 ip-172-16-30-204 kubelet[9619]: time="2017-09-01T17:17:56Z" level=info msg="Teardown processing complete." Workload=my-pod-3854038851-r1hc3 endpoint=<nil>
Sep 01 17:19:06 ip-172-16-30-204 kubelet[9619]: I0901 17:19:06.591946 9619 kubelet.go:1824] SyncLoop (DELETE, "api"):my-pod-3854038851(b8cf2ecd-8f25-11e7-ba86-0a27a44c875)"
sudo journalctl -u docker | grep "docker-id-for-my-pod"
Sep 01 17:17:55 ip-172-16-30-204 dockerd[9385]: time="2017-09-01T17:17:55.695834447Z" level=error msg="Handler for POST /v1.24/containers/docker-id-for-my-pod/stop returned error: Container docker-id-for-my-pod is already stopped"
Sep 01 17:17:56 ip-172-16-30-204 dockerd[9385]: time="2017-09-01T17:17:56.698913805Z" level=error msg="Handler for POST /v1.24/containers/docker-id-for-my-pod/stop returned error: Container docker-id-for-my-pod is already stopped"
Environment:
Kubernetes version (kubectl version):
Cloud provider or hardware configuration:
AWS
OS (e.g. from /etc/os-release):
NAME="CentOS Linux"
VERSION="7 (Core)"
ID="centos"
ID_LIKE="rhel fedora"
VERSION_ID="7"
PRETTY_NAME="CentOS Linux 7 (Core)"
ANSI_COLOR="0;31"
CPE_NAME="cpe:/o:centos:centos:7"
HOME_URL="https://www.centos.org/"
BUG_REPORT_URL="https://bugs.centos.org/"
CENTOS_MANTISBT_PROJECT="CentOS-7"
CENTOS_MANTISBT_PROJECT_VERSION="7"
REDHAT_SUPPORT_PRODUCT="centos"
REDHAT_SUPPORT_PRODUCT_VERSION="7"
Kernel (e.g. uname -a):
Linux ip-172-16-30-204 3.10.0-327.10.1.el7.x86_64 #1 SMP Tue Feb 16 17:03:50 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux
Install tools:
kops
Others:
Docker version 1.12.6, build 78d1802
@kubernetes/sig-aws @kubernetes/sig-scheduling
Usually it is the volume and network cleanup that consume most of the time during termination. Can you find out which phase your pod is stuck in? The volume cleanup, for example?
Usually it is the volume and network cleanup that consume most of the time during termination.
Right. They are always the suspects.
@igorleao You can also try kubectl delete pod xxx --now
Hi @resouer and @dixudx.
I'm not sure. Looking at the kubelet logs of a different pod with the same problem, I found the following:
Sep 02 15:31:57 ip-172-16-30-208 kubelet[9620]: time="2017-09-02T15:31:57Z" level=info msg="Releasing address using workloadID" Workload=my-pod-969733955-rbxhn
Sep 02 15:31:57 ip-172-16-30-208 kubelet[9620]: time="2017-09-02T15:31:57Z" level=info msg="Releasing all IPs with handle 'my-pod-969733955-rbxhn'"
Sep 02 15:31:57 ip-172-16-30-208 kubelet[9620]: time="2017-09-02T15:31:57Z" level=warning msg="Asked to release address but it doesn't exist. Ignoring" Workload=my-pod-969733955-rbxhn workloadId=my-pod-969733955-rbxhn
Sep 02 15:31:57 ip-172-16-30-208 kubelet[9620]: time="2017-09-02T15:31:57Z" level=info msg="Teardown processing complete." Workload=my-pod-969733955-rbxhn endpoint=<nil>
Sep 02 15:31:57 ip-172-16-30-208 kubelet[9620]: I0902 15:31:57.496132 9620 qos_container_manager_linux.go:285] [ContainerManager]: Updated QoS cgroup configuration
Sep 02 15:31:57 ip-172-16-30-208 kubelet[9620]: I0902 15:31:57.968147 9620 reconciler.go:201] UnmountVolume operation started for volume "kubernetes.io/secret/GUID-default-token-wrlv3" (spec.Name: "default-token-wrlv3") from pod "GUID" (UID: "GUID").
Sep 02 15:31:57 ip-172-16-30-208 kubelet[9620]: I0902 15:31:57.968245 9620 reconciler.go:201] UnmountVolume operation started for volume "kubernetes.io/secret/GUID-token-key" (spec.Name: "token-key") from pod "GUID" (UID: "GUID").
Sep 02 15:31:57 ip-172-16-30-208 kubelet[9620]: E0902 15:31:57.968537 9620 nestedpendingoperations.go:262] Operation for "\"kubernetes.io/secret/GUID-token-key\" (\"GUID\")" failed. No retries permitted until 2017-09-02 15:31:59.968508761 +0000 UTC (durationBeforeRetry 2s). Error: UnmountVolume.TearDown failed for volume "kubernetes.io/secret/GUID-token-key" (volume.spec.Name: "token-key") pod "GUID" (UID: "GUID") with: rename /var/lib/kubelet/pods/GUID/volumes/kubernetes.io~secret/token-key /var/lib/kubelet/pods/GUID/volumes/kubernetes.io~secret/wrapped_token-key.deleting~818780979: device or resource busy
Sep 02 15:31:57 ip-172-16-30-208 kubelet[9620]: E0902 15:31:57.968744 9620 nestedpendingoperations.go:262] Operation for "\"kubernetes.io/secret/GUID-default-token-wrlv3\" (\"GUID\")" failed. No retries permitted until 2017-09-02 15:31:59.968719924 +0000 UTC (durationBeforeRetry 2s). Error: UnmountVolume.TearDown failed for volume "kubernetes.io/secret/GUID-default-token-wrlv3" (volume.spec.Name: "default-token-wrlv3") pod "GUID" (UID: "GUID") with: rename /var/lib/kubelet/pods/GUID/volumes/kubernetes.io~secret/default-token-wrlv3 /var/lib/kubelet/pods/GUID/volumes/kubernetes.io~secret/wrapped_default-token-wrlv3.deleting~940140790: device or resource busy
--
Sep 02 15:33:04 ip-172-16-30-208 kubelet[9620]: I0902 15:33:04.778742 9620 reconciler.go:363] Detached volume "kubernetes.io/secret/GUID-wrapped_default-token-wrlv3.deleting~940140790" (spec.Name: "wrapped_default-token-wrlv3.deleting~940140790") devicePath: ""
Sep 02 15:33:04 ip-172-16-30-208 kubelet[9620]: I0902 15:33:04.778753 9620 reconciler.go:363] Detached volume "kubernetes.io/secret/GUID-wrapped_token-key.deleting~850807831" (spec.Name: "wrapped_token-key.deleting~850807831") devicePath: ""
Sep 02 15:33:04 ip-172-16-30-208 kubelet[9620]: I0902 15:33:04.778764 9620 reconciler.go:363] Detached volume "kubernetes.io/secret/GUID-wrapped_token-key.deleting~413655961" (spec.Name: "wrapped_token-key.deleting~413655961") devicePath: ""
Sep 02 15:33:04 ip-172-16-30-208 kubelet[9620]: I0902 15:33:04.778774 9620 reconciler.go:363] Detached volume "kubernetes.io/secret/GUID-wrapped_token-key.deleting~818780979" (spec.Name: "wrapped_token-key.deleting~818780979") devicePath: ""
Sep 02 15:33:04 ip-172-16-30-208 kubelet[9620]: I0902 15:33:04.778784 9620 reconciler.go:363] Detached volume "kubernetes.io/secret/GUID-wrapped_token-key.deleting~348212189" (spec.Name: "wrapped_token-key.deleting~348212189") devicePath: ""
Sep 02 15:33:04 ip-172-16-30-208 kubelet[9620]: I0902 15:33:04.778796 9620 reconciler.go:363] Detached volume "kubernetes.io/secret/GUID-wrapped_token-key.deleting~848395852" (spec.Name: "wrapped_token-key.deleting~848395852") devicePath: ""
Sep 02 15:33:04 ip-172-16-30-208 kubelet[9620]: I0902 15:33:04.778808 9620 reconciler.go:363] Detached volume "kubernetes.io/secret/GUID-wrapped_default-token-wrlv3.deleting~610264100" (spec.Name: "wrapped_default-token-wrlv3.deleting~610264100") devicePath: ""
Sep 02 15:33:04 ip-172-16-30-208 kubelet[9620]: I0902 15:33:04.778820 9620 reconciler.go:363] Detached volume "kubernetes.io/secret/GUID-wrapped_token-key.deleting~960022821" (spec.Name: "wrapped_token-key.deleting~960022821") devicePath: ""
Sep 02 15:33:05 ip-172-16-30-208 kubelet[9620]: I0902 15:33:05.081380 9620 server.go:778] GET /stats/summary/: (37.027756ms) 200 [[Go-http-client/1.1] 10.0.46.202:54644]
Sep 02 15:33:05 ip-172-16-30-208 kubelet[9620]: I0902 15:33:05.185367 9620 operation_generator.go:597] MountVolume.SetUp succeeded for volume "kubernetes.io/secret/GUID-calico-token-w8tzx" (spec.Name: "calico-token-w8tzx") pod "GUID" (UID: "GUID").
Sep 02 15:33:07 ip-172-16-30-208 kubelet[9620]: I0902 15:33:07.187953 9620 kubelet.go:1824] SyncLoop (DELETE, "api"): "my-pod-969733955-rbxhn_container-4-production(GUID)"
Sep 02 15:33:13 ip-172-16-30-208 kubelet[9620]: I0902 15:33:13.879940 9620 aws.go:937] Could not determine public DNS from AWS metadata.
Sep 02 15:33:20 ip-172-16-30-208 kubelet[9620]: I0902 15:33:20.736601 9620 server.go:778] GET /metrics: (53.063679ms) 200 [[Prometheus/1.7.1] 10.0.46.198:43576]
Sep 02 15:33:23 ip-172-16-30-208 kubelet[9620]: I0902 15:33:23.898078 9620 aws.go:937] Could not determine public DNS from AWS metadata.
As you can see, this cluster has Calico for CNI.
The following lines caught my attention:
Sep 02 15:31:57 ip-172-16-30-208 kubelet[9620]: I0902 15:31:57.968245 9620 reconciler.go:201] UnmountVolume operation started for volume "kubernetes.io/secret/GUID-token-key" (spec.Name: "token-key") from pod "GUID" (UID: "GUID").
Sep 02 15:31:57 ip-172-16-30-208 kubelet[9620]: E0902 15:31:57.968537 9620 nestedpendingoperations.go:262] Operation for "\"kubernetes.io/secret/GUID-token-key\" (\"GUID\")" failed. No retries permitted until 2017-09-02 15:31:59.968508761 +0000 UTC (durationBeforeRetry 2s). Error: UnmountVolume.TearDown failed for volume "kubernetes.io/secret/GUID-token-key" (volume.spec.Name: "token-key") pod "GUID" (UID: "GUID") with: rename /var/lib/kubelet/pods/GUID/volumes/kubernetes.io~secret/token-key /var/lib/kubelet/pods/GUID/volumes/kubernetes.io~secret/wrapped_token-key.deleting~818780979: device or resource busy
Sep 02 15:31:57 ip-172-16-30-208 kubelet[9620]: E0902 15:31:57.968744 9620 nestedpendingoperations.go:262] Operation for "\"kubernetes.io/secret/GUID-default-token-wrlv3\" (\"GUID\")" failed. No retries permitted until 2017-09-02 15:31:59.968719924 +0000 UTC (durationBeforeRetry 2s). Error: UnmountVolume.TearDown failed for volume "kubernetes.io/secret/GUID-default-token-wrlv3" (volume.spec.Name: "default-token-wrlv3") pod "GUID" (UID: "GUID") with: rename
Is there a better way to find out which phase a pod is stuck in?
kubectl delete pod xxx --now seems to work quite well, but I would really like to find out its root cause and avoid human intervention.
rename /var/lib/kubelet/pods/GUID/volumes/kubernetes.io~secret/token-key /var/lib/kubelet/pods/GUID/volumes/kubernetes.io~secret/wrapped_token-key.deleting~818780979: device or resource busy
It seems kubelet/mount failed to mount the configmap as a volume because of this kind of file renaming.
@igorleao Is this reproducible? Or is it unstable, just happening occasionally? I've run into such errors before, just in case.
@dixudx It happens several times a day for certain clusters. Other clusters, created with the same version of kops and kubernetes in the same week, work just fine.
@igorleao As the log shows, the volume manager failed to remove the secret directory due to device busy.
Could you please check whether the directory /var/lib/kubelet/pods/GUID/volumes/kubernetes.io~secret/token-key is still mounted or not? Thanks!
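The mount check requested above can be scripted; a minimal sketch, assuming findmnt from util-linux is available on the node and using a placeholder pod UID:

```shell
#!/bin/sh
# Check whether a stuck pod's secret volume directory is still an active
# mountpoint. POD_UID is a placeholder; substitute the UID of the stuck pod.
POD_UID="${POD_UID:-GUID}"
DIR="/var/lib/kubelet/pods/${POD_UID}/volumes/kubernetes.io~secret/token-key"

command -v findmnt >/dev/null 2>&1 || { echo "findmnt not found"; exit 0; }

if findmnt "$DIR" >/dev/null 2>&1; then
    # A live mount here would explain the "device or resource busy" rename failure.
    echo "still mounted: $DIR"
else
    echo "not mounted: $DIR"
fi
```

If the directory does turn out to be mounted, unmounting it by hand may let the kubelet's retry succeed, though that only treats the symptom.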
@igorleao How do you run kubelet?
We see similar behavior. We run kubelet as a container and partially mitigated the problem by mounting /var/lib/kubelet as shared (by default docker mounts volumes as rslave). But we still see similar issues, just less frequently. Currently I suspect that a few other mounts need to be done differently (e.g. /var/lib/docker or /rootfs).
@stormltf Can you please post your kubelet container configuration?
@stormltf You are running kubelet in a container without using the --containerized flag (which does some tricks with mounts). For your stuck pods, please run the following on the node where they are running:
docker exec -ti /kubelet /bin/bash -c "mount | grep STUCK_POD_UUID"
mount | grep STUCK_POD_UUID
Please do the same for a freshly created pod. I expect to see some /var/lib/kubelet mounts (default-secret, for instance).
@stormltf Did you restart kubelet after the first two pods were created?
@stormltf You can try making /var/lib/docker and /rootfs shared mountpoints (which is not seen in your docker inspect, but is seen inside the container).
/sig storage
It might help some people. We are running kubelet in a Docker container with the --containerized flag, and were able to solve this issue by mounting /rootfs, /var/lib/docker and /var/lib/kubelet as shared mounts. The final mounts look like this:
-v /:/rootfs:ro,shared \
-v /sys:/sys:ro \
-v /dev:/dev:rw \
-v /var/log:/var/log:rw \
-v /run/calico/:/run/calico/:rw \
-v /run/docker/:/run/docker/:rw \
-v /run/docker.sock:/run/docker.sock:rw \
-v /usr/lib/os-release:/etc/os-release \
-v /usr/share/ca-certificates/:/etc/ssl/certs \
-v /var/lib/docker/:/var/lib/docker:rw,shared \
-v /var/lib/kubelet/:/var/lib/kubelet:rw,shared \
-v /etc/kubernetes/ssl/:/etc/kubernetes/ssl/ \
-v /etc/kubernetes/config/:/etc/kubernetes/config/ \
-v /etc/cni/net.d/:/etc/cni/net.d/ \
-v /opt/cni/bin/:/opt/cni/bin/ \
To be more detailed: this does not solve the problem properly, because for every bind mount we end up with three mounts inside the kubelet container (two parasitic). But at least with shared mounts they can easily be unmounted in one shot.
CoreOS doesn't have this issue, because it uses rkt and not docker for the kubelet container. In our case kubelet runs in Docker, and when every mount inside the kubelet container is propagated into /var/lib/docker/overlay/... and /rootfs, we get two parasitic mounts for every bind-mount volume:
one from /rootfs: /rootfs/var/lib/kubelet/<mount>
one from /var/lib/docker: /var/lib/docker/overlay/.../rootfs/var/lib/kubelet/<mount>
-v /dev:/dev:rw
-v /etc/cni:/etc/cni:ro
-v /opt/cni:/opt/cni:ro
-v /etc/ssl:/etc/ssl:ro
-v /etc/resolv.conf:/etc/resolv.conf
-v /etc/pki/tls:/etc/pki/tls:ro
-v /etc/pki/ca-trust:/etc/pki/ca-trust:ro
-v /sys:/sys:ro
-v /var/lib/docker:/var/lib/docker:rw
-v /var/log:/var/log:rw
-v /var/lib/kubelet:/var/lib/kubelet:shared
-v /var/lib/cni:/var/lib/cni:shared
-v /var/run:/var/run:rw
-v /www:/www:rw
-v /etc/kubernetes:/etc/kubernetes:ro
-v /etc/os-release:/etc/os-release:ro
-v /usr/share/zoneinfo/Asia/Shanghai:/etc/localtime:ro
Same issue with Kubernetes 1.8.1 on Azure: after a deployment is changed and the new pods have started, the old pods stay Terminating.
Same issue with Kubernetes 1.8.2 on IBM Cloud. After the new pods start, the old pods keep Terminating.
kubectl version
Server Version: version.Info{Major:"1", Minor:"8+", GitVersion:"v1.8.2-1+d150e4525193f1", GitCommit:"d150e4525193f1c79569c04efc14599d7deb5f3e", GitTreeState:"clean", BuildDate:"2017-10-27T08:15:17Z", GoVersion:"go1.8.3", Compiler:"gc", Platform:"linux/amd64"}
I tried kubectl delete pod xxx --now as well as kubectl delete pod foo --grace-period=0 --force, to no avail.
If the root cause is still the same (improperly propagated mounts), then this is a distribution-specific bug imo.
Please describe how you run kubelet in IBM Cloud. As a systemd unit? Does it have the --containerized flag?
It is run with the --containerized flag set to false.
systemctl status kubelet.service
kubelet.service - Kubernetes Kubelet
Loaded: loaded (/lib/systemd/system/kubelet.service; enabled; vendor preset: enabled)
Active: active (running) since Sun 2017-11-19 21:48:48 UTC; 4 days ago
--containerized flag: no
OK, I need more details. Please see my comment above: https://github.com/kubernetes/kubernetes/issues/51835#issuecomment-333090349
Also, please show the contents of /lib/systemd/system/kubelet.service, and if there is anything about kubelet in /etc/systemd/system, please share that too.
In particular, if kubelet is run via docker, I want to see all the bind mounts (-v).
Today I hit what may be the same issue as described here. On one of our customers' systems, pods had been stuck in the Terminating state for a couple of days. We also saw repeated errors of "UnmountVolume.TearDown failed for volume" with "device or resource busy" for the stuck pod.
In our case it seems to be an issue with docker on RHEL/CentOS 7.4-based systems, covered by this moby pull request: https://github.com/moby/moby/pull/34886/files
For us, once we set the sysctl option fs.may_detach_mounts=1, all the Terminating pods were cleaned up within a few minutes.
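For reference, that workaround can be applied as follows. This is only a sketch: it needs root, the fs.may_detach_mounts knob exists only on kernels carrying the patch referenced in the moby PR above, and the file name under /etc/sysctl.d is my own choice:

```shell
#!/bin/sh
# Skip gracefully where the workaround does not apply (other kernels, no root).
[ -e /proc/sys/fs/may_detach_mounts ] || { echo "fs.may_detach_mounts not present on this kernel"; exit 0; }
[ "$(id -u)" -eq 0 ] || { echo "run as root"; exit 0; }

sysctl -w fs.may_detach_mounts=1                                   # apply immediately
echo 'fs.may_detach_mounts = 1' > /etc/sysctl.d/99-may-detach-mounts.conf  # persist across reboots
```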
I'm facing this issue: pods got stuck in the Terminating state on 1.8.3.
Relevant kubelet logs from the node:
Nov 28 22:48:51 <my-node> kubelet[1010]: I1128 22:48:51.616749 1010 reconciler.go:186] operationExecutor.UnmountVolume started for volume "nfs-mtkylje2oc4xlju1ls9rdwjlcmxhyi1ydw" (UniqueName: "kubernetes.io/nfs/58dc413c-d4d1-11e7-870d-3c970e298d91-nfs-mtkylje2oc4xlju1ls9rdwjlcmxhyi1ydw") pod "58dc413c-d4d1-11e7-870d-3c970e298d91" (UID: "58dc413c-d4d1-11e7-870d-3c970e298d91")
Nov 28 22:48:51 <my-node> kubelet[1010]: W1128 22:48:51.616762 1010 util.go:112] Warning: "/var/lib/kubelet/pods/58dc413c-d4d1-11e7-870d-3c970e298d91/volumes/kubernetes.io~nfs/nfs-mtkylje2oc4xlju1ls9rdwjlcmxhyi1ydw" is not a mountpoint, deleting
Nov 28 22:48:51 <my-node> kubelet[1010]: E1128 22:48:51.616828 1010 nestedpendingoperations.go:264] Operation for "\"kubernetes.io/nfs/58dc413c-d4d1-11e7-870d-3c970e298d91-nfs-mtkylje2oc4xlju1ls9rdwjlcmxhyi1ydw\" (\"58dc413c-d4d1-11e7-870d-3c970e298d91\")" failed. No retries permitted until 2017-11-28 22:48:52.616806562 -0800 PST (durationBeforeRetry 1s). Error: UnmountVolume.TearDown failed for volume "nfs-mtkylje2oc4xlju1ls9rdwjlcmxhyi1ydw" (UniqueName: "kubernetes.io/nfs/58dc413c-d4d1-11e7-870d-3c970e298d91-nfs-mtkylje2oc4xlju1ls9rdwjlcmxhyi1ydw") pod "58dc413c-d4d1-11e7-870d-3c970e298d91" (UID: "58dc413c-d4d1-11e7-870d-3c970e298d91") : remove /var/lib/kubelet/pods/58dc413c-d4d1-11e7-870d-3c970e298d91/volumes/kubernetes.io~nfs/nfs-mtkylje2oc4xlju1ls9rdwjlcmxhyi1ydw: directory not empty
Nov 28 22:48:51 <my-node> kubelet[1010]: W1128 22:48:51.673774 1010 docker_sandbox.go:343] failed to read pod IP from plugin/docker: NetworkPlugin cni failed on the status hook for pod "<pod>": CNI failed to retrieve network namespace path: Cannot find network namespace for the terminated container "f58ab11527aef5133bdb320349fe14fd94211aa0d35a1da006aa003a78ce0653"
Kubelet is running as a systemd unit (not in a container) on Ubuntu 16.04.
As you can see, there was a mount to an NFS server, and somehow kubelet tried to remove the mount directory because it considered the directory not mounted.
Volume spec of the pod:
volumes:
- name: nfs-mtkylje2oc4xlju1ls9rdwjlcmxhyi1ydw
nfs:
path: /<path>
server: <IP>
- name: default-token-rzqtt
secret:
defaultMode: 420
secretName: default-token-rzqtt
UPD: I faced this issue before, on 1.6.6 as well.
Experiencing the same thing on Azure.
NAME READY STATUS RESTARTS AGE IP NODE
busybox2-7db6d5d795-fl6h9 0/1 Terminating 25 1d 10.200.1.136 worker-1
busybox3-69d4f5b66c-2lcs6 0/1 Terminating 26 1d <none> worker-2
busybox7-797cc644bc-n5sv2 0/1 Terminating 26 1d <none> worker-2
busybox8-c8f95d979-8lk27 0/1 Terminating 25 1d 10.200.1.137 worker-1
nginx-56ccc998dd-hvpng 0/1 Terminating 0 2h <none> worker-1
nginx-56ccc998dd-nnsvj 0/1 Terminating 0 2h <none> worker-2
nginx-56ccc998dd-rsrvq 0/1 Terminating 0 2h <none> worker-1
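An aside for anyone triaging output like the above: the stuck pods can be filtered out mechanically. A sketch, assuming the default kubectl get pods --all-namespaces column layout, where STATUS is the fourth column:

```shell
#!/bin/sh
# List NAMESPACE/NAME of every pod whose STATUS column reads "Terminating".
command -v kubectl >/dev/null 2>&1 || { echo "kubectl not found"; exit 0; }

kubectl get pods --all-namespaces --no-headers \
  | awk '$4 == "Terminating" {print $1 "/" $2}'
```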
kubectl version
Client Version: version.Info{Major:"1", Minor:"8", GitVersion:"v1.8.0", GitCommit:"6e937839ac04a38cac63e6a7a306c5d035fe7b0a", GitTreeState:"clean", BuildDate:"2017-09-28T22:57:57Z", GoVersion:"go1.8.3", Compiler:"gc", Platform:"darwin/amd64"}
Server Version: version.Info{Major:"1", Minor:"8", GitVersion:"v1.8.0", GitCommit:"6e937839ac04a38cac63e6a7a306c5d035fe7b0a", GitTreeState:"clean", BuildDate:"2017-09-28T22:46:41Z", GoVersion:"go1.8.3", Compiler:"gc", Platform:"linux/amd64"}
kubectl describe pod nginx-56ccc998dd-nnsvj
Name: nginx-56ccc998dd-nnsvj
Namespace: default
Node: worker-2/10.240.0.22
Start Time: Wed, 29 Nov 2017 13:33:39 +0400
Labels: pod-template-hash=1277755488
run=nginx
Annotations: kubernetes.io/created-by={"kind":"SerializedReference","apiVersion":"v1","reference":{"kind":"ReplicaSet","namespace":"default","name":"nginx-56ccc998dd","uid":"614f71db-d4e8-11e7-9c45-000d3a25e3c0","...
Status: Terminating (expires Wed, 29 Nov 2017 15:13:44 +0400)
Termination Grace Period: 30s
IP:
Created By: ReplicaSet/nginx-56ccc998dd
Controlled By: ReplicaSet/nginx-56ccc998dd
Containers:
nginx:
Container ID: containerd://d00709dfb00ed5ac99dcd092978e44fc018f44cca5229307c37d11c1a4fe3f07
Image: nginx:1.12
Image ID: docker.io/library/nginx@sha256:5269659b61c4f19a3528a9c22f9fa8f4003e186d6cb528d21e411578d1e16bdb
Port: <none>
State: Terminated
Exit Code: 0
Started: Mon, 01 Jan 0001 00:00:00 +0000
Finished: Mon, 01 Jan 0001 00:00:00 +0000
Ready: False
Restart Count: 0
Environment: <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from default-token-jm7h5 (ro)
Conditions:
Type Status
Initialized True
Ready False
PodScheduled True
Volumes:
default-token-jm7h5:
Type: Secret (a volume populated by a Secret)
SecretName: default-token-jm7h5
Optional: false
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: <none>
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Killing 41m kubelet, worker-2 Killing container with id containerd://nginx:Need to kill Pod
sudo journalctl -u kubelet | grep "nginx-56ccc998dd-nnsvj"
Nov 29 09:33:39 worker-2 kubelet[64794]: I1129 09:33:39.124779 64794 kubelet.go:1837] SyncLoop (ADD, "api"): "nginx-56ccc998dd-nnsvj_default(6171e2a7-d4e8-11e7-9c45-000d3a25e3c0)"
Nov 29 09:33:39 worker-2 kubelet[64794]: I1129 09:33:39.160444 64794 reconciler.go:212] operationExecutor.VerifyControllerAttachedVolume started for volume "default-token-jm7h5" (UniqueName: "kubernetes.io/secret/6171e2a7-d4e8-11e7-9c45-000d3a25e3c0-default-token-jm7h5") pod "nginx-56ccc998dd-nnsvj" (UID: "6171e2a7-d4e8-11e7-9c45-000d3a25e3c0")
Nov 29 09:33:39 worker-2 kubelet[64794]: I1129 09:33:39.261128 64794 reconciler.go:257] operationExecutor.MountVolume started for volume "default-token-jm7h5" (UniqueName: "kubernetes.io/secret/6171e2a7-d4e8-11e7-9c45-000d3a25e3c0-default-token-jm7h5") pod "nginx-56ccc998dd-nnsvj" (UID: "6171e2a7-d4e8-11e7-9c45-000d3a25e3c0")
Nov 29 09:33:39 worker-2 kubelet[64794]: I1129 09:33:39.286574 64794 operation_generator.go:484] MountVolume.SetUp succeeded for volume "default-token-jm7h5" (UniqueName: "kubernetes.io/secret/6171e2a7-d4e8-11e7-9c45-000d3a25e3c0-default-token-jm7h5") pod "nginx-56ccc998dd-nnsvj" (UID: "6171e2a7-d4e8-11e7-9c45-000d3a25e3c0")
Nov 29 09:33:39 worker-2 kubelet[64794]: I1129 09:33:39.431485 64794 kuberuntime_manager.go:370] No sandbox for pod "nginx-56ccc998dd-nnsvj_default(6171e2a7-d4e8-11e7-9c45-000d3a25e3c0)" can be found. Need to start a new one
Nov 29 09:33:42 worker-2 kubelet[64794]: I1129 09:33:42.449592 64794 kubelet.go:1871] SyncLoop (PLEG): "nginx-56ccc998dd-nnsvj_default(6171e2a7-d4e8-11e7-9c45-000d3a25e3c0)", event: &pleg.PodLifecycleEvent{ID:"6171e2a7-d4e8-11e7-9c45-000d3a25e3c0", Type:"ContainerStarted", Data:"0f539a84b96814651bb199e91f71157bc90c6e0c26340001c3f1c9f7bd9165af"}
Nov 29 09:33:47 worker-2 kubelet[64794]: I1129 09:33:47.637988 64794 kubelet.go:1871] SyncLoop (PLEG): "nginx-56ccc998dd-nnsvj_default(6171e2a7-d4e8-11e7-9c45-000d3a25e3c0)", event: &pleg.PodLifecycleEvent{ID:"6171e2a7-d4e8-11e7-9c45-000d3a25e3c0", Type:"ContainerStarted", Data:"d00709dfb00ed5ac99dcd092978e44fc018f44cca5229307c37d11c1a4fe3f07"}
Nov 29 11:13:14 worker-2 kubelet[64794]: I1129 11:13:14.468137 64794 kubelet.go:1853] SyncLoop (DELETE, "api"): "nginx-56ccc998dd-nnsvj_default(6171e2a7-d4e8-11e7-9c45-000d3a25e3c0)"
Nov 29 11:13:14 worker-2 kubelet[64794]: E1129 11:13:14.711891 64794 kuberuntime_manager.go:840] PodSandboxStatus of sandbox "0f539a84b96814651bb199e91f71157bc90c6e0c26340001c3f1c9f7bd9165af" for pod "nginx-56ccc998dd-nnsvj_default(6171e2a7-d4e8-11e7-9c45-000d3a25e3c0)" error: rpc error: code = Unknown desc = failed to get task status for sandbox container "0f539a84b96814651bb199e91f71157bc90c6e0c26340001c3f1c9f7bd9165af": process id 0f539a84b96814651bb199e91f71157bc90c6e0c26340001c3f1c9f7bd9165af not found: not found
Nov 29 11:13:14 worker-2 kubelet[64794]: E1129 11:13:14.711933 64794 generic.go:241] PLEG: Ignoring events for pod nginx-56ccc998dd-nnsvj/default: rpc error: code = Unknown desc = failed to get task status for sandbox container "0f539a84b96814651bb199e91f71157bc90c6e0c26340001c3f1c9f7bd9165af": process id 0f539a84b96814651bb199e91f71157bc90c6e0c26340001c3f1c9f7bd9165af not found: not found
Nov 29 11:13:15 worker-2 kubelet[64794]: I1129 11:13:15.788179 64794 kubelet.go:1871] SyncLoop (PLEG): "nginx-56ccc998dd-nnsvj_default(6171e2a7-d4e8-11e7-9c45-000d3a25e3c0)", event: &pleg.PodLifecycleEvent{ID:"6171e2a7-d4e8-11e7-9c45-000d3a25e3c0", Type:"ContainerDied", Data:"d00709dfb00ed5ac99dcd092978e44fc018f44cca5229307c37d11c1a4fe3f07"}
Nov 29 11:13:15 worker-2 kubelet[64794]: I1129 11:13:15.788221 64794 kubelet.go:1871] SyncLoop (PLEG): "nginx-56ccc998dd-nnsvj_default(6171e2a7-d4e8-11e7-9c45-000d3a25e3c0)", event: &pleg.PodLifecycleEvent{ID:"6171e2a7-d4e8-11e7-9c45-000d3a25e3c0", Type:"ContainerDied", Data:"0f539a84b96814651bb199e91f71157bc90c6e0c26340001c3f1c9f7bd9165af"}
Nov 29 11:46:45 worker-2 kubelet[42337]: I1129 11:46:45.384411 42337 kubelet.go:1837] SyncLoop (ADD, "api"): "nginx-56ccc998dd-nnsvj_default(6171e2a7-d4e8-11e7-9c45-000d3a25e3c0), kubernetes-dashboard-7486b894c6-2xmd5_kube-system(e55ca22c-d416-11e7-9c45-000d3a25e3c0), busybox3-69d4f5b66c-2lcs6_default(adb05024-d412-11e7-9c45-000d3a25e3c0), kube-dns-7797cb8758-zblzt_kube-system(e925cbec-d40b-11e7-9c45-000d3a25e3c0), busybox7-797cc644bc-n5sv2_default(b7135a8f-d412-11e7-9c45-000d3a25e3c0)"
Nov 29 11:46:45 worker-2 kubelet[42337]: I1129 11:46:45.387169 42337 kubelet.go:1871] SyncLoop (PLEG): "nginx-56ccc998dd-nnsvj_default(6171e2a7-d4e8-11e7-9c45-000d3a25e3c0)", event: &pleg.PodLifecycleEvent{ID:"6171e2a7-d4e8-11e7-9c45-000d3a25e3c0", Type:"ContainerDied", Data:"d00709dfb00ed5ac99dcd092978e44fc018f44cca5229307c37d11c1a4fe3f07"}
Nov 29 11:46:45 worker-2 kubelet[42337]: I1129 11:46:45.387245 42337 kubelet.go:1871] SyncLoop (PLEG): "nginx-56ccc998dd-nnsvj_default(6171e2a7-d4e8-11e7-9c45-000d3a25e3c0)", event: &pleg.PodLifecycleEvent{ID:"6171e2a7-d4e8-11e7-9c45-000d3a25e3c0", Type:"ContainerDied", Data:"0f539a84b96814651bb199e91f71157bc90c6e0c26340001c3f1c9f7bd9165af"}
cat /etc/systemd/system/kubelet.service
[Unit]
Description=Kubernetes Kubelet
Documentation=https://github.com/GoogleCloudPlatform/kubernetes
After=cri-containerd.service
Requires=cri-containerd.service
[Service]
ExecStart=/usr/local/bin/kubelet \
--allow-privileged=true \
--anonymous-auth=false \
--authorization-mode=Webhook \
--client-ca-file=/var/lib/kubernetes/ca.pem \
--cluster-dns=10.32.0.10 \
--cluster-domain=cluster.local \
--container-runtime=remote \
--container-runtime-endpoint=unix:///var/run/cri-containerd.sock \
--image-pull-progress-deadline=2m \
--kubeconfig=/var/lib/kubelet/kubeconfig \
--network-plugin=cni \
--pod-cidr=10.200.2.0/24 \
--register-node=true \
--require-kubeconfig \
--runtime-request-timeout=15m \
--tls-cert-file=/var/lib/kubelet/worker-2.pem \
--tls-private-key-file=/var/lib/kubelet/worker-2-key.pem \
--v=2
Restart=on-failure
RestartSec=5
[Install]
WantedBy=multi-user.target
It seems there are various logs related to the issue. My 1.8.3 cluster has both:
Error: UnmountVolume.TearDown failed for volume "nfs-test" (UniqueName: "kubernetes.io/nfs/39dada78-d9cc-11e7-870d-3c970e298d91-nfs-test") pod "39dada78-d9cc-11e7-870d-3c970e298d91" (UID: "39dada78-d9cc-11e7-870d-3c970e298d91") : remove /var/lib/kubelet/pods/39dada78-d9cc-11e7-870d-3c970e298d91/volumes/kubernetes.io~nfs/nfs-test: directory not empty
And it's true that the directory is not empty: it is unmounted but still contains a subpath directory!
One explanation of such behavior:
Other logs:
Dec 5 15:57:08 ASRock kubelet[2941]: I1205 15:57:08.333877 2941 reconciler.go:212] operationExecutor.VerifyControllerAttachedVolume started for volume "nfs-mtkylje2oc4xlju1ls9rdwjlcmxhyi1ydw" (UniqueName: "kubernetes.io/nfs/005b4bb9-da18-11e7-870d-3c970e298d91-nfs-mtkylje2oc4xlju1ls9rdwjlcmxhyi1ydw") pod "test-df5d868fc-sclj5" (UID: "005b4bb9-da18-11e7-870d-3c970e298d91")
Dec 5 15:57:08 ASRock systemd[1]: Started Kubernetes transient mount for /var/lib/kubelet/pods/005b4bb9-da18-11e7-870d-3c970e298d91/volumes/kubernetes.io~nfs/nfs-mtkylje2oc4xlju1ls9rdwjlcmxhyi1ydw.
Dec 5 15:57:12 ASRock kubelet[2941]: I1205 15:57:12.266404 2941 reconciler.go:186] operationExecutor.UnmountVolume started for volume "nfs-mtkylje2oc4xlju1ls9rdwjlcmxhyi1ydw" (UniqueName: "kubernetes.io/nfs/005b4bb9-da18-11e7-870d-3c970e298d91-nfs-mtkylje2oc4xlju1ls9rdwjlcmxhyi1ydw") pod "005b4bb9-da18-11e7-870d-3c970e298d91" (UID: "005b4bb9-da18-11e7-870d-3c970e298d91")
Dec 5 15:57:12 ASRock kubelet[2941]: E1205 15:57:12.387179 2941 nestedpendingoperations.go:264] Operation for "\"kubernetes.io/nfs/005b4bb9-da18-11e7-870d-3c970e298d91-nfs-mtkylje2oc4xlju1ls9rdwjlcmxhyi1ydw\" (\"005b4bb9-da18-11e7-870d-3c970e298d91\")" failed. No retries permitted until 2017-12-05 15:57:12.887062059 -0800 PST (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "nfs-mtkylje2oc4xlju1ls9rdwjlcmxhyi1ydw" (UniqueName: "kubernetes.io/nfs/005b4bb9-da18-11e7-870d-3c970e298d91-nfs-mtkylje2oc4xlju1ls9rdwjlcmxhyi1ydw") pod "005b4bb9-da18-11e7-870d-3c970e298d91" (UID: "005b4bb9-da18-11e7-870d-3c970e298d91") : remove /var/lib/kubelet/pods/005b4bb9-da18-11e7-870d-3c970e298d91/volumes/kubernetes.io~nfs/nfs-mtkylje2oc4xlju1ls9rdwjlcmxhyi1ydw: directory not empty
Somehow some cleanup process ((dswp *desiredStateOfWorldPopulator) findAndRemoveDeletedPods()?) starts unmounting the volumes while the pod is still in the initialization state.
Dec 6 14:40:20 ASRock kubelet[15875]: I1206 14:40:20.620655 15875 kubelet_pods.go:886] Pod "test-84cd5ff8dc-kpv7b_4281-kuberlab-test(6e99a8df-dad6-11e7-b35c-3c970e298d91)" is terminated, but some volumes have not been cleaned up
Dec 6 14:40:20 ASRock kubelet[15875]: I1206 14:40:20.686449 15875 kubelet_pods.go:1730] Orphaned pod "6e99a8df-dad6-11e7-b35c-3c970e298d91" found, but volumes not yet removed
Dec 6 14:40:20 ASRock kubelet[15875]: I1206 14:40:20.790719 15875 kuberuntime_container.go:100] Generating ref for container test: &v1.ObjectReference{Kind:"Pod", Namespace:"4281-kuberlab-test", Name:"test-84cd5ff8dc-kpv7b", UID:"6e99a8df-dad6-11e7-b35c-3c970e298d91", APIVersion:"v1", ResourceVersion:"2639758", FieldPath:"spec.containers{test}"}
Dec 6 14:40:20 ASRock kubelet[15875]: I1206 14:40:20.796643 15875 docker_service.go:407] Setting cgroup parent to: "/kubepods/burstable/pod6e99a8df-dad6-11e7-b35c-3c970e298d91"
These initializations and deletions are running concurrently.
To reproduce the bug, I have to start about 10 deployments (tested on a single node) and then immediately delete/update them. Apparently the mount operations just aren't fast enough.
Affected by the same bug on GKE. Is there a known workaround for this issue? --now does not work.
This bug is fixed in my fork, but I don't know whether it will get merged by the kubernetes team.
@dreyk Could you share more details about what you found out about this bug and how you fixed it, so the storage team can take a look? Thanks!
@gm42 On GKE I was able to work around this issue manually by running
docker ps | grep {pod name}
to get the Docker container ID, then docker rm -f {container id}
On GKE, upgrading the nodes helped.
I have the same bug on a local cluster set up with kubeadm.
docker ps | grep {pod name}
on the node shows nothing, yet the pods are stuck in Terminating state. I currently have two pods in that state.
What can I do to delete these pods forcefully? Or rename them? I can't start another pod with the same name. Thanks!
Found the reason on my 1.7.2 cluster:
another monitoring program mounts the root fs /,
and the root fs contains /var/lib/kubelet/pods/ddc66e10-0711-11e8-b905-6c92bf70b164/volumes/kubernetes.io~secret/default-token-bnttf
so when the kubelet deletes the pod, it can't release the volume, and the message is:
device or resource busy
Steps:
1) sudo journalctl -u kubelet
This helps find the error message.
2) sudo docker inspect the container
Find io.kubernetes.pod.uid": "ddc66e10-0711-11e8-b905-6c92bf70b164"
and
HostConfig -> Binds -> "/var/lib/kubelet/pods/ddc66e10-0711-11e8-b905-6c92bf70b164/volumes/kubernetes.io~secret/default-token-bnttf:/var/run/secrets/kubernetes.io/serviceaccount:ro"
3) grep -l ddc66e10-0711-11e8-b905-6c92bf70b164 /proc/*/mountinfo
/proc/90225/mountinfo
5) ps aux | grep 90225
root 90225 1.3 0.0 2837164 42580 ? Ssl Feb01 72:40 ./monitor_program
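The grep-through-/proc steps above can be sketched as a small helper. This is only a sketch: it assumes the pod UID from the kubelet error is passed in, and PROC_ROOT is a hypothetical override added here so the logic can be exercised off-node (on a real node, leave it at /proc):

```shell
# find_mount_holders <pod-uid>: print the PIDs of processes whose mount
# table still references the pod's volume directory (steps 3-5 above).
# PROC_ROOT is a hypothetical override for testing outside a real node.
find_mount_holders() {
    pod_uid=$1
    proc_root=${PROC_ROOT:-/proc}
    grep -l "$pod_uid" "$proc_root"/[0-9]*/mountinfo 2>/dev/null |
    while read -r info; do
        # /proc/<pid>/mountinfo -> <pid>
        basename "$(dirname "$info")"
    done
}

# On a node you would then follow up with step 5, e.g.:
#   find_mount_holders ddc66e10-0711-11e8-b905-6c92bf70b164 | xargs -r ps -fp
```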
I have the same bug on 1.7.2:
operationExecutor.UnmountVolume started for volume "default-token-bnttf" (UniqueName: "kubernetes.io/secret/ddc66e10-0711-11e8-b905-6c92bf70b164-default-token-bnttf") pod "ddc66e10-0711-11e8-b905-6c92bf70b164"
kubelet[94382]: E0205 11:35:50.509169 94382 nestedpendingoperations.go:262] Operation for "\"kubernetes.io/secret/ddc66e10-0711-11e8-b905-6c92bf70b164-default-token-bnttf\" (\"ddc66e10-0711-11e8-b905-6c92bf70b164\")" failed. No retries permitted until 2018-02-05 11:37:52.509148953 +0800 CST (durationBeforeRetry 2m2s). Error: UnmountVolume.TearDown failed for volume "default-token-bnttf" (UniqueName: "kubernetes.io/secret/ddc66e10-0711-11e8-b905-6c92bf70b164-default-token-bnttf") pod "ddc66e10-0711-11e8-b905-6c92bf70b164" (UID: "ddc66e10-0711-11e8-b905-6c92bf70b164") : remove /var/lib/kubelet/pods/ddc66e10-0711-11e8-b905-6c92bf70b164/volumes/kubernetes.io~secret/default-token-bnttf: device or resource busy
When I restart the Docker service, the lock is released and the pods get deleted within a few minutes. This is a bug. Using Docker 17.03.
Same issue on Azure, Kube 1.8.7.
Just happened to us a few minutes ago on 1.8.9 — is anyone looking into fixing this? Restarting docker helps, but it's a bit ridiculous.
This is also happening to me on the latest 1.9.4 release on GKE. For now I'm doing this:
kubectl delete pod NAME --grace-period=0 --force
Same problem here on GKE 1.9.4-gke.1.
It seems to be related to volume mounts.
It happens every time with filebeat DaemonSets set up as described here:
https://github.com/elastic/beats/tree/master/deploy/kubernetes/filebeat
Kubelet logs show this:
Mar 23 19:44:16 gke-testing-c2m4-1-97b57429-40jp kubelet[1361]: I0323 19:44:16.380949 1361 reconciler.go:191] operationExecutor.UnmountVolume started for volume "config" (UniqueName: "kubernetes.io/configmap/9a5f1519-2d39-11e8-bec8-42010a8400f3-config") pod "9a5f1519-2d39-11e8-bec8-42010a8400f3" (UID: "9a5f1519-2d39-11e8-bec8-42010a8400f3")
Mar 23 19:44:16 gke-testing-c2m4-1-97b57429-40jp kubelet[1361]: E0323 19:44:16.382032 1361 nestedpendingoperations.go:263] Operation for "\"kubernetes.io/configmap/9a5f1519-2d39-11e8-bec8-42010a8400f3-config\" (\"9a5f1519-2d39-11e8-bec8-42010a8400f3\")" failed. No retries permitted until 2018-03-23 19:44:32.381982706 +0000 UTC m=+176292.263058344 (durationBeforeRetry 16s). Error: "error cleaning subPath mounts for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9a5f1519-2d39-11e8-bec8-42010a8400f3-config\") pod \"9a5f1519-2d39-11e8-bec8-42010a8400f3\" (UID: \"9a5f1519-2d39-11e8-bec8-42010a8400f3\") : error checking /var/lib/kubelet/pods/9a5f1519-2d39-11e8-bec8-42010a8400f3/volume-subpaths/config/filebeat/0 for mount: lstat /var/lib/kubelet/pods/9a5f1519-2d39-11e8-bec8-42010a8400f3/volume-ubpaths/config/filebeat/0/..: not a directory"
kubectl delete pod NAME --grace-period=0 --force
doesn't work either.
Restarting the kubelet doesn't work either.
Same problem on GKE 1.9.4-gke.1.
It only happens with one specific filebeat DaemonSet; recreating all the pods has no effect, it just keeps happening.
Same issue for us on GKE 1.9.4-gke.1 like @Tapppi — the pods were removed from the docker daemon on the host node, but kubernetes had them stuck in TERMINATING.
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal SuccessfulMountVolume 43m kubelet, gke-delivery-platform-custom-pool-c9b9fe86-fgvh MountVolume.SetUp succeeded for volume "data"
Normal SuccessfulMountVolume 43m kubelet, gke-delivery-platform-custom-pool-c9b9fe86-fgvh MountVolume.SetUp succeeded for volume "varlibdockercontainers"
Normal SuccessfulMountVolume 43m kubelet, gke-delivery-platform-custom-pool-c9b9fe86-fgvh MountVolume.SetUp succeeded for volume "prospectors"
Normal SuccessfulMountVolume 43m kubelet, gke-delivery-platform-custom-pool-c9b9fe86-fgvh MountVolume.SetUp succeeded for volume "config"
Normal SuccessfulMountVolume 43m kubelet, gke-delivery-platform-custom-pool-c9b9fe86-fgvh MountVolume.SetUp succeeded for volume "filebeat-token-v74k6"
Normal Pulled 43m kubelet, gke-delivery-platform-custom-pool-c9b9fe86-fgvh Container image "docker.elastic.co/beats/filebeat:6.1.2" already present on machine
Normal Created 43m kubelet, gke-delivery-platform-custom-pool-c9b9fe86-fgvh Created container
Normal Started 43m kubelet, gke-delivery-platform-custom-pool-c9b9fe86-fgvh Started container
Normal Killing <invalid> kubelet, gke-delivery-platform-custom-pool-c9b9fe86-fgvh Killing container with id docker://filebeat:Need to kill Pod
Something new just happened for us: when I force-deleted a stuck pod with kubectl delete pod NAME --grace-period=0 --force, the node that hosted that pod went unhealthy. We're running docker 17-12CE, and restarting the docker daemon on that box released the lock on the node.
@zackify @nodefactory-bk @Tapppi @Stono if you're seeing this issue on 1.9.4-gke.1, it's probably caused by https://github.com/kubernetes/kubernetes/issues/61178
IIUC, the original issue in this bug is related to a containerized kubelet setup, which is different.
Btw, creating a new node pool on version v1.9.3-gke.0 was our workaround for this, since v1.9.5 hasn't rolled out to gke yet and it's already Easter over here.
Can anyone confirm whether this is fixed in a version later than 1.9.3? We're seeing serious problems because of this behaviour, and restarting docker every time it happens is far from great.
For me it was fixed in 1.9.6.
Okay, thanks @Stono. This is how we run our kubelet:
#!/bin/bash
/usr/bin/docker run \
--net=host \
--pid=host \
--privileged \
--name=kubelet \
--restart=on-failure:5 \
--memory={{ kubelet_memory_limit|regex_replace('Mi', 'M') }} \
--cpu-shares={{ kubelet_cpu_limit|regex_replace('m', '') }} \
-v /dev:/dev:rw \
-v /etc/cni:/etc/cni:ro \
-v /opt/cni:/opt/cni:ro \
-v /etc/ssl:/etc/ssl:ro \
-v /etc/resolv.conf:/etc/resolv.conf \
{% for dir in ssl_ca_dirs -%}
-v {{ dir }}:{{ dir }}:ro \
{% endfor -%}
-v /:/rootfs:ro,shared \
-v /sys:/sys:ro \
-v /var/lib/docker:/var/lib/docker:rw,shared \
-v /var/log:/var/log:rw,shared \
-v /var/lib/kubelet:/var/lib/kubelet:rw,shared \
-v /var/lib/cni:/var/lib/cni:rw,shared \
-v /var/run:/var/run:rw,shared \
-v /etc/kubernetes:/etc/kubernetes:ro \
-v /etc/os-release:/etc/os-release:ro \
{{ hyperkube_image_repo }}:{{ hyperkube_image_tag}} \
./hyperkube kubelet --containerized \
"$@"
Is this okay? Is anyone else using something similar?
I spoke too soon.
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Killing 4m kubelet, gke-delivery-platform-custom-pool-560b2b96-gcmb Killing container with id docker://filebeat:Need to kill Pod
Brutal way, but I had to destroy it:
⯠kks delete pod filebeat-x56v8 --force --grace-period 0
warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
pod "filebeat-x56v8" deleted
@Stono which
We hit this issue on 1.9.6 on an Azure AKS managed cluster.
For now I'm using this workaround to select and delete all the stuck pods (as I have heaps of them terminating in my dev/scratch cluster):
kubectl get pods | awk '$3=="Terminating" {print "kubectl delete pod " $1 " --grace-period=0 --force"}' | xargs -0 bash -c
Ran into this as well, on both our Azure and AWS clusters in May — the workaround was provided by Mike Elliot:
https://jira.onap.org/browse/OOM-946
ubuntu@ip-10-0-0-22:~$ kubectl get pods --all-namespaces
NAMESPACE NAME READY STATUS RESTARTS AGE
kube-system heapster-76b8cd7b5-4r88h 1/1 Running 0 25d
kube-system kube-dns-5d7b4487c9-s4rsg 3/3 Running 0 25d
kube-system kubernetes-dashboard-f9577fffd-298r6 1/1 Running 0 25d
kube-system monitoring-grafana-997796fcf-wtz7n 1/1 Running 0 25d
kube-system monitoring-influxdb-56fdcd96b-2phd2 1/1 Running 0 25d
kube-system tiller-deploy-cc96d4f6b-jzqmz 1/1 Running 0 25d
onap dev-sms-857f6dbd87-pds58 0/1 Terminating 0 3h
onap dev-vfc-zte-sdnc-driver-5b6c7cbd6b-5vdvp 0/1 Terminating 0 3h
ubuntu@ip-10-0-0-22:~$ kubectl delete pod dev-vfc-zte-sdnc-driver-5b6c7cbd6b-5vdvp -n onap --grace-period=0 --force
warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
pod "dev-vfc-zte-sdnc-driver-5b6c7cbd6b-5vdvp" deleted
ubuntu@ip-10-0-0-22:~$ kubectl get pods --all-namespaces
NAMESPACE NAME READY STATUS RESTARTS AGE
kube-system heapster-76b8cd7b5-4r88h 1/1 Running 0 25d
kube-system kube-dns-5d7b4487c9-s4rsg 3/3 Running 0 25d
kube-system kubernetes-dashboard-f9577fffd-298r6 1/1 Running 0 25d
kube-system monitoring-grafana-997796fcf-wtz7n 1/1 Running 0 25d
kube-system monitoring-influxdb-56fdcd96b-2phd2 1/1 Running 0 25d
kube-system tiller-deploy-cc96d4f6b-jzqmz 1/1 Running 0 25d
onap dev-sms-857f6dbd87-pds58 0/1 Terminating 0 3h
ubuntu@ip-10-0-0-22:~$ kubectl delete pod dev-sms-857f6dbd87-pds58 -n onap --grace-period=0 --force
warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
pod "dev-sms-857f6dbd87-pds58" deleted
ubuntu@ip-10-0-0-22:~$ kubectl get pods --all-namespaces
NAMESPACE NAME READY STATUS RESTARTS AGE
kube-system heapster-76b8cd7b5-4r88h 1/1 Running 0 25d
kube-system kube-dns-5d7b4487c9-s4rsg 3/3 Running 0 25d
kube-system kubernetes-dashboard-f9577fffd-298r6 1/1 Running 0 25d
kube-system monitoring-grafana-997796fcf-wtz7n 1/1 Running 0 25d
kube-system monitoring-influxdb-56fdcd96b-2phd2 1/1 Running 0 25d
kube-system tiller-deploy-cc96d4f6b-jzqmz 1/1 Running 0 25d
Not sure if this is the same issue, but we noticed this behaviour after upgrading from 1.9.3 to 1.10.1.
Apr 23 08:21:11 int-kube-01 kubelet[13018]: I0423 08:21:11.106779 13018 reconciler.go:181] operationExecutor.UnmountVolume started for volume "dev-static" (UniqueName: "kubernetes.io/glusterfs/ad8fabbe-4449-11e8-b21a-a2bfb3c62d0f-dev-static") pod "ad8fabbe-4449-11e8-b21a-a2bfb3c62d0f" (UID: "ad8fabbe-4449-11e8-b21a-a2bfb3c62d0f")
Apr 23 08:21:11 int-kube-01 kubelet[13018]: E0423 08:21:11.122027 13018 nestedpendingoperations.go:267] Operation for "\"kubernetes.io/glusterfs/ad8fabbe-4449-11e8-b21a-a2bfb3c62d0f-dev-static\" (\"ad8fabbe-4449-11e8-b21a-a2bfb3c62d0f\")" failed. No retries permitted until 2018-04-23 08:23:13.121821027 +1000 AEST m=+408681.605939042 (durationBeforeRetry 2m2s). Error: "UnmountVolume.TearDown failed for volume \"dev-static\" (UniqueName: \"kubernetes.io/glusterfs/ad8fabbe-4449-11e8-b21a-a2bfb3c62d0f-dev-static\") pod \"ad8fabbe-4449-11e8-b21a-a2bfb3c62d0f\" (UID: \"ad8fabbe-4449-11e8-b21a-a2bfb3c62d0f\") : Unmount failed: exit status 32\nUnmounting arguments: /var/lib/kubelet/pods/ad8fabbe-4449-11e8-b21a-a2bfb3c62d0f/volumes/kubernetes.io~glusterfs/dev-static\nOutput: umount: /var/lib/kubelet/pods/ad8fabbe-4449-11e8-b21a-a2bfb3c62d0f/volumes/kubernetes.io~glusterfs/dev-static: target is busy.\n (In some cases useful info about processes that use\n the device is found by lsof(8) or fuser(1))\n\n"
lsof does indeed show that the directory under the glusterfs volume is still in use:
glusterfs 71570 root 10u DIR 0,264 4096 9380607748984626555 /var/lib/kubelet/pods/ad8fabbe-4449-11e8-b21a-a2bfb3c62d0f/volumes/kubernetes.io~glusterfs/dev-static/subpathhere
glusterti 71570 71571 root 10u DIR 0,264 4096 9380607748984626555 /var/lib/kubelet/pods/ad8fabbe-4449-11e8-b21a-a2bfb3c62d0f/volumes/kubernetes.io~glusterfs/dev-static/subpathhere
glustersi 71570 71572 root 10u DIR 0,264 4096 9380607748984626555 /var/lib/kubelet/pods/ad8fabbe-4449-11e8-b21a-a2bfb3c62d0f/volumes/kubernetes.io~glusterfs/dev-static/subpathhere
glusterme 71570 71573 root 10u DIR 0,264 4096 9380607748984626555 /var/lib/kubelet/pods/ad8fabbe-4449-11e8-b21a-a2bfb3c62d0f/volumes/kubernetes.io~glusterfs/dev-static/subpathhere
glustersp 71570 71574 root 10u DIR 0,264 4096 9380607748984626555 /var/lib/kubelet/pods/ad8fabbe-4449-11e8-b21a-a2bfb3c62d0f/volumes/kubernetes.io~glusterfs/dev-static/subpathhere
glustersp 71570 71575 root 10u DIR 0,264 4096 9380607748984626555 /var/lib/kubelet/pods/ad8fabbe-4449-11e8-b21a-a2bfb3c62d0f/volumes/kubernetes.io~glusterfs/dev-static/subpathhere
glusterep 71570 71579 root 10u DIR 0,264 4096 9380607748984626555 /var/lib/kubelet/pods/ad8fabbe-4449-11e8-b21a-a2bfb3c62d0f/volumes/kubernetes.io~glusterfs/dev-static/subpathhere
glusterio 71570 71580 root 10u DIR 0,264 4096 9380607748984626555 /var/lib/kubelet/pods/ad8fabbe-4449-11e8-b21a-a2bfb3c62d0f/volumes/kubernetes.io~glusterfs/dev-static/subpathhere
glusterep 71570 71581 root 10u DIR 0,264 4096 9380607748984626555 /var/lib/kubelet/pods/ad8fabbe-4449-11e8-b21a-a2bfb3c62d0f/volumes/kubernetes.io~glusterfs/dev-static/subpathhere
glusterep 71570 71582 root 10u DIR 0,264 4096 9380607748984626555 /var/lib/kubelet/pods/ad8fabbe-4449-11e8-b21a-a2bfb3c62d0f/volumes/kubernetes.io~glusterfs/dev-static/subpathhere
glusterep 71570 71583 root 10u DIR 0,264 4096 9380607748984626555 /var/lib/kubelet/pods/ad8fabbe-4449-11e8-b21a-a2bfb3c62d0f/volumes/kubernetes.io~glusterfs/dev-static/subpathhere
glusterep 71570 71584 root 10u DIR 0,264 4096 9380607748984626555 /var/lib/kubelet/pods/ad8fabbe-4449-11e8-b21a-a2bfb3c62d0f/volumes/kubernetes.io~glusterfs/dev-static/subpathhere
glusterep 71570 71585 root 10u DIR 0,264 4096 9380607748984626555 /var/lib/kubelet/pods/ad8fabbe-4449-11e8-b21a-a2bfb3c62d0f/volumes/kubernetes.io~glusterfs/dev-static/subpathhere
glusterep 71570 71586 root 10u DIR 0,264 4096 9380607748984626555 /var/lib/kubelet/pods/ad8fabbe-4449-11e8-b21a-a2bfb3c62d0f/volumes/kubernetes.io~glusterfs/dev-static/subpathhere
glusterep 71570 71587 root 10u DIR 0,264 4096 9380607748984626555 /var/lib/kubelet/pods/ad8fabbe-4449-11e8-b21a-a2bfb3c62d0f/volumes/kubernetes.io~glusterfs/dev-static/subpathhere
glusterfu 71570 71592 root 10u DIR 0,264 4096 9380607748984626555 /var/lib/kubelet/pods/ad8fabbe-4449-11e8-b21a-a2bfb3c62d0f/volumes/kubernetes.io~glusterfs/dev-static/subpathhere
glusterfu 71570 71593 root 10u DIR 0,264 4096 9380607748984626555 /var/lib/kubelet/pods/ad8fabbe-4449-11e8-b21a-a2bfb3c62d0f/volumes/kubernetes.io~glusterfs/dev-static/subpathhere
This was all fine on 1.9.3, so it seems a fix for this issue has broken our use case :(
@ross-w this signature looks different from the others. Could you open a new issue and include your spec as well?
Any updates on these issues?
In our case (Kubernetes 1.9.7, docker 17.03), after nodes run out of memory, pods get rescheduled and many end up stuck in Terminating. Eventually we have lots of ghost pods in the kubernetes dashboard, and in the deployments tab we can see deployments at e.g. 4/1 pods.
Restarting the kubelet, or force-killing all the pods in the namespace, helps, but it's a very poor solution.
@Adiqq In my case it was a Docker issue. Look at journalctl -u kubelet -f on one of your nodes. I had a message along the lines of "Cannot kill container".
To fix it I restarted docker on each node. During startup Docker cleaned up the containers that were in a broken state, and all those stale pods were removed.
Had this yesterday on 1.9.7, with a pod stuck in Terminating; I just needed to force-kill it and had to run --force --grace-period=0 to get rid of it.
I just got this on 1.9.7-gke.0 too.
There was no problem on 1.9.6-gke.1,
but I did have it on 1.9.4 and 1.9.5.
The pods that went down have a PV attached.
Killing the pods or the deploy, or deleting them, has the same effect.
Restarting the kubelet on the affected node didn't work; I had to restart the whole node, since restarting the kubelet wasn't enough.
In the meantime the PV was shown as already mounted elsewhere, so the pod couldn't be scheduled on any other node.
@Stono @nodefactory-bk Could you take a look at the kubelet logs on the affected nodes and see whether there are any detailed logs that might point to the problem?
cc @dashpole
One of our apps got stuck in Terminating.
This is on 1.9.7-gke.1.
kubectl describe pod, with secrets redacted, looks like this:
Name: sharespine-cloud-6b78cbfb8d-xcbh5
Namespace: shsp-cloud-dev
Node: gke-testing-std4-1-0f83e7c0-qrxg/10.132.0.4
Start Time: Tue, 22 May 2018 11:14:22 +0200
Labels: app=sharespine-cloud
pod-template-hash=2634769648
Annotations: <none>
Status: Terminating (expires Wed, 23 May 2018 10:02:01 +0200)
Termination Grace Period: 60s
IP: 10.40.7.29
Controlled By: ReplicaSet/sharespine-cloud-6b78cbfb8d
Containers:
sharespine-cloud:
Container ID: docker://4cf402b5dc3ea728fcbff87b57e0ec504093ea3cf7277f6ca83fde726a4bba48
Image: ...
Image ID: ...
Ports: 9000/TCP, 9500/TCP
State: Running
Started: Tue, 22 May 2018 11:16:36 +0200
Ready: False
Restart Count: 0
Limits:
memory: 1500M
Requests:
cpu: 500m
memory: 1024M
Liveness: http-get http://:9000/ delay=240s timeout=1s period=30s #success=1 #failure=3
Readiness: http-get http://:9000/ delay=30s timeout=1s period=10s #success=1 #failure=3
Environment Variables from:
sharespine-cloud-secrets Secret Optional: false
Environment:
APP_NAME: sharespine-cloud
APP_ENV: shsp-cloud-dev (v1:metadata.namespace)
JAVA_XMS: 128M
JAVA_XMX: 1024M
Mounts:
/home/app/sharespine-cloud-home/ from sharespine-cloud-home (rw)
/var/run/secrets/kubernetes.io/serviceaccount from default-token-x7vzr (ro)
sharespine-cloud-elker:
Container ID: docker://88a5a2bfd6804b5f40534ecdb6953771ac3181cf12df407baa81a34a7215d142
Image: ...
Image ID: ...
Port: <none>
State: Running
Started: Tue, 22 May 2018 11:16:36 +0200
Ready: True
Restart Count: 0
Limits:
memory: 200Mi
Requests:
cpu: 10m
memory: 100Mi
Environment Variables from:
sharespine-cloud-secrets Secret Optional: false
Environment:
APP_NAME: sharespine-cloud
APP_ENV: shsp-cloud-dev (v1:metadata.namespace)
ELASTICSEARCH_LOGBACK_PATH: /home/app/sharespine-cloud-home/logs/stash/stash.json
ELASTICSEARCH_LOGBACK_INDEX: cloud-dev
Mounts:
/home/app/sharespine-cloud-home/ from sharespine-cloud-home (rw)
/var/run/secrets/kubernetes.io/serviceaccount from default-token-x7vzr (ro)
Conditions:
Type Status
Initialized True
Ready False
PodScheduled True
Volumes:
sharespine-cloud-home:
Type: PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
ClaimName: sharespine-cloud-home
ReadOnly: false
default-token-x7vzr:
Type: Secret (a volume populated by a Secret)
SecretName: default-token-x7vzr
Optional: false
QoS Class: Burstable
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute for 300s
node.kubernetes.io/unreachable:NoExecute for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Killing 20m kubelet, gke-testing-std4-1-0f83e7c0-qrxg Killing container with id docker://sharespine-cloud-elker:Need to kill Pod
Normal Killing 20m kubelet, gke-testing-std4-1-0f83e7c0-qrxg Killing container with id docker://sharespine-cloud:Need to kill Pod
Warning FailedKillPod 18m kubelet, gke-testing-std4-1-0f83e7c0-qrxg error killing pod: failed to "KillPodSandbox" for "83d05e96-5da0-11e8-ba51-42010a840176" with KillPodSandboxError: "rpc error: code = DeadlineExceeded desc = context deadline exceeded"
Warning FailedSync 1m (x53 over 16m) kubelet, gke-testing-std4-1-0f83e7c0-qrxg error determining status: rpc error: code = DeadlineExceeded desc = context deadline exceeded
Not sure where to find kubelet.log on a GKE image node. I've attached what I could find:
kube.log
kubectl -n shsp-cloud-dev delete pod sharespine-cloud-6b78cbfb8d-xcbh5 --force --grace-period 0
killed it and removed it.
It started up fine after that, though it took a little more time than usual to come up.
For the record, this doesn't happen every time with this app — I'd guess about 1 in 4 deploys.
Getting this on k8s 1.9.6 too. When the kubelet can't unmount a Cephfs mount, all pods on the node are left in Terminating forever. I had to reboot the node to recover; restarting the kubelet or docker didn't help.
@tuminoid the Ceph issue sounds different. Could you open a new issue and provide the pod events and kubelet logs for that pod?
FYI, updating the cluster (to k8s v1.10.2) seems to have made this issue go away for us.
The attachment reproduces this for me on GKE:
kubectl version
Client Version: version.Info{Major:"1", Minor:"10", GitVersion:"v1.10.3", GitCommit:"2bba0127d85d5a46ab4b778548be28623b32d0b0", GitTreeState:"clean", BuildDate:"2018-05-21T09:17:39Z", GoVersion:"go1.9.3", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"10+", GitVersion:"v1.10.2-gke.1", GitCommit:"75d2af854b1df023c7ce10a8795b85d3dd1f8d37", GitTreeState:"clean", BuildDate:"2018-05-10T17:23:18Z", GoVersion:"go1.9.3b4", Compiler:"gc", Platform:"linux/amd64"}
Run it, then delete it, and the 'nfs-client' pod is left stuck in deleting. The reason is the pod's hard mount, since the server gets deleted first.
@donbowman for the nfs unmount issue when the nfs server is deleted first, you can set the 'soft' mount option in your StorageClass or PV.
Is there a way to do that? I can set it on the PersistentVolumeClaim, but it doesn't seem to apply.
I don't think a StorageClass applies here (there's no disk underneath; it sits on top of the nfs server).
The issue is on the nfs-client side.
What am I missing?
For nfs PVs, you can set the mountOptions field since 1.8 and specify soft there. If you provision nfs volumes dynamically, you can also set it in StorageClass.mountOptions.
Yes, but it's not a PV that's mounted using NFS.
It's my NFS server container itself.
There's no dynamic provisioning.
This is on Google GCP + GKE. The PVC selects a PV, which is block IO, mounted into the container as ext4 and then re-exported via NFS.
The second set of containers, which mount from nfs-server (which itself owns the PV), don't see it as a PV; they see it as a plain volume, like below.
So there's no way to surface a 'pvc' to this nfs-client mount, and I can't set mount options on it. I can't express it as a StorageClass either.
What am I missing?
apiVersion: apps/v1beta2
kind: Deployment
metadata:
name: nfs-client
labels:
app: nfs-client
spec:
replicas: 1
selector:
matchLabels:
app: nfs-client
strategy:
type: Recreate
template:
metadata:
labels:
app: nfs-client
spec:
containers:
- name: nfs-client
image: busybox:latest
imagePullPolicy: IfNotPresent
command: ["sleep", "3600"]
volumeMounts:
- name: nfs
mountPath: /registry
volumes:
- name: nfs
nfs:
server: nfs-server.default.svc.cluster.local
path: /
@donbowman for the second set of containers that use the nfs mount, you can specify mountOptions in a PV and have them use it through a PVC,
something like this:
apiVersion: v1
kind: PersistentVolume
metadata:
name: nfs-pv
spec:
storageClassName: ""
capacity:
# Capacity doesn't actually matter for nfs
storage: 500G
accessModes:
- ReadWriteMany
mountOptions:
- soft
nfs:
server: nfs-server.default.svc.cluster.local
path: /
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: nfs-claim
spec:
# It's necessary to specify "" as the storageClassName
# so that the default storage class won't be used
storageClassName: ""
volumeName: nfs-pv
accessModes:
- ReadWriteMany
resources:
requests:
storage: 500G
Thanks! That sort of worked (in the sense that it's now a soft mount), but the problem isn't fixed.
The mount (as observed on the node) is indeed soft:
nfs-server.default.svc.cluster.local:/ on /home/kubernetes/containerized_mounter/rootfs/var/lib/kubelet/pods/cbeda204-638d-11e8-9758-42010aa200b4/volumes/kubernetes.io~nfs/nfs-pv type nfs4 (rw,relatime,vers=4.0,rsize=1048576,wsize=1048576,namlen=255,soft,proto=tcp,timeo=600,retrans=2,sec=sys,clientaddr=10.162.0.2,local_lock=none,addr=10.19.241.155)
But after deleting everything, the nfs-client pod is still stuck in Terminating forever.
Attached is the yaml I used. I did a create, waited for everything to come up, checked the file was mounted and readable/writable on the client, then did a delete.
The nfs-server pod was deleted, but the nfs-client was not.
Looking at the node, the mount is still there:
# umount -f /home/kubernetes/containerized_mounter/rootfs/var/lib/kubelet/pods/cbeda204-638d-11e8-9758-42010aa200b4/volumes/kubernetes.io~nfs/nfs-pv
umount: /home/kubernetes/containerized_mounter/rootfs/var/lib/kubelet/pods/cbeda204-638d-11e8-9758-42010aa200b4/volumes/kubernetes.io~nfs/nfs-pv: target is busy
(In some cases useful info about processes that
use the device is found by lsof(8) or fuser(1).)
@donbowman sorry, I was wrong about the soft option. The soft option only prevents filesystem calls from hanging when the server is inaccessible; it doesn't actually help unmount the nfs volume. That would require a force unmount, and there's currently no way to pass that through. For now, you have to either manually clean up those mounts, or delete the pods in the right order (nfs client first, then nfs server).
I tried adding timeo=30 and intr, but the same problem occurs.
It still leaves the node stuck; I have to log in to the node and run umount -f -l on the underlying mount, and after that I can kubectl delete --force --grace-period=0.
Since the mount is made on the pod's behalf, shouldn't it be unmounted automatically on delete (or force-unmounted after a timeout)?
I had so many of these stuck pods that I had to come up with a command to clean up all terminating pods:
kubectl get pods -o json | jq -c '.items[] | select(.metadata.deletionTimestamp) | .metadata.name' | xargs -I '{}' kubectl delete pod --force --grace-period 0 '{}'
I think I'm hitting the same issue with Google's new Filestore.
@donbowman iirc, the issue is that the nfs server pod is terminated before the nfs client pod. If you use Filestore, you don't need an nfs server pod, so as long as you don't delete the entire Filestore instance, this issue shouldn't occur.
Doesn't the same issue occur if you orchestrate the Filestore? E.g. if it's brought up for a specific kubernetes deployment and torn down at the end, the ordering isn't guaranteed.
But I don't think the issue is just ordering: deleting the nfs client pod doesn't unmount at all, it just leaves the mount dangling on the node. So there's a dangling mount regardless of whether the Filestore/server still exists.
When a pod is terminated, we do unmount the volume (assuming the server is still there). If the server is present and the mount is left dangling, then that's a bug.
If you use dynamic provisioning with PVC and PV, then we don't let the PVC (and the underlying storage) be deleted until all the pods referencing the PVC are done using it. If you orchestrate provisioning yourself, you have to make sure not to delete the server until all the pods are done using it.
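The ordering described above can be scripted: delete the nfs client first, poll until its pods are gone, and only then delete the server. A minimal sketch; the deployment names and label selector are hypothetical (matching the yaml earlier in the thread), and `wait_gone` simply polls a command until it prints nothing:

```shell
# wait_gone "<cmd>" [tries]: run <cmd> once per second until its output is
# empty (i.e. the resources are gone) or the attempts run out.
wait_gone() {
    cmd=$1
    tries=${2:-60}
    while [ "$tries" -gt 0 ]; do
        [ -z "$(eval "$cmd")" ] && return 0
        tries=$((tries - 1))
        sleep 1
    done
    return 1
}

# Hypothetical resource names, matching the yaml earlier in the thread:
#   kubectl delete deployment nfs-client
#   wait_gone "kubectl get pods -l app=nfs-client -o name" 120
#   kubectl delete deployment nfs-server
```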
Maybe this is a possible workaround? #65936
Force deletion with kubectl delete po $pod --grace-period=0 --force worked; the --now flag was not working. I'm not sure about #65936, but perhaps pods could be force-killed when they go into the Unknown state.
Same issue on 1.10.5 (pods left Terminating because a file inside the mount can't be unmounted: "device busy"). For me, --grace-period=0 --force gets rid of the pod, but the mount point continues to exist. We eventually ended up with over 90000 mount points, which slowed the cluster down drastically. The workaround was to run a find over those folders, unmount those files recursively, then delete the folders recursively.
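That cleanup can be sketched with the parsing kept as plain text processing: `list_stale_pod_mounts` reads `mount` output and prints every mount target under the kubelet pods directory (the default path seen throughout this thread). Feeding those targets to `umount -f -l` is the destructive half, shown commented out; note it is indiscriminate, so on a live node you would filter to the stuck pod UIDs first:

```shell
# list_stale_pod_mounts: read `mount` output on stdin and print the mount
# targets under the kubelet pods directory. KUBELET_PODS defaults to the
# path seen throughout this thread.
list_stale_pod_mounts() {
    kubelet_pods=${KUBELET_PODS:-/var/lib/kubelet/pods}
    # `mount` lines look like: <src> on <target> type <fstype> (<opts>)
    awk -v root="$kubelet_pods" '$2 == "on" && index($3, root "/") == 1 { print $3 }'
}

# Destructive half -- indiscriminate, so filter to the stuck pod UID first:
#   mount | list_stale_pod_mounts | grep "$POD_UID" |
#       while read -r m; do umount -f -l "$m"; done
```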
In my case, a configmap is mounted with subPath into an existing folder with existing files, overwriting one of the existing files. That worked fine on 1.8.6.
The original post says the pods stay terminating for a few hours; in my case it's days. I've never seen them eventually get cleaned up, except when applying the workarounds above.
Same issue here, caused by a log aggregator (à la fluentd): the pod mounts the /var/lib/docker/containers folder, which holds a lot of mounts:
shm 64.0M 0 64.0M 0% /var/lib/docker/containers/6691cb9460df75579915fd881342931b98b4bfb7a6fbb0733cc6132d7c17710c/shm
shm 64.0M 0 64.0M 0% /var/lib/docker/containers/4cbbdf53ee5122565c6e118a049c93543dcc93bfd586a3456ff4ca98d59810a3/shm
shm 64.0M 0 64.0M 0% /var/lib/docker/containers/b2968b63a7a1f673577e5ada5f2cda50e1203934467b7c6573e21b341d80810a/shm
shm 64.0M 0 64.0M 0% /var/lib/docker/containers/4d54a4eabed68b136b0aa3d385093e4a32424d18a08c7f39f5179440166de95f/shm
shm 64.0M 0 64.0M 0% /var/lib/docker/containers/0e5487465abc2857446940902d9b9754b3447e587eefc2436b2bb78fd4d5ce4d/shm
shm 64.0M 0 64.0M 0% /var/lib/docker/containers/c73ed0942d77bf43f9ba016728834c47339793f9f1f31c4e566d73be492cf859/shm
shm 64.0M 0 64.0M 0% /var/lib/docker/containers/f9ab13f7f145b44beccc40c158287c4cfcc9dc465850f30d691961a2cabcfc14/shm
shm 64.0M 0 64.0M 0% /var/lib/docker/containers/aa449af555702d04f95fed04d09a3f1d5ae38d677484fc6cc9fc6d4b42182820/shm
shm 64.0M 0 64.0M 0% /var/lib/docker/containers/f6608e507348b43ade3faa05d0a11b674c29f2038308f138174e8b7b8233633f/shm
In my case, some pods are deleted properly by kubernetes, but some get stuck in Terminating status.
Possibly related to https://github.com/kubernetes/kubernetes/issues/45688
I had the issue of pods not terminating because a secret was missing. After I created that secret in the namespace, everything went back to normal.
I deleted the stuck pods like this:
user@laptop:~$ kubectl -n storage get pod
NAME READY STATUS RESTARTS AGE
minio-65b869c776-47hql 0/1 Terminating 5 1d
minio-65b869c776-bppl6 0/1 Terminating 33 1d
minio-778f4665cd-btnf5 1/1 Running 0 1h
sftp-775b578d9b-pqk5x 1/1 Running 0 28m
user@laptop:~$ kubectl -n storage delete pod minio-65b869c776-47hql --grace-period 0 --force
pod "minio-65b869c776-47hql" deleted
user@laptop:~$ kubectl -n storage delete pod minio-65b869c776-bppl6 --grace-period 0 --force
pod "minio-65b869c776-bppl6" deleted
user@laptop:~$ kubectl -n storage get pod
NAME READY STATUS RESTARTS AGE
minio-778f4665cd-btnf5 1/1 Running 0 2h
sftp-775b578d9b-pqk5x 1/1 Running 0 30m
user@laptop:~$
I was getting a similar issue running on Azure ACS.
10:12 $ kubectl describe pod -n xxx triggerpipeline-3737304981-nx85k
Name: triggerpipeline-3737304981-nx85k
Namespace: xxx
Node: k8s-agent-d7584a3a-2/10.240.0.6
Start Time: Wed, 27 Jun 2018 15:33:48 +0200
Labels: app=triggerpipeline
pod-template-hash=3737304981
Annotations: kubernetes.io/created-by={"kind":"SerializedReference","apiVersion":"v1","reference":{"kind":"ReplicaSet","namespace":"xxx","name":"triggerpipeline-3737304981","uid":"b91320ff-7a0e-11e8-9e7...
Status: Terminating (expires Fri, 27 Jul 2018 09:00:35 +0200)
Termination Grace Period: 0s
IP:
Controlled By: ReplicaSet/triggerpipeline-3737304981
Containers:
alpine:
Container ID: docker://8443c7478dfe1a57a891b455366ca007fe00415178191a54b0199d246ccbd566
Image: alpine
Image ID: docker-pullable://alpine<strong i="6">@sha256</strong>:e1871801d30885a610511c867de0d6baca7ed4e6a2573d506bbec7fd3b03873f
Port: <none>
Command:
sh
Args:
-c
apk add --no-cache curl && echo "0 */4 * * * curl -v --trace-time http://myapi:80/api/v1/pipeline/start " | crontab - && crond -f
State: Terminated
Exit Code: 0
Started: Mon, 01 Jan 0001 00:00:00 +0000
Finished: Mon, 01 Jan 0001 00:00:00 +0000
Ready: False
Restart Count: 0
Environment: <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from default-token-p9qtw (ro)
Conditions:
Type Status
Initialized True
Ready False
PodScheduled True
Volumes:
default-token-p9qtw:
Type: Secret (a volume populated by a Secret)
SecretName: default-token-p9qtw
Optional: false
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: <none>
Events: <none>
I tried `--now` and also tried setting a longer grace period. For example:
09:00 $ kubectl delete pod -n xxx triggerpipeline-3737304981-nx85k --force --grace-period=0
warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
pod "triggerpipeline-3737304981-nx85k" deleted
Even so, the pod keeps hanging around somewhere, so the corresponding deployment stays stuck.
I am also bothered by messages during deploy events saying these pods need to be killed. By the way, what does that actually mean? Does _Kubernetes_ think it needs to kill them, or do _I_ need to kill them?
This happened to me a few days ago as well. I gave up on deleting it and left the pod as it was. Today it had disappeared, so it seems to have been deleted eventually.
This just happened to me. The `--force --now` solution did not work for me either. Suspiciously, I found the following line in the kubelet logs:
Aug 06 15:25:37 kube-minion-1 kubelet[2778]: W0806 15:25:37.986549 2778 docker_sandbox.go:263] NetworkPlugin cni failed on the status hook for pod "backend-foos-227474871-gzhw0_default": Unexpected command output nsenter: cannot open : No such file or directory
That led me to the following issue:
https://github.com/openshift/origin/issues/15802
I am running on OpenStack, not OpenShift, but I figured it might be related. The advice there was to restart Docker.
After restarting docker, the pods stuck in Terminating disappeared.
I know this is only a workaround, but I am no longer waking up at 3 AM to fix it.
I am not saying you should use this, but it might help a few people.
The sleep matches the pods' `terminationGracePeriodSeconds` (set to 30 seconds); if a pod survives in Terminating longer than that, this cron job hits it with `--force --grace-period=0` and kills it outright:
apiVersion: batch/v1beta1
kind: CronJob
metadata:
  name: stuckpod-restart
spec:
  concurrencyPolicy: Forbid
  successfulJobsHistoryLimit: 1
  failedJobsHistoryLimit: 5
  schedule: "*/1 * * * *"
  jobTemplate:
    spec:
      template:
        spec:
          containers:
          - name: stuckpod-restart
            image: devth/helm:v2.9.1
            args:
            - /bin/sh
            - -c
            - echo "$(date) Job stuckpod-restart Starting"; kubectl get pods --all-namespaces=true | awk '$3=="Terminating" {print "sleep 30; echo "$(date) Killing pod $1"; kubectl delete pod " $1 " --grace-period=0 --force"}'; echo "$(date) Job stuckpod-restart Complete";
          restartPolicy: OnFailure
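Two caveats about the one-liner in that cron job: with `--all-namespaces`, awk sees STATUS as field 4 (NAMESPACE becomes field 1), and the awk program only prints the delete commands, so they still need to be piped to `sh` to actually run. A minimal, hedged sketch of the filtering step (the sample output below is fabricated; on a real cluster you would pipe `kubectl get pods --all-namespaces` in directly):

```shell
# Fabricated sample in the shape of `kubectl get pods --all-namespaces` output.
cat <<'EOF' > /tmp/pods.sample
NAMESPACE   NAME          READY   STATUS        RESTARTS   AGE
default     good-pod-1    1/1     Running       0          1d
storage     stuck-pod-1   0/1     Terminating   5          1d
EOF
# STATUS is field 4 when the NAMESPACE column is present; emit one
# force-delete command per stuck pod (append `| sh` to actually run them).
cmds=$(awk '$4 == "Terminating" {print "kubectl delete pod " $2 " -n " $1 " --grace-period=0 --force"}' /tmp/pods.sample)
echo "$cmds"
```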
I get the same error with Kubernetes v1.10.2. Pods hang in Terminating indefinitely, and the kubelet on the node in question logs the following over and over:
Aug 21 13:25:55 node-09 kubelet[164855]: E0821 13:25:55.149132
164855 nestedpendingoperations.go:267]
Operation for "\"kubernetes.io/configmap/b838409a-a49e-11e8-bdf7-000f533063c0-configmap\"
(\"b838409a-a49e-11e8-bdf7-000f533063c0\")" failed. No retries permitted until 2018-08-21
13:27:57.149071465 +0000 UTC m=+1276998.311766147 (durationBeforeRetry 2m2s). Error: "error
cleaning subPath mounts for volume \"configmap\" (UniqueName:
\"kubernetes.io/configmap/b838409a-a49e-11e8-bdf7-000f533063c0-configmap\") pod
\"b838409a-a49e-11e8-bdf7-000f533063c0\" (UID: \"b838409a-a49e-11e8-bdf7-000f533063c0\")
: error deleting /var/lib/kubelet/pods/b838409a-a49e-11e8-bdf7-000f533063c0/volume-
subpaths/configmap/pod-master/2: remove /var/lib/kubelet/pods/b838409a-a49e-11e8-bdf7-
000f533063c0/volume-subpaths/configmap/pod-master/2: device or resource busy"
I can manually unmount the offending subpath volumes without complaint (Linux does not tell me they are busy). That stops the kubelet from logging the error message. However, the pods are still shown in Terminating, so something is not prompting Kubernetes to resume the cleanup. Periodically restarting Docker to clean up the pods is not really an acceptable solution either, given the disruption it causes to running containers.
Also, the container itself is gone from `docker ps -a` with no evidence it ever existed, so I am not sure this is actually a Docker problem. We are on Docker version 17.03.2-ce.
Update: our nodes were configured to redirect the kubelet root directory to a non-OS volume with a symbolic link (/var/lib/kubelet was a symlink pointing to another directory on another volume). When I reconfigured kubelet to be passed `--root-dir` so that it pointed at the target directory directly rather than through the symlink, and then restarted the kubelet, it cleaned up the volume mounts and cleared the pods stuck in Terminating, without my having to restart Docker.
I experienced this issue for the first time today, while running some pods locally on minikube.
The pods were stuck in `Terminating` because a configmap/secret they mounted as a volume was missing. None of the suggestions/workarounds/solutions posted above worked for me.
One thing I think is worth noting, though: when I ran `kubectl get pods`, I got the list of pods in `Terminating` status, but when I ran `docker ps | grep -i {{pod_name}}`, none of the pods shown as `Terminating` by `kubectl get pods` were running in the minikube VM. I expected `docker ps` to return the containers for the pods stuck in the `Terminating` state, but in reality none of them were running at all, yet `kubectl get pods` kept listing them.
This issue hit me across four deployments. I then switched all the mounts from local volumes to hostpath, and it has not happened since.
> I had an issue where pods would not terminate because a secret was missing. After I created that secret in the namespace, everything went back to normal.

If the namespace is stuck in the Terminating state, how do you create the secret in that namespace?
kubectl delete --all pods --namespace=xxxxx --force --grace-period=0
worked for me.
Do not forget `--grace-period=0`. It matters.
When I use `--force --grace-period=0`, kubectl prints the warning "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely."
Can anyone tell me whether that actually happens?
Actually, when you delete a pod, sometimes the deletion is delayed for one reason or another.
But when you run `kubectl delete` with the `--force --grace-period=0` flags,
the resource object is deleted immediately.
Can you help confirm whether it really is deleted immediately in that case?
Doesn't that mean the warning message is actually inaccurate?
@windoze, specifying the `--force --grace-period=0` options means the pod API object is deleted from the API server immediately. The node's kubelet is responsible for cleaning up the volume mounts and killing the containers. If the kubelet is not running, or has problems while cleaning up the pod, the container may still be running. However, the kubelet should keep trying to clean up the pod whenever possible.
So it means the deletion could take forever if the kubelet is misbehaving?
Is there a way to make sure the pod actually gets deleted?
I am asking because there are some huge pods running in the cluster, and no single node has enough memory to run two instances of them.
If a deletion fails, the node becomes unusable, and if this problem happens a few more times, eventually there will be no node left that can run this pod and the service will go down completely.
In a plain old Docker environment I could forcibly kill a pod with something like `kill -9`, but k8s does not seem to have anything like that.
@windoze, do you know why your pod deletions frequently failed? Was it because the kubelet was not running, or because the kubelet kept hitting errors while trying to kill the container?
Something like this happened several times on my cluster a few months ago. The kubelet was running, but the docker daemon had some problem and got stuck without any error log.
My fix was to log in to the node, kill the container process, and restart the docker daemon.
After some upgrades the problem went away and has never come back.
kubectl delete pods <podname> --force --grace-period=0
worked for me!
@shinebayar-g, the problem with `--force` is that it can mean the container keeps running. It just tells Kubernetes to forget about this pod's containers. A better solution is to SSH into the VM running the pod and investigate what is going on with Docker. Try killing the container manually with `docker kill`, and if that succeeds, try deleting the pod normally again.
@agolomoodysaada, ah, that makes sense. Thanks for the explanation. So I would not really know whether the actual container was truly deleted or not, right?
So, it's the end of 2018, kube 1.12 is out, and... you all still have problems with stuck pods?
I have the same issue; neither `--force --grace-period=0` nor `--force --now` works. The logs are as follows:
root@r15-c70-b03-master01:~# kubectl -n infra-lmat get pod node-exporter-zbfpx
NAME                  READY     STATUS        RESTARTS   AGE
node-exporter-zbfpx   0/1       Terminating   0          4d
root@r15-c70-b03-master01:~# kubectl -n infra-lmat delete pod node-exporter-zbfpx --grace-period=0 --force
warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
pod "node-exporter-zbfpx" deleted
root@r15-c70-b03-master01:~# kubectl -n infra-lmat get pod node-exporter-zbfpx
NAME                  READY     STATUS        RESTARTS   AGE
node-exporter-zbfpx   0/1       Terminating   0          4d
root@r15-c70-b03-master01:~# kubectl -n infra-lmat delete pod node-exporter-zbfpx --now --force
pod "node-exporter-zbfpx" deleted
root@r15-c70-b03-master01:~# kubectl -n infra-lmat get pod node-exporter-zbfpx
NAME                  READY     STATUS        RESTARTS   AGE
node-exporter-zbfpx   0/1       Terminating   0          4d
root@r15-c70-b03-master01:~#
I also tried editing the pod and deleting the finalizers section from its metadata, but that failed too.
With kubectl 1.13 alpha and Docker for Desktop on macOS, I can still reproduce this 100% of the time (with the same resource definitions). By reproducible I mean that the only way to fix it seems to be to factory-reset Docker for Mac, and after setting the cluster up again with the same resources (deployment scripts), the same cleanup script fails again.
ãªããããé¢é£ããã®ãããããŸããããç§ã®ã¯ãªãŒã³ã¢ããã¹ã¯ãªããã¯æ¬¡ã®ããã«ãªããŸãã
#!/usr/bin/env bash
set -e
function usage() {
echo "Usage: $0 <containers|envs|volumes|all>"
}
if [ "$1" = "--help" ] || [ "$1" = "-h" ] || [ "$1" = "help" ]; then
echo "$(usage)"
exit 0
fi
if [ $# -lt 1 ] || [ $# -gt 1 ]; then
>&2 echo "$(usage)"
exit 1
fi
MODE=$1
function join_with {
local IFS="$1"
shift
echo "$*"
}
resources=()
if [ "$MODE" = "containers" ] || [ "$MODE" = "all" ]; then
resources+=(daemonsets replicasets statefulsets services deployments pods rc)
fi
if [ "$MODE" = "envs" ] || [ "$MODE" = "all" ]; then
resources+=(configmaps secrets)
fi
if [ "$MODE" = "volumes" ] || [ "$MODE" = "all" ]; then
resources+=(persistentvolumeclaims persistentvolumes)
fi
kubectl delete $(join_with , "${resources[@]}") --all
Since the cluster runs locally, I can verify that there are no containers running in Docker; it is only kubectl that hangs on their termination. When I `describe` the pods, the status shows up as `Status: Terminating (lasts <invalid>)`.
It happened to me again. I was trying to install percona pmm-server with an NFS share; the software did not come up, so I deleted it, and this happened. (Persistent claims did not work with this software.) I guess I will pull out the good old `kubectl delete pods <podname> --force --grace-period=0` one more time. But the question is, how do I find out which node this pod lives on?
@shinebayar-g, SSH into the VM that hosted the pod and run `docker ps`.
Well, it was not there. I have a few VMs, so I was asking how to find out which one is the right one. :)
@shinebayar-g, this may work:
kubectl describe pod/some-pod-name | grep '^Node:'
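Building on that, the `Node:` line has the form `hostname/IP`, so the host to SSH into can be extracted mechanically. A small sketch (the sample line is fabricated from the describe output quoted earlier in this thread):

```shell
# A fabricated `Node:` line in the format that `kubectl describe pod` prints.
line='Node:           k8s-agent-d7584a3a-2/10.240.0.6'
# Field 2 is "hostname/IP"; split on "/" to keep just the hostname to ssh into.
node=$(echo "$line" | awk '{split($2, a, "/"); print a[1]}')
echo "$node"
```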
Same issue here. `docker ps` shows that the container is in a dead status, rather than Exited (0) as expected.
When I deleted the container manually, I got the following Docker log entry:
level=warning msg="container kill failed because of 'container not found' or 'no such process': Cannot kill container
Unfortunately the line got cut off there, but the issue was that the process no longer existed.
Still plagued by this issue on k8s v1.11.0. Here is my checklist for cleaning up such pods:
- The pods do not necessarily show up in `kubectl get`. Some of them are known only to the kubelet that was running the pod, so you have to follow the log stream locally.
- `kubectl edit` the pod and remove the `- foregroundDeletion` entry under `finalizers:`.
Two more hints:
- You can leave the blocking `kubectl delete` command running in another window to monitor progress (even for a pod you have already "deleted" several times). `kubectl delete` returns as soon as the stuck resource is finally released.
Faced this today as well.
What was done:
- `kubectl get pods` showed the stuck container as `0/1 Terminating` (previously `1/1 Terminating`).
- Removed the `foregroundDeletion` entry from the `finalizers:` section (`$ kubectl edit pod/name`) -> the container disappeared from the pod list.

kubectl version:
Client Version: version.Info{Major:"1", Minor:"11", GitVersion:"v1.11.0", GitCommit:"91e7b4fd31fcd3d5f436da26c980becec37ceefe", GitTreeState:"clean", BuildDate:"2018-06-27T20:17:28Z", GoVersion:"go1.10.2", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"10", GitVersion:"v1.10.3", GitCommit:"2bba0127d85d5a46ab4b778548be28623b32d0b0", GitTreeState:"clean", BuildDate:"2018-05-21T09:05:37Z", GoVersion:"go1.9.3", Compiler:"gc", Platform:"linux/amd64"}
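The finalizer removal described above can also be done non-interactively with a JSON merge patch instead of `kubectl edit`. A sketch (the `kubectl patch` line is commented out because it needs a live cluster, and the pod/namespace names are placeholders):

```shell
# JSON merge patch that clears all finalizers on a pod's metadata.
PATCH='{"metadata":{"finalizers":null}}'
# With cluster access you would run (pod and namespace are placeholders):
#   kubectl patch pod <pod-name> -n <namespace> -p "$PATCH"
echo "$PATCH"
```

Note that clearing finalizers bypasses whatever cleanup the finalizer was guarding, so this carries the same caveats as a force delete.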
We are facing the same issue when a secret mount kicks in (the secret is shared with many pods). The pod goes into the `Terminating` state and stays there forever. Our version is v1.10.0. The attached Docker container is gone, but the reference on the API server remains unless I force-delete the pod with the `--grace-period=0 --force` options.
We are looking for a permanent solution.
Well, I recently tested the runc exploit CVE-2019-5736 on our staging cluster. As you know, the exploit rewrites the runc binary on the host machine. It is a destructive exploit. After that I saw strange behavior on the cluster: all pods got stuck in the Terminating state. The workaround was to drain the affected node, purge docker, and reinstall it. After that, all pods and the k8s cluster worked fine as before. Maybe this is a docker problem, and reinstalling it will solve your issue too! Thank you.
I installed a brand new v1.13.3 as well. This happens to me too. It seems related to the fact that I mount the same NFS volume on several pods.
This issue occurred for me when I created a deployment with a volume referencing a nonexistent secret; deleting that deployment/service left a `Terminating` pod behind.
Facing the same issue on v1.12.3; `--grace-period=0 --force` and `--now` are both ineffective, and so is removing the statefulset's finalizers.
Same issue with SMB (I think?) mounts (Azure File shares, per https://docs.microsoft.com/en-us/azure/aks/azure-files-volume).
Same issue on 1.13.3.
I have the same issue, with a pod stuck in the Terminating state for almost two days.
I am using Minikube on a Linux machine (Debian).
kubectl version:
Server Version: version.Info{Major:"1", Minor:"13", GitVersion:"v1.13.3", GitCommit:"721bfa751924da8d1680787490c54b9179b1fed0", GitTreeState:"clean", BuildDate:"2019-02-01T20:00:57Z", GoVersion:"go1.11.5", Compiler:"gc", Platform:"linux/amd64"}
Minikube version:
minikube version: v0.34.1
@ardalanrazavi why has it been terminating for two days? If it does not get deleted after 5 minutes, just force-delete it.
@nmors

> Why has it been terminating for two days?

That is a good question. We would all like to know.

> If it does not get deleted after 5 minutes, just force-delete it

Force-deleting it leaves the cluster in an inconsistent state. (With minikube it is not a real cluster, so admittedly there is less to worry about there.)
@AndrewSav
Frankly, I do not see any other solution here.
Sure, the cluster is left in an "inconsistent state", and I would like to understand what exactly that means. Force deletion is bad, and I do not like it either, but in my case I have no qualms about destroying and redeploying the resources as needed.
In my case, pods seem to get stuck terminating only when they have NFS mounts, and it only happens when the NFS server goes down before the clients try to unmount.
I "fixed" the issue: I was able to determine that all the pods stuck terminating were sitting on a single node. After the node was restarted, the problem was gone.
@nmors @AndrewSav I did a force delete as well.
It is a known issue that deleting the NFS server before the pods makes unmounts hang forever. In that case, I recommend ordering your deletions carefully so that the NFS server is always deleted last.
@msau42 my NFS server is not part of the k8s cluster; it is a separate appliance on a separate machine altogether.
Whether it is part of the k8s cluster or not does not matter. If the NFS server is inaccessible, unmounts hang until it becomes accessible again.
@msau42 that is strange, because I am fairly sure the pods stayed stuck terminating even after it came back online. New pods start up and mount just fine.
I run an NFS server in kubernetes in a similar way, and unfortunately this happens very frequently.
@shinebayar-g I followed that guide, but then removed the PV and PVC and instead defined the volume directly in the deployment, like this:
volumeMounts:
- mountPath: /my-pod-mountpoint
  name: my-vol
volumes:
- name: my-vol
  nfs:
    server: "10.x.x.x"
    path: "/path/on/server"
    readOnly: false
No issues since then. I only changed it about a week ago, hoping the simpler configuration would be more reliable, so we will see... maybe this fixes the issue?
As a workaround, I wrote a script that grabs the last few lines from /var/log/syslog and searches for errors such as "Operation for ... remove /var/lib/kubelet/pods ... directory not empty", "nfs ... device is busy ... unmount.nfs", or "stale NFS file handle".
It then extracts either the pod_id or the pod's full directory from those lines, checks what is mounted under it (like `mount | grep $pod_id`), unmounts all of it, and removes the corresponding directories. Eventually the kubelet does the rest, shuts the pods down gracefully, and deletes them. No more pods in the Terminating state.
I put that script in cron, running every minute. As a result: no issues for now, 3-4 months later.
Note: this approach is unreliable and needs rechecking on every cluster upgrade, but it works.
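The extraction step of that cleanup can be sketched as follows. The syslog line below is fabricated in the shape of the kubelet errors quoted earlier in the thread, and the actual unmount/remove commands are left commented out because they are destructive and node-specific:

```shell
# Fabricated kubelet error line in the shape seen earlier in this thread.
cat <<'EOF' > /tmp/syslog.sample
kubelet: Operation for "volume" failed. Error: "error deleting /var/lib/kubelet/pods/0406c4bf-17e3/volume-subpaths/nfs-vol/2: device or resource busy"
EOF
# Pull the stuck pod volume paths out of the matching log lines.
paths=$(grep -o '/var/lib/kubelet/pods/[^:"]*' /tmp/syslog.sample | sort -u)
echo "$paths"
# On the affected node you would then, for each such path (destructive!):
#   umount -f "$path" && rm -rf "$path"
```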
I am on version 1.10 and I hit this issue today. I think my problem is related to a failing secret volume mount, which left some tasks pending and the pods stuck in the Terminating state forever.
I had to use the `--grace-period=0 --force` options to terminate the pods.
root@ip-10-31-16-222:/var/log# journalctl -u kubelet | grep dp-tag-change-ingestion-com-depl-5bd59f74c4-589ds
Mar 20 15:50:31 ip-10-31-16-222.eu-west-2.compute.internal kubelet[528]: I0320 15:50:31.179901 528 reconciler.go:207] operationExecutor.VerifyControllerAttachedVolume started for volume "config-volume" (UniqueName: "kubernetes.io/configmap/e3d7c57a-4b27-11e9-9aaa-0203c98ff31e-config-volume") pod "dp-tag-change-ingestion-com-depl-5bd59f74c4-589ds" (UID: "e3d7c57a-4b27-11e9-9aaa-0203c98ff31e")
Mar 20 15:50:31 ip-10-31-16-222.eu-west-2.compute.internal kubelet[528]: I0320 15:50:31.179935 528 reconciler.go:207] operationExecutor.VerifyControllerAttachedVolume started for volume "default-token-xjlgc" (UniqueName: "kubernetes.io/secret/e3d7c57a-4b27-11e9-9aaa-0203c98ff31e-default-token-xjlgc") pod "dp-tag-change-ingestion-com-depl-5bd59f74c4-589ds" (UID: "e3d7c57a-4b27-11e9-9aaa-0203c98ff31e")
Mar 20 15:50:31 ip-10-31-16-222.eu-west-2.compute.internal kubelet[528]: I0320 15:50:31.179953 528 reconciler.go:207] operationExecutor.VerifyControllerAttachedVolume started for volume "secret-volume" (UniqueName: "kubernetes.io/secret/e3d7c57a-4b27-11e9-9aaa-0203c98ff31e-secret-volume") pod "dp-tag-change-ingestion-com-depl-5bd59f74c4-589ds" (UID: "e3d7c57a-4b27-11e9-9aaa-0203c98ff31e")
Mar 20 15:50:31 ip-10-31-16-222.eu-west-2.compute.internal kubelet[528]: E0320 15:50:31.310200 528 nestedpendingoperations.go:267] Operation for "\"kubernetes.io/secret/e3d7c57a-4b27-11e9-9aaa-0203c98ff31e-secret-volume\" (\"e3d7c57a-4b27-11e9-9aaa-0203c98ff31e\")" failed. No retries permitted until 2019-03-20 15:50:31.810156118 +0000 UTC m=+966792.065305175 (durationBeforeRetry 500ms). Error: "MountVolume.SetUp failed for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/e3d7c57a-4b27-11e9-9aaa-0203c98ff31e-secret-volume\") pod \"dp-tag-change-ingestion-com-depl-5bd59f74c4-589ds\" (UID: \"e3d7c57a-4b27-11e9-9aaa-0203c98ff31e\") : secrets \"data-platform.xxx-com\" not found"
Mar 20 15:50:31 ip-10-31-16-222.eu-west-2.compute.internal kubelet[528]: E0320 15:50:31.885807 528 nestedpendingoperations.go:267] Operation for "\"kubernetes.io/secret/e3d7c57a-4b27-11e9-9aaa-0203c98ff31e-secret-volume\" (\"e3d7c57a-4b27-11e9-9aaa-0203c98ff31e\")" failed. No retries permitted until 2019-03-20 15:50:32.885784622 +0000 UTC m=+966793.140933656 (durationBeforeRetry 1s). Error: "MountVolume.SetUp failed for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/e3d7c57a-4b27-11e9-9aaa-0203c98ff31e-secret-volume\") pod \"dp-tag-change-ingestion-com-depl-5bd59f74c4-589ds\" (UID: \"e3d7c57a-4b27-11e9-9aaa-0203c98ff31e\") : secrets \"data-platform.xxxxx-com\" not found"
Mar 20 15:50:32 ip-10-31-16-222.eu-west-2.compute.internal kubelet[528]: E0320 15:50:32.987385 528 nestedpendingoperations.go:267] Operation for "\"kubernetes.io/secret/e3d7c57a-4b27-11e9-9aaa-0203c98ff31e-secret-volume\" (\"e3d7c57a-4b27-11e9-9aaa-0203c98ff31e\")" failed. No retries permitted until 2019-03-20 15:50:34.987362044 +0000 UTC m=+966795.242511077 (durationBeforeRetry 2s). Error: "MountVolume.SetUp failed for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/e3d7c57a-4b27-11e9-9aaa-0203c98ff31e-secret-volume\") pod \"dp-tag-change-ingestion-com-depl-5bd59f74c4-589ds\" (UID: \"e3d7c57a-4b27-11e9-9aaa-0203c98ff31e\") : secrets \"data-platform.xxx-com\" not found"
Mar 20 15:50:35 ip-10-31-16-222.eu-west-2.compute.internal kubelet[528]: E0320 15:50:35.090836 528 nestedpendingoperations.go:267] Operation for "\"kubernetes.io/secret/e3d7c57a-4b27-11e9-9aaa-0203c98ff31e-secret-volume\" (\"e3d7c57a-4b27-11e9-9aaa-0203c98ff31e\")" failed. No retries permitted until 2019-03-20 15:50:39.090813114 +0000 UTC m=+966799.345962147 (durationBeforeRetry 4s). Error: "MountVolume.SetUp failed for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/e3d7c57a-4b27-11e9-9aaa-0203c98ff31e-secret-volume\") pod \"dp-tag-change-ingestion-com-depl-5bd59f74c4-589ds\" (UID: \"e3d7c57a-4b27-11e9-9aaa-0203c98ff31e\") : secrets \"data-platform.xxx-com\" not found"
Mar 20 15:50:39 ip-10-31-16-222.eu-west-2.compute.internal kubelet[528]: E0320 15:50:39.096621 528 nestedpendingoperations.go:267] Operation for "\"kubernetes.io/secret/e3d7c57a-4b27-11e9-9aaa-0203c98ff31e-secret-volume\" (\"e3d7c57a-4b27-11e9-9aaa-0203c98ff31e\")" failed. No retries permitted until 2019-03-20 15:50:47.096593013 +0000 UTC m=+966807.351742557 (durationBeforeRetry 8s). Error: "MountVolume.SetUp failed for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/e3d7c57a-4b27-11e9-9aaa-0203c98ff31e-secret-volume\") pod \"dp-tag-change-ingestion-com-depl-5bd59f74c4-589ds\" (UID: \"e3d7c57a-4b27-11e9-9aaa-0203c98ff31e\") : secrets \"data-platform.xxx-com\" not found"
Mar 20 15:50:47 ip-10-31-16-222.eu-west-2.compute.internal kubelet[528]: E0320 15:50:47.108644 528 nestedpendingoperations.go:267] Operation for "\"kubernetes.io/secret/e3d7c57a-4b27-11e9-9aaa-0203c98ff31e-secret-volume\" (\"e3d7c57a-4b27-11e9-9aaa-0203c98ff31e\")" failed. No retries permitted until 2019-03-20 15:51:03.10862005 +0000 UTC m=+966823.363769094 (durationBeforeRetry 16s). Error: "MountVolume.SetUp failed for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/e3d7c57a-4b27-11e9-9aaa-0203c98ff31e-secret-volume\") pod \"dp-tag-change-ingestion-com-depl-5bd59f74c4-589ds\" (UID: \"e3d7c57a-4b27-11e9-9aaa-0203c98ff31e\") : secrets \"data-platform.xxx-com\" not found"
Mar 20 15:51:03 ip-10-31-16-222.eu-west-2.compute.internal kubelet[528]: E0320 15:51:03.133029 528 nestedpendingoperations.go:267] Operation for "\"kubernetes.io/secret/e3d7c57a-4b27-11e9-9aaa-0203c98ff31e-secret-volume\" (\"e3d7c57a-4b27-11e9-9aaa-0203c98ff31e\")" failed. No retries permitted until 2019-03-20 15:51:35.133006645 +0000 UTC m=+966855.388155677 (durationBeforeRetry 32s). Error: "MountVolume.SetUp failed for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/e3d7c57a-4b27-11e9-9aaa-0203c98ff31e-secret-volume\") pod \"dp-tag-change-ingestion-com-depl-5bd59f74c4-589ds\" (UID: \"e3d7c57a-4b27-11e9-9aaa-0203c98ff31e\") : secrets \"data-platform.xxxx-com\" not found"
Mar 20 15:51:35 ip-10-31-16-222.eu-west-2.compute.internal kubelet[528]: E0320 15:51:35.184310 528 nestedpendingoperations.go:267] Operation for "\"kubernetes.io/secret/e3d7c57a-4b27-11e9-9aaa-0203c98ff31e-secret-volume\" (\"e3d7c57a-4b27-11e9-9aaa-0203c98ff31e\")" failed. No retries permitted until 2019-03-20 15:52:39.184281161 +0000 UTC m=+966919.439430217 (durationBeforeRetry 1m4s). Error: "MountVolume.SetUp failed for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/e3d7c57a-4b27-11e9-9aaa-0203c98ff31e-secret-volume\") pod \"dp-tag-change-ingestion-com-depl-5bd59f74c4-589ds\" (UID: \"e3d7c57a-4b27-11e9-9aaa-0203c98ff31e\") : secrets \"data-platform.xxx-com\" not found"
Mar 20 15:52:34 ip-10-31-16-222.eu-west-2.compute.internal kubelet[528]: E0320 15:52:34.005027 528 kubelet.go:1640] Unable to mount volumes for pod "dp-tag-change-ingestion-com-depl-5bd59f74c4-589ds_default(e3d7c57a-4b27-11e9-9aaa-0203c98ff31e)": timeout expired waiting for volumes to attach or mount for pod "default"/"dp-tag-change-ingestion-com-depl-5bd59f74c4-589ds". list of unmounted volumes=[secret-volume]. list of unattached volumes=[secret-volume config-volume default-token-xjlgc]; skipping pod
Mar 20 15:52:34 ip-10-31-16-222.eu-west-2.compute.internal kubelet[528]: E0320 15:52:34.005085 528 pod_workers.go:186] Error syncing pod e3d7c57a-4b27-11e9-9aaa-0203c98ff31e ("dp-tag-change-ingestion-com-depl-5bd59f74c4-589ds_default(e3d7c57a-4b27-11e9-9aaa-0203c98ff31e)"), skipping: timeout expired waiting for volumes to attach or mount for pod "default"/"dp-tag-change-ingestion-com-depl-5bd59f74c4-589ds". list of unmounted volumes=[secret-volume]. list of unattached volumes=[secret-volume config-volume default-token-xjlgc]
Mar 20 15:52:39 ip-10-31-16-222.eu-west-2.compute.internal kubelet[528]: E0320 15:52:39.196332 528 nestedpendingoperations.go:267] Operation for "\"kubernetes.io/secret/e3d7c57a-4b27-11e9-9aaa-0203c98ff31e-secret-volume\" (\"e3d7c57a-4b27-11e9-9aaa-0203c98ff31e\")" failed. No retries permitted until 2019-03-20 15:54:41.196308703 +0000 UTC m=+967041.451457738 (durationBeforeRetry 2m2s). Error: "MountVolume.SetUp failed for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/e3d7c57a-4b27-11e9-9aaa-0203c98ff31e-secret-volume\") pod \"dp-tag-change-ingestion-com-depl-5bd59f74c4-589ds\" (UID: \"e3d7c57a-4b27-11e9-9aaa-0203c98ff31e\") : secrets \"data-platform.xxxx-com\" not found"
Mar 20 15:54:41 ip-10-31-16-222.eu-west-2.compute.internal kubelet[528]: E0320 15:54:41.296252 528 nestedpendingoperations.go:267] Operation for "\"kubernetes.io/secret/e3d7c57a-4b27-11e9-9aaa-0203c98ff31e-secret-volume\" (\"e3d7c57a-4b27-11e9-9aaa-0203c98ff31e\")" failed. No retries permitted until 2019-03-20 15:56:43.296229192 +0000 UTC m=+967163.551378231 (durationBeforeRetry 2m2s). Error: "MountVolume.SetUp failed for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/e3d7c57a-4b27-11e9-9aaa-0203c98ff31e-secret-volume\") pod \"dp-tag-change-ingestion-com-depl-5bd59f74c4-589ds\" (UID: \"e3d7c57a-4b27-11e9-9aaa-0203c98ff31e\") : secrets \"data-platform.xxxx-com\" not found"
Mar 20 15:54:48 ip-10-31-16-222.eu-west-2.compute.internal kubelet[528]: E0320 15:54:48.118620 528 kubelet.go:1640] Unable to mount volumes for pod "dp-tag-change-ingestion-com-depl-5bd59f74c4-589ds_default(e3d7c57a-4b27-11e9-9aaa-0203c98ff31e)": timeout expired waiting for volumes to attach or mount for pod "default"/"dp-tag-change-ingestion-com-depl-5bd59f74c4-589ds". list of unmounted volumes=[secret-volume]. list of unattached volumes=[secret-volume config-volume default-token-xjlgc]; skipping pod
Mar 20 15:54:48 ip-10-31-16-222.eu-west-2.compute.internal kubelet[528]: E0320 15:54:48.118681 528 pod_workers.go:186] Error syncing pod e3d7c57a-4b27-11e9-9aaa-0203c98ff31e ("dp-tag-change-ingestion-com-depl-5bd59f74c4-589ds_default(e3d7c57a-4b27-11e9-9aaa-0203c98ff31e)"), skipping: timeout expired waiting for volumes to attach or mount for pod "default"/"dp-tag-change-ingestion-com-depl-5bd59f74c4-589ds". list of unmounted volumes=[secret-volume]. list of unattached volumes=[secret-volume config-volume default-token-xjlgc]
Mar 20 15:56:43 ip-10-31-16-222.eu-west-2.compute.internal kubelet[528]: E0320 15:56:43.398396 528 nestedpendingoperations.go:267] Operation for "\"kubernetes.io/secret/e3d7c57a-4b27-11e9-9aaa-0203c98ff31e-secret-volume\" (\"e3d7c57a-4b27-11e9-9aaa-0203c98ff31e\")" failed. No retries permitted until 2019-03-20 15:58:45.398368668 +0000 UTC m=+967285.653517703 (durationBeforeRetry 2m2s). Error: "MountVolume.SetUp failed for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/e3d7c57a-4b27-11e9-9aaa-0203c98ff31e-secret-volume\") pod \"dp-tag-change-ingestion-com-depl-5bd59f74c4-589ds\" (UID: \"e3d7c57a-4b27-11e9-9aaa-0203c98ff31e\") : secrets \"data-platform.xxxx-com\" not found"
Mar 20 15:57:05 ip-10-31-16-222.eu-west-2.compute.internal kubelet[528]: E0320 15:57:05.118566 528 kubelet.go:1640] Unable to mount volumes for pod "dp-tag-change-ingestion-com-depl-5bd59f74c4-589ds_default(e3d7c57a-4b27-11e9-9aaa-0203c98ff31e)": timeout expired waiting for volumes to attach or mount for pod "default"/"dp-tag-change-ingestion-com-depl-5bd59f74c4-589ds". list of unmounted volumes=[secret-volume]. list of unattached volumes=[secret-volume config-volume default-token-xjlgc]; skipping pod
Mar 20 15:57:05 ip-10-31-16-222.eu-west-2.compute.internal kubelet[528]: E0320 15:57:05.118937 528 pod_workers.go:186] Error syncing pod e3d7c57a-4b27-11e9-9aaa-0203c98ff31e ("dp-tag-change-ingestion-com-depl-5bd59f74c4-589ds_default(e3d7c57a-4b27-11e9-9aaa-0203c98ff31e)"), skipping: timeout expired waiting for volumes to attach or mount for pod "default"/"dp-tag-change-ingestion-com-depl-5bd59f74c4-589ds". list of unmounted volumes=[secret-volume]. list of unattached volumes=[secret-volume config-volume default-token-xjlgc]
Mar 20 15:59:22 ip-10-31-16-222.eu-west-2.compute.internal kubelet[528]: E0320 15:59:22.118593 528 kubelet.go:1640] Unable to mount volumes for pod "dp-tag-change-ingestion-com-depl-5bd59f74c4-589ds_default(e3d7c57a-4b27-11e9-9aaa-0203c98ff31e)": timeout expired waiting for volumes to attach or mount for pod "default"/"dp-tag-change-ingestion-com-depl-5bd59f74c4-589ds". list of unmounted volumes=[secret-volume config-volume default-token-xjlgc]. list of unattached volumes=[secret-volume config-volume default-token-xjlgc]; skipping pod
Mar 20 15:59:22 ip-10-31-16-222.eu-west-2.compute.internal kubelet[528]: E0320 15:59:22.118624 528 pod_workers.go:186] Error syncing pod e3d7c57a-4b27-11e9-9aaa-0203c98ff31e ("dp-tag-change-ingestion-com-depl-5bd59f74c4-589ds_default(e3d7c57a-4b27-11e9-9aaa-0203c98ff31e)"), skipping: timeout expired waiting for volumes to attach or mount for pod "default"/"dp-tag-change-ingestion-com-depl-5bd59f74c4-589ds". list of unmounted volumes=[secret-volume config-volume default-token-xjlgc]. list of unattached volumes=[secret-volume config-volume default-token-xjlgc]
Using `--force --grace-period=0` only removes the reference... when I SSH into the node, the Docker container is still running.
In my case, the node was out of memory.
And the kernel killed cilium, which appears to have prevented the pods from terminating.
Restarting the node cleared it up.
In my experience, `sudo systemctl restart docker` on the node helps (but there is obviously downtime).
And this still happens periodically on random nodes that are either A) close to memory limits or B) CPU-starved (either because of some kswapd0 issue, which may still be memory-related, or because of actual load).
Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.
If this issue is safe to close now please do so with /close.
Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale
Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.
If this issue is safe to close now please do so with /close.
Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle rotten
Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.
Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/close
@fejta-bot: Closing this issue.
In response to this:
> Rotten issues close after 30d of inactivity.
> Reopen the issue with /reopen.
> Mark the issue as fresh with /remove-lifecycle rotten.
> Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
> /close

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
This is still a very live issue; we are seeing it on k8s 1.15.4 and RHEL Docker 1.13.1. The pods keep getting stuck in `Terminating`, the container is already gone, and k8s cannot figure it out by itself without human intervention. It makes test scripts a real PITA.
/reopen
/remove-lifecycle rotten
@tuminoid: You can't reopen an issue/PR unless you authored it or you are a collaborator.
In response to this:
> This is still a very live issue; we are seeing it on k8s 1.15.4 and RHEL Docker 1.13.1. The pods keep getting stuck in `Terminating`, the container is already gone, and k8s cannot figure it out by itself without human intervention. It makes test scripts a real PITA.
> /reopen
> /remove-lifecycle rotten

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
/reopen
/remove-lifecycle rotten
@mikesplain: Reopened this issue.
In response to this:
> /reopen
> /remove-lifecycle rotten

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
Same here: a pod has been stuck in the Terminating phase for over 19 minutes. The container terminated successfully, but Kubernetes still believes it needs to wait for something.
Name: worker-anton-nginx-695d8bd9c6-7q4l9
Namespace: anton
Priority: 0
Status: Terminating (lasts 19m)
Termination Grace Period: 30s
IP: 10.220.3.36
IPs: <none>
Controlled By: ReplicaSet/worker-anton-nginx-695d8bd9c6
Containers:
worker:
Container ID: docker://12c169c8ed915bc290c14c854a6ab678fcacea9bb7b1aab5512b533df4683dd6
Port: 8080/TCP
Host Port: 0/TCP
State: Terminated
Exit Code: 0
Started: Mon, 01 Jan 0001 00:00:00 +0000
Finished: Mon, 01 Jan 0001 00:00:00 +0000
Ready: False
Restart Count: 0
Conditions:
Type Status
Initialized True
Ready False
ContainersReady False
PodScheduled True
Events: <none>
No events, no logs...
Client Version: version.Info{Major:"1", Minor:"16", GitVersion:"v1.16.2", GitCommit:"c97fe5036ef3df2967d086711e6c0c405941e14b", GitTreeState:"clean", BuildDate:"2019-10-17T17:16:09Z", GoVersion:"go1.12.10", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"14+", GitVersion:"v1.14.8-gke.2", GitCommit:"188432a69210ca32cafded81b4dd1c063720cac0", GitTreeState:"clean", BuildDate:"2019-10-21T20:01:24Z", GoVersion:"go1.12.11b4", Compiler:"gc", Platform:"linux/amd64"}
Can you check the kubelet logs and see whether there are any messages about volume unmount failures or orphaned pods?
I do see those:
E1206 03:05:40.247161 25653 kubelet_volumes.go:154] Orphaned pod "0406c4bf-17e3-4613-a526-34e8a6cee208" found, but volume paths are still present on disk : There were a total of 8 errors similar to this. Turn up verbosity to see them.
I saw that too. I can see logs where the kubelet complains about connecting to the Docker container to terminate it, and about not being able to create a new pod with the same name because one currently exists. Which is annoying.
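For what it's worth, that orphaned-pod message refers to leftover state under the kubelet's data directory on the node. A minimal sketch for inspecting it (assuming the default /var/lib/kubelet path; the UID is the one from the log above):

```shell
#!/bin/sh
# Sketch only: run on the affected node. Assumes the default kubelet data dir;
# the UID comes from the kubelet log message quoted above.
KUBELET_PODS_DIR=/var/lib/kubelet/pods
ORPHANED_UID=0406c4bf-17e3-4613-a526-34e8a6cee208

if [ -d "$KUBELET_PODS_DIR/$ORPHANED_UID" ]; then
  # See which volume directories were left behind before removing anything.
  ls -la "$KUBELET_PODS_DIR/$ORPHANED_UID/volumes"
else
  echo "no leftover directory for pod $ORPHANED_UID"
fi
```

Only after confirming nothing is still mounted under that path is it reasonable to clean the directory up by hand.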
Running into this and then having to verify whether Kubernetes actually cleaned the pods up properly is quite a hassle. Hopefully the bug gets fixed soon.
So where does this issue stand? Is it resolved? Same for me: it doesn't happen right away, but some time after a node starts. If I reset the node, everything is fine again for a while.
Can you also verify that no finalizers are preventing the pod from being deleted?
There are no finalizers on the pods in question.
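For anyone else checking this, a quick way to look for blocking finalizers is a jsonpath query. A small sketch (pod and namespace names are the ones from the describe output earlier in the thread; substitute your own):

```shell
#!/bin/sh
# Sketch: print a stuck pod's finalizers; empty output means no finalizer is
# holding the pod in Terminating. Names below are placeholders.
POD=my-pod-3854038851-r1hc3
NS=container-4-production
CMD="kubectl get pod $POD -n $NS -o jsonpath='{.metadata.finalizers}'"

if command -v kubectl >/dev/null 2>&1; then
  sh -c "$CMD" || true  # a failure here usually means the pod is already gone
else
  echo "$CMD"           # kubectl not available; just show the command
fi
```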
FYI, I was able to force-delete it using:
kubectl delete pods <pod> --grace-period=0 --force
and that resolved it. And I believe it then actually terminated properly. I haven't experienced the issue since. I may have updated since then, so it could be a version issue, but it's been so long since I saw the problem that I'm not 100% certain.
This happens when the pods run out of memory. They stay in Terminating until memory usage drops again.
FYI, I was able to force-delete it using:
kubectl delete pods <pod> --grace-period=0 --force
and that resolved it. And I believe it then actually terminated properly. I haven't experienced the issue since. I may have updated since then, so it could be a version issue, but it's been so long since I saw the problem that I'm not 100% certain.
This works for me, but
kubectl delete pods <pod> --grace-period=0 --force
is only a temporary fix, and I don't want to run a manual fix every time a failover happens for one of the affected pods. My zookeeper pods were stuck in Terminating on both minikube and Azure AKS.
Update March 9, 2020:
I worked around it with a preStop lifecycle hook to terminate the pods manually. My zookeeper pods were stuck in the Terminating state and would not respond to a TERM signal from inside the container. Essentially the same manifests run fine elsewhere and terminate correctly, so I have no idea what the root cause is.
Same issue, very annoying.
Same issue :( Pods stuck in Terminating for 3 days now.
FYI, I was able to force-delete it using:
kubectl delete pods <pod> --grace-period=0 --force
and that resolved it. And I believe it then actually terminated properly. I haven't experienced the issue since. I may have updated since then, so it could be a version issue, but it's been so long since I saw the problem that I'm not 100% certain.
Also, the --force flag doesn't necessarily mean the pod is removed; it just doesn't wait for confirmation (and, as I understand it, drops the reference). As stated in the warning: The resource may continue to run on the cluster indefinetely.

Edit: I was ill-informed; see elrok123's comment below for further motivation.
FYI, I was able to force-delete it using:
kubectl delete pods <pod> --grace-period=0 --force
and that resolved it. And I believe it then actually terminated properly. I haven't experienced the issue since. I may have updated since then, so it could be a version issue, but it's been so long since I saw the problem that I'm not 100% certain.
Also, the --force flag doesn't necessarily mean the pod is removed; it just doesn't wait for confirmation (and, as I understand it, drops the reference). As stated in the warning: The resource may continue to run on the cluster indefinetely.
Correct, but the point is that --grace-period=0 forces the delete to actually happen :) Not sure why your comment is relevant :/
I think his comment is relevant, since the underlying container (docker or whatever) may still be running and not be fully deleted, and the illusion that the pod was deleted can be a bit misleading at times.
On Thu, Jun 4, 2020, 9:16 AM, Connor Stephen McKay <[email protected]> wrote the comments quoted above.
That is exactly my point. Using the --force method risks leaving the underlying workload around, pressuring the node, and it doesn't necessarily fix the original problem. At worst it's an "if I can't see it, it doesn't exist" kind of fix, which can make the actual issue even harder to detect.
Or are you saying that --grace-period=0 is guaranteed to force removal of the underlying container, @elrok123? In that case my comment was based on faulty knowledge and is irrelevant, but if a risk of a detached running container remains when using --grace-period=0, then my claim stands.
@oscarlofwenhamn As far as I'm aware, this effectively runs SIGKILL on every process in that pod, ensuring that zombie processes are removed (source: point 6 under "Termination of Pods" - https://kubernetes.io/docs/concepts/pods/pod/#:~:text=When%20the%20grace%20period%20expires,period%200%20(immediate%20deletion)), and gracefully removes the pod (which may not happen immediately, but it will happen).
The guide does mention that the reference is removed, not that the pod itself is guaranteed to be removed (source: "Force Deletion" - https://kubernetes.io/docs/tasks/run-application/force-delete-stateful-set-pod/); however, grace-period=0 effectively SIGKILLs the pod, if not instantly.
I've read the docs and the recommended way of dealing with erroneous scenarios. The issue I ran into specifically was a one-off, not a recurring problem. I believe the real fix is fixing your deployment, but until you get there, this method should help.
@elrok123 Brilliant - duly informed, I stand corrected.
Currently I have pods that have been stuck in the Terminating state for more than 2 days.
For me it's a whole namespace that is stuck in Terminating. No pods listed. No services, nothing. The namespace is empty. And still... stuck in Terminating.
@JoseFMP request the yaml of the namespace using kubectl; it may contain finalizers that are holding up the process.
@JordyBottelier Thank you. No finalizers. Still stuck in Terminating.
@JoseFMP here's a script to kill it off entirely. Just save it and run ./script_name:
```
#!/bin/bash
set -eo pipefail

die() { echo "$*" 1>&2; exit 1; }

need() {
  which "$1" &>/dev/null || die "Binary '$1' is missing but required"
}

# check prerequisites
need "jq"
need "curl"
need "kubectl"

PROJECT="$1"
shift

test -n "$PROJECT" || die "Missing arguments: kill-ns <namespace>"

kubectl proxy &>/dev/null &
PROXY_PID=$!
killproxy() {
  kill $PROXY_PID
}
trap killproxy EXIT

sleep 1 # give the proxy a second

kubectl get namespace "$PROJECT" -o json | jq 'del(.spec.finalizers[] | select("kubernetes"))' | curl -s -k -H "Content-Type: application/json" -X PUT -o /dev/null --data-binary @- http://localhost:8001/api/v1/namespaces/$PROJECT/finalize && echo "Killed namespace: $PROJECT"
```
Also hitting this, apparently. Labels have stopped showing up in our infrastructure, and I have multiple pods stuck terminating, including one that keeps running as a "ghost" (it is still serving requests, and I can verify on the device it scales that requests are being handled). I have zero visibility into and no control over that pod, short of forcibly shutting down all the nodes. Asking how to troubleshoot a situation like this.
Also hitting this, apparently. Labels have stopped showing up in our infrastructure, and I have multiple pods stuck terminating, including one that keeps running as a "ghost" (it is still serving requests, and I can verify on the device it scales that requests are being handled). I have zero visibility into and no control over that pod, short of forcibly shutting down all the nodes. Asking how to troubleshoot a situation like this.
You'll need access to docker on the node. Using my dink (https://github.com/Agilicus/dink) will bring up a pod with a shell that has docker access, or you can ssh to the node. Then:
docker ps -a
docker stop ####
Good luck.
Thanks for the pointers.
I was eventually able to resolve it, but I'm still a bit puzzled about how it happened (it was completely invisible to me). It was on production, so things were a bit hectic and I couldn't run diagnostics. Hopefully I can produce a better bug report if it happens again.
I've seen similar symptoms: pods stuck terminating (interestingly, they all have exec-type probes for readiness/liveness). Looking at the logs I see:
kubelet[1445]: I1022 10:26:32.203865 1445 prober.go:124] Readiness probe for "test-service-74c4664d8d-58c96_default(822c3c3d-082a-4dc9-943c-19f04544713e):test-service" failed (failure): OCI runtime exec failed: exec failed: cannot exec a container that has stopped: unknown
That message repeats forever, and changing the exec probe to tcpSocket seems to let the pods terminate (will follow up based on testing). Each pod has one of its containers "running", though not "ready", and the logs of the "running" container show the service has stopped.
This happens with containerd 1.4.0 when node load is high and vm.max_map_count is set to a value higher than the default: containerd-shim doesn't reap the stdout fifo, and dockerd cannot get the event/acknowledgement from containerd that the process is gone.
@discanto Thanks for sharing this information. Has the problem been fixed, or is it being tracked anywhere?
@Random-Liu
This bug was opened more than 3 years ago. Pods stuck terminating can be caused by many different reasons. When reporting your case, it would be very helpful to post some kubelet logs to see why the pod got stuck.
Most helpful comment
I'm having the same problem, with Kubernetes 1.8.2 on IBM Cloud. After new pods are started, the old pods stay stuck in Terminating.
kubectl version
Server Version: version.Info{Major:"1", Minor:"8+", GitVersion:"v1.8.2-1+d150e4525193f1", GitCommit:"d150e4525193f1c79569c04efc14599d7deb5f3e", GitTreeState:"clean", BuildDate:"2017-10-27T08:15:17Z", GoVersion:"go1.8.3", Compiler:"gc", Platform:"linux/amd64"}
I've used kubectl delete pod xxx --now as well as kubectl delete pod foo --grace-period=0 --force, all to no avail.
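Pulling the thread's suggestions together, the rough escalation path looks like the following. This is a sketch only: the pod and namespace names are placeholders taken from the describe output earlier, and the docker steps must run on the node that owns the container.

```shell
#!/bin/sh
# Sketch of the escalation discussed in this thread; all names are placeholders.
POD=my-pod-3854038851-r1hc3
NS=container-4-production

cat <<EOF
# 1. Check for finalizers blocking deletion:
kubectl get pod $POD -n $NS -o jsonpath='{.metadata.finalizers}'

# 2. Force-delete (skips waiting; the underlying container may keep running):
kubectl delete pod $POD -n $NS --grace-period=0 --force

# 3. On the owning node, clean up any leftover container by hand:
docker ps -a | grep $POD
docker stop <container-id>
EOF
```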