Kubeadm: Kubeadm broken on armv6l

Created on 23 Apr 2017  ·  21 Comments  ·  Source: kubernetes/kubeadm

I tried to install kubeadm on my Raspberry Pi Zero W, but I get an "Illegal instruction" error.
On a Raspberry Pi 3 (armv7) it works just fine.

All 21 comments

I am facing the same issue with kubeadm 1.6.1 on a Raspberry Pi Model B+, also armv6.

$ kubelet --help
Illegal instruction

$ uname -a
Linux pi1 4.4.50-hypriotos+ #2 PREEMPT Sun Mar 19 14:44:01 UTC 2017 armv6l GNU/Linux

I downgraded to kubeadm 1.5.6 and it works. 1.6.0 gives the same error as 1.6.1.
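
(A quick way to confirm the mismatch, assuming binutils is installed and the binary carries ARM attribute tags, which it may not depending on how it was linked:)

# Which ARM architecture was the binary built for?
readelf -A /usr/bin/kubelet | grep Tag_CPU_arch   # e.g. "Tag_CPU_arch: v7"
# ...versus what the CPU actually reports:
uname -m                                          # armv6l on a Pi 1 / Pi Zero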

@clabu Yeah, downgrading to 1.5.6 works, but then you can't join a 1.6+ cluster.

First off, thanks for using Kubernetes on ARM :smile:!

This is a known issue; it was discussed in https://github.com/kubernetes/kubernetes/issues/38067, where we dropped armel support (the variant the RPi 1 needs when cross-compiling).

Basically, armhf (GOARM=7) can't run on the Pi 1, so in v1.5 we used armel with GOARM=6 to support the RPi 1. However, we went all-armhf in v1.6, hence it's not working on the Pi 1.
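
To make the GOARM distinction concrete, here is a minimal sketch (my illustration, not the actual release-build invocation):

# The same Go source cross-compiled for the two ARM variants:
GOOS=linux GOARCH=arm GOARM=6 go build -o kubeadm-armv6 ./cmd/kubeadm   # Pi 1 / Pi Zero class
GOOS=linux GOARCH=arm GOARM=7 go build -o kubeadm-armv7 ./cmd/kubeadm   # armhf; Pi 2/3 and up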

The change was: deprecate armel, use armhf images instead, and build with GOARM=7 instead of GOARM=6.
Motivation:

  • The only GOARM=6 board Go will support in go1.8 is the Raspberry Pi 1, which is just too slow to run newer Kubernetes versions.
  • Small performance improvements when using GOARM=7
  • The armel (http://hub.docker.com/u/armel) images are not updated as often as the armhf (http://hub.docker.com/u/armhf) images are.

For example, https://hub.docker.com/r/armel/debian/ was updated 8 months ago which is really bad from a security standpoint, vs https://hub.docker.com/r/armhf/debian/ which was updated 3 days ago.

Also, with the armhf switch, we were able to use https://hub.docker.com/r/armhf/alpine, which is great.

Hope it helps, but sorry for not being able to support the RPi 1 anymore.

If you want to help with documenting this or spreading the word, please do, or come with suggestions.

I'm having this same problem on a Pi Zero:

Linux p1 4.9.59+ #1047 Sun Oct 29 11:47:10 GMT 2017 armv6l GNU/Linux

Is it possible to discuss reintegrating armv6l support? I have found many posts showing interest in using Kubernetes on the Pi Zero and other armv6l Pi devices. The Pi Zero is good for hosting microservices in Kubernetes or Swarm cluster environments, and Docker Swarm works well for me. So it would be nice if anyone could revive the discussion. The Pi ClusterHAT is probably a nice demo infrastructure.

Looking at the current docker.io build for the Pi Zero:

Go version: go1.9.3
Docker version: 18.02.0-ce

It does seem to be using a recent version of Go.

I agree that there is not enough RAM to run k8s on it in standalone mode, but as a node joined to a bigger master it should have enough resources to do some useful things.

Does anyone know if it's possible just to build from source to use my Pi Zeros as k8s nodes?

For example, https://hub.docker.com/r/armel/debian/ was updated 8 months ago which is really bad from a security standpoint, vs https://hub.docker.com/r/armhf/debian/ which was updated 3 days ago.

This is not true today, since official images for different architectures are updated simultaneously. For example, https://hub.docker.com/r/arm32v5/debian/, https://hub.docker.com/r/arm32v7/debian/ and https://hub.docker.com/r/amd64/debian/ were all updated 9 days ago.

Also, with the armhf switch, we were able to use https://hub.docker.com/r/armhf/alpine, which is great.

https://hub.docker.com/r/arm32v6/alpine/ runs well on Pi Zero.

I hope that you will reconsider. Preventing the Pi Zero from running the latest k8s is so disappointing.

@luxas

+1. Some confusion has arisen because the hub was rearranged and the older repos are still around. The newer ones do seem to be getting frequent updates.

Hi @juliancheal ,

I am still in the middle of building k8s on a ClusterHAT, but I was able to compile and build binaries for the Pi Zero.

Basically, I followed the guide below with some modifications:
https://povilasv.me/raspberrypi-kubelet/

I worked on WSL:
Linux DESKTOP-6GRDDIN 4.4.0-17134-Microsoft #48-Microsoft Fri Apr 27 18:06:00 PST 2018 x86_64 x86_64 x86_64 GNU/Linux

1. Install gcc-arm-linux-gnueabi instead of gcc-arm-linux-gnueabihf:

sudo apt-get install gcc-arm-linux-gnueabi   # <- change

2. Before building for linux/arm, make two modifications to set_platform_envs() in hack/lib/golang.sh:

  • Add GOARM:

export GOOS=${platform%/*}
export GOARCH=${platform##*/}
export GOARM=5   # <- add

  • Change CC:

case "${platform}" in
  "linux/arm")
    export CGO_ENABLED=1
    export CC=arm-linux-gnueabi-gcc   # <- change
    ;;

GOARM has to be 5. If you specify 6, you will get a linker error during the build. (Which I couldn't resolve.)

@shinichi-hashitani It works for my Pi Zero! Thanks!

Also, I resolved the linker error you hit. For the Pi Zero, set GOARM=6 and keep gcc-arm-linux-gnueabihf. For the Pi 1, however, you have to set GOARM=5 and use gcc-arm-linux-gnueabi instead.
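
In other words, restating the two working combinations from this thread as the values to set in hack/lib/golang.sh:

# Pi Zero (armv6): keep the hard-float toolchain
export GOARM=6
export CC=arm-linux-gnueabihf-gcc

# Pi 1: soft-float toolchain
export GOARM=5
export CC=arm-linux-gnueabi-gcc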

@shinichi-hashitani this is great! I will give it a try thanks!

@shinichi-hashitani Did you use make all KUBE_BUILD_PLATFORMS=linux/arm to build it? And if you used kubeadm to set up your cluster, how did you do that? Did you copy over kubelet, the init script povilasv mentioned, kubeadm, and kubectl? Did it work?

@dbwest Yes, I used make all to build binaries. The exact commands I used were:

make all WHAT=cmd/kube-proxy KUBE_VERBOSE=5 KUBE_BUILD_PLATFORMS=linux/arm
make all WHAT=cmd/kubelet KUBE_VERBOSE=5 KUBE_BUILD_PLATFORMS=linux/arm
make all WHAT=cmd/kubectl KUBE_VERBOSE=5 KUBE_BUILD_PLATFORMS=linux/arm

I needed binaries for the nodes, so only those three were required.

I didn't use kubeadm; I was following Kelsey Hightower's "Kubernetes the Hard Way". As described there, you just need to put those binaries in the appropriate locations.
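
(For reference, a sketch of that placement step, assuming /usr/local/bin as the target directory like in the guide:)

# Copy the cross-compiled node binaries onto the Pi.
sudo cp kube-proxy kubelet kubectl /usr/local/bin/
sudo chmod +x /usr/local/bin/kube-proxy /usr/local/bin/kubelet /usr/local/bin/kubectl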

@shinichi-hashitani Any idea what version of Kubernetes you were building?

I haven't had any luck getting this to build for ARMv6 (hoping to run on a Pi Zero W).

On versions >= 1.12.0 I get something like this...

vendor/github.com/google/cadvisor/accelerators/nvidia.go:30:2: build constraints exclude all Go files in /private/var/folders/hn/gt2l8vq56vx9slvwry43xmz40000gn/T/tmp.A83ZihlF/_output/local/go/src/k8s.io/kubernetes/vendor/github.com/mindprince/gonvml
!!! [0511 07:36:41] Call tree:
!!! [0511 07:36:41]  1: /private/var/folders/hn/gt2l8vq56vx9slvwry43xmz40000gn/T/tmp.A83ZihlF/hack/lib/golang.sh:601 kube::golang::build_some_binaries(...)
!!! [0511 07:36:41]  2: /private/var/folders/hn/gt2l8vq56vx9slvwry43xmz40000gn/T/tmp.A83ZihlF/hack/lib/golang.sh:736 kube::golang::build_binaries_for_platform(...)
!!! [0511 07:36:41]  3: hack/make-rules/build.sh:27 kube::golang::build_binaries(...)
!!! Error in /private/var/folders/hn/gt2l8vq56vx9slvwry43xmz40000gn/T/tmp.A83ZihlF/hack/lib/golang.sh:561
  Error in /private/var/folders/hn/gt2l8vq56vx9slvwry43xmz40000gn/T/tmp.A83ZihlF/hack/lib/golang.sh:561. 'go install "${build_args[@]}" "$@"' exited with status 1

And on >= 1.10.0 and < 1.12.0 (1.10.0 is the earliest I've tried so far), I get something like this...

F0511 07:39:30.480641   26683 openapi.go:116] Failed loading boilerplate: open /private/var/folders/hn/gt2l8vq56vx9slvwry43xmz40000gn/T/tmp.A83ZihlF/_output/local/go/src/k8s.io/gengo/boilerplate/boilerplate.go.txt: no such file or directory
!!! Error in ./hack/run-in-gopath.sh:33
  Error in ./hack/run-in-gopath.sh:33. '"${@}"' exited with status 255
Call stack:
  1: ./hack/run-in-gopath.sh:33 main(...)
Exiting with status 1
make[1]: *** [pkg/generated/openapi/zz_generated.openapi.go] Error 1
make: *** [generated_files] Error 2

EDIT: Never mind... it looks like it works if I build on a Linux machine. I was trying to do it from my Mac.

@ammmze,

Not exactly sure what is causing the issues on your end, but here are the details on mine:
Kubernetes - 1.10.2
Go - 1.9.4
I used WSL (probably Ubuntu 16.x) for cross-compiling those binaries.

Again, I followed the guide below with some modifications:
https://povilasv.me/raspberrypi-kubelet/
You can refer to it to confirm the steps to go through.

I have written up my notes and the exact steps I followed, but sorry, they are only available in Japanese:
https://qiita.com/ShinHashitani/items/ea9ffdefce8ca5786da6

Any movement on adding back armel support for Pi Zeros? I have quite a few lying around and would love to build a low-cost, low-power cluster for demo purposes.

Any movement on adding back armel support for Pi Zeros? I have quite a few lying around and would love to build a low-cost, low-power cluster for demo purposes.

Hi, as you can see in the discussion above, core Kubernetes dropped support for armv6l, so I don't think there is a chance this support will be re-added.

If you want to use k8s / kubeadm on armv6l, you must recompile everything (including the CNI images).

I'm just chiming in to say that I have successfully compiled K8s 1.18.3 from source in the golang:1.13-alpine Docker image, which is a multi-arch image and includes an armv6 variant. (I have Docker configured to use QEMU for emulation and can run containers for other architectures.)

By merely cloning the git repo and following the 4-step make process on the readme page (i.e. just running make all WHAT=cmd/component), all k8s components except kubelet compiled statically and run as standalone executables on my Pi Zero, with no dependencies. (And if golang-alpine stops working, I can just bootstrap Arch Linux ARM from scratch, which should work fine for compiling.)
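
Roughly, each build ran like this (a sketch from memory; the apk package list may need adjusting):

# Run the multi-arch Go image as armv6 under QEMU and build inside it.
docker run --rm --platform linux/arm/v6 \
  -v "$PWD":/go/src/k8s.io/kubernetes -w /go/src/k8s.io/kubernetes \
  golang:1.13-alpine \
  sh -c 'apk add --no-cache bash make gcc musl-dev rsync && make all WHAT=cmd/kubectl'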

The only issue is that kubelet still dynamically links against the system glibc, and I haven't yet figured out how to fix that. I'm not a Go programmer, and none of the compile flags I added for Go or for gcc seemed to make a difference. (Kubelet includes some C code, I guess, because it needs gcc to compile.) Worst case, I can bootstrap a Docker image for every type of OS I run so the glibc dynamic links work, but I don't want to do that.

Debian still officially supports armel and ships packages with a statically linked kubelet, so my hacky solution currently is to just use the static binary from inside the armel deb package.
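
(Concretely, the extraction is just something like this; the package name and the path inside it are assumptions based on Debian's kubernetes-node packaging:)

# Unpack Debian's armel package and lift out the static kubelet.
dpkg-deb -x kubernetes-node_*_armel.deb extracted/
sudo install -m 0755 extracted/usr/bin/kubelet /usr/local/bin/kubelet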

Lastly, you have to make your own repository with images containing these binaries (as well as the other versions) and configure kubeadm to pull those. Even more fun: although Docker runs on arm6, it incorrectly pulls arm7 images (a known bug for over 3 years), so you need to either change the arm7 image to just run the armel version, or put both arm6 and arm7 binaries in the same image and have the entrypoint be a shell script that determines at runtime whether to launch the arm6 or arm7 program. Non-master nodes only need to run kubelet and kube-proxy, so those are probably the only images you need to do this for. (Another hack I've read about is pulling the correct image and then re-tagging it locally as whatever image kubeadm wants to pull, so it will just use the local version.)
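
The runtime-dispatch entrypoint would look roughly like this (a sketch; the per-arch binary names are hypothetical):

#!/bin/sh
# Pick the binary matching the actual CPU, since Docker on arm6
# may pull the arm7 image by mistake.
case "$(uname -m)" in
  armv6l) exec /usr/local/bin/kube-proxy-arm6 "$@" ;;
  *)      exec /usr/local/bin/kube-proxy-arm7 "$@" ;;
esac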

I'm actually just using Ansible to set up k8s "the hard way", but I still intend to make compliant Docker images that can be drop-in replacements so kubeadm will work with them. If and when I can get kubelet to compile statically, I will automate the process into a Dockerfile and put the images on Docker Hub. Those images will cover as many architectures as I can use, so ideally we'll be able to use kubeadm on a multi-architecture cluster, e.g. amd64, arm64, arm6, and arm7. I estimate that full production Docker and K8s on Pi Zeros (as worker nodes) still leaves at least 50-100 MB of RAM for running small images. And if I strip down the kernel, I can probably free up another 30 or 40 MB. But that's far in the future. If I can get a single static page served by an nginx container managed by K8s on my Pi Zero, I'm calling that a win for the time being.


Edit from Aug 7: I have managed to get everything working, and currently have a K8s cluster composed of arm6, arm7, arm8, and amd64 nodes. I will write up my process here sometime soon, but for now, the important takeaway is that to do a kubeadm install on an arm6 device as a worker node, you need binaries for kubeadm and kubelet, plus only two containers: the pause container and the kube-proxy container. You can build the binaries natively with buildx if you have QEMU, and just modify my Dockerfile. (Right now, that Dockerfile doesn't actually work completely; the kube-controller-manager build keeps freezing up. But you can build kubelet, kubeadm, pause, kube-proxy, and the CNI plugins.)

Alternatively, you can pull the static binaries from the /usr/bin dir in the Arch packages I made for kubeadm and kubelet. I installed Arch Linux ARM on my Pi Zero, so the CNI plugins were installed on my system by a package, but you can build them with my Dockerfile (or pull them from the Arch Linux ARM package) and then place the CNI binaries in "/opt/cni/bin/" on your system. If those CNI binaries are in that folder, and kubelet is installed and ready as a service, then you can just run kubeadm on the device and it should work fine. The only requirement is that the correct kube-proxy and pause containers are already available to your container engine.
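
(i.e. something like the following, with whichever plugin binaries you built or extracted:)

# Place the CNI plugin binaries where kubelet looks for them.
sudo mkdir -p /opt/cni/bin
sudo cp bridge host-local loopback portmap /opt/cni/bin/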

On my Pi Zeroes I have stock Docker installed, and I used the binaries I built from the Dockerfile, combined with analysis of the official K8s containers, to build compatible arm6 containers for kube-proxy and pause. Specifying the Kubernetes version as v1.18.6 in kubeadm required re-tagging those containers as "k8s.gcr.io/kube-proxy:v1.18.6" and "k8s.gcr.io/pause:3.2" respectively, but if those containers are already present and tagged correctly on your system, kubeadm will succeed without complaint.
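
(The re-tagging itself is a one-liner per image; "myrepo" here is a stand-in for wherever your arm6 builds live:)

# Make kubeadm find the local arm6 images under the names it expects.
docker tag myrepo/kube-proxy-arm6:v1.18.6 k8s.gcr.io/kube-proxy:v1.18.6
docker tag myrepo/pause-arm6:3.2 k8s.gcr.io/pause:3.2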

The only other issue is a working overlay network. I didn't want to go through more compilation hell, so I used Flannel, whose "arm" variant works on arm6 and arm7. You can install it with their default YAML file. However, you should add an env var called FLANNEL_MTU to all sections and set it to 1430 or lower; the default, 1500, causes some issues with metrics-server. Additionally, I combined all of Flannel's images into one multi-arch image, if you want to use that. That will allow you to do what I did and strip the default YAML install file down to just one section.
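
(A sketch of that change against the stock manifest; the URL reflects where Flannel hosted it at the time and may have moved since:)

# Fetch the stock manifest...
curl -LO https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
# ...add the MTU override to the flannel container's env list, e.g.:
#   - name: FLANNEL_MTU
#     value: "1430"
# ...then apply it.
kubectl apply -f kube-flannel.yml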

With this "full" K8s installation using kubeadm and Docker CE, my Pi Zeroes idle at about 55% CPU usage, and have about 160MB memory free. If we assume I want to leave at least 25% for burst capacity, that still leaves about 20%, which equates to 200 millis. (Pi Zero has a single core 1GHz CPU.) To give some extra wiggle room, I rounded down and set my container request and limit to 120m, and RAM to 100MB. So far, everything works just fine. The only issue is heat, since my zeroes are all crammed together in a cute stackable case that doesn't have much air space.

(And of course, the manager node is not a Pi Zero, it's a Pi 4.)


Edit from Dec 1 2020: This will be my last update. In fact, there's not much to add. Kubeadm has a YAML configuration file, as do all the other k8s components, none of which are all that well documented... but you can muddle through if you try.

One of the kubeadm options is to use a custom registry for your images, so you can make a multi-arch image, push it to a private registry, and use that for your setup rather than the hack of simply re-tagging an image in Docker. This is what I have done in order to get rid of Docker and use straight containerd.
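
(A sketch of the relevant option; the registry name is hypothetical, and v1beta2 was the kubeadm config version current for 1.18:)

# Point kubeadm at a private registry instead of k8s.gcr.io.
cat >kubeadm-config.yaml <<'EOF'
apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
kubernetesVersion: v1.18.6
imageRepository: registry.example.com/k8s
EOF
kubeadm init --config kubeadm-config.yaml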

I still haven't figured out how to get the control-plane components compiled for arm6. Neither QEMU nor the native devices will allow more than 1 GB of RAM, which is not sufficient for Go to compile most of the control plane. I am aware that Go can in theory cross-compile for other architectures, so I should be able to compile for arm6 on my amd64 machine using all of its RAM, but for the life of me I can't get that to work, so I'm left compiling things natively in either QEMU or on the devices themselves. Which means no arm6 control-plane components.
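
(For the record, the cross-compile that should work in principle, though I haven't gotten it to produce a working build:)

# Cross-compile a control-plane component for arm6 on an amd64 box.
GOOS=linux GOARCH=arm GOARM=6 CGO_ENABLED=0 go build -o kube-apiserver ./cmd/kube-apiserver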

But that's the only hiccup. Kubelet and kubeadm compile, and the pause and kube-proxy containers likewise can be built with buildx. So it's still easy enough to get the worker-node components working for arm6. If you are building a cluster with Pi Zeroes, though, definitely read up on the kubelet configuration file in order to tune it for resource usage. (Or, you know, use k3s or another lightweight distro rather than full stock k8s.)

I have binaries for old Raspberry Pi models published here: https://github.com/aojea/kubernetes-raspi-binaries
They are created with a GitHub Actions job, so feel free to reuse it.
