controllerManagerExtraArgs, node-cidr-mask-size
BUG REPORT

kubeadm version (use `kubeadm version`): v1.9.3

Environment:
- Kubernetes version (use `kubectl version`): v1.9.3
- OS (e.g. from /etc/os-release):
  NAME="Container Linux by CoreOS"
  ID=coreos
  VERSION=1576.4.0
  VERSION_ID=1576.4.0
  BUILD_ID=2017-12-06-0449
  PRETTY_NAME="Container Linux by CoreOS 1576.4.0 (Ladybug)"
  ANSI_COLOR="38;5;75"
  HOME_URL="https://coreos.com/"
  BUG_REPORT_URL="https://issues.coreos.com"
  COREOS_BOARD="amd64-usr"
- Kernel (e.g. `uname -a`): Linux k8s-master 4.13.16-coreos-r2 #1 SMP Wed Dec 6 04:27:34 UTC 2017 x86_64 Intel(R) Xeon(R) CPU E3-1505M v5 @ 2.80GHz GenuineIntel GNU/Linux

I was specifying `node-cidr-mask-size` under `controllerManagerExtraArgs` in a kubeadm config file to set the flag. kube-controller-manager was started with two instances of `--node-cidr-mask-size` passed to it. The second instance was the default that kubeadm adds, and it overrode the value I was attempting to set.

kube-controller-manager should have been started with a single instance of `--node-cidr-mask-size`, or at the least the one I specified should have come second so that it would take precedence.
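The override behavior follows from how Go flag parsers handle repeated flags: each occurrence overwrites the previous value, so whichever instance appears last on the command line wins. A minimal sketch with the standard library's `flag` package (the flag name mirrors the real kube-controller-manager flag, but this is an illustration, not kube-controller-manager's actual parsing code):

```go
package main

import (
	"flag"
	"fmt"
)

// parseMask mimics how a Go flag parser treats a repeated flag:
// each occurrence overwrites the previous value, so the last one wins.
func parseMask(args []string) int {
	fs := flag.NewFlagSet("kube-controller-manager", flag.ContinueOnError)
	mask := fs.Int("node-cidr-mask-size", 24, "mask size for node pod CIDRs")
	_ = fs.Parse(args)
	return *mask
}

func main() {
	// Flag order as in the manifest kubeadm generated:
	// the user's value first, kubeadm's appended default second.
	fmt.Println(parseMask([]string{
		"--node-cidr-mask-size=120",
		"--node-cidr-mask-size=24",
	})) // prints 24: the appended default silently wins
}
```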
To reproduce, run `kubeadm init --config=myconfig.yaml` with the following as the contents of the file `myconfig.yaml`:
```yaml
apiVersion: kubeadm.k8s.io/v1alpha1
kind: MasterConfiguration
api:
  advertiseAddress: "fd10::101"
networking:
  serviceSubnet: fd30::0/110
  podSubnet: "fd20:0::/120"
controllerManagerExtraArgs:
  node-cidr-mask-size: "120"
```
This was an issue for me when trying to use kubeadm to set up an IPv6 cluster and specify a `podSubnet`.
Still present in 1.9.6.
It should really just be an option under the `networking` key; currently kubeadm just puts `--node-cidr-mask-size` in the config twice, at the start and at the end.
@tmjd I've worked around it by moving `networking.podSubnet` to `controllerManagerExtraArgs.cluster-cidr` and adding `allocate-node-cidrs` too, just like the kubeadm code would have if `podSubnet` had been specified directly.
Modifying your original example, it becomes:
```yaml
apiVersion: kubeadm.k8s.io/v1alpha1
kind: MasterConfiguration
api:
  advertiseAddress: "fd10::101"
networking:
  serviceSubnet: fd30::0/110
  # MOVED BELOW podSubnet: "fd20:0::/120"
controllerManagerExtraArgs:
  allocate-node-cidrs: "true"
  cluster-cidr: "fd20:0::/120"
  node-cidr-mask-size: "120"
```
This workaround is essentially circumventing kubeadm's `if` statement here:
https://github.com/kubernetes/kubernetes/blob/86a58202b68d04b2e31b56db80b4d2a4dec77c93/cmd/kubeadm/app/phases/controlplane/manifests.go#L336-L342
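For context, the linked code appends `--allocate-node-cidrs`, `--cluster-cidr`, and `--node-cidr-mask-size` to the controller-manager arguments whenever `podSubnet` is set. A rough Go sketch of that shape (hypothetical function name and a hard-coded stand-in default; not kubeadm's actual code) shows why the duplicate appears and why emptying `podSubnet` avoids it:

```go
package main

import "fmt"

// controllerManagerArgs is an illustrative sketch (hypothetical names,
// not kubeadm's actual code) of the linked if statement: user-supplied
// extra args are emitted first, and when a pod subnet is configured
// kubeadm appends its own node-CIDR flags after them, so its default
// --node-cidr-mask-size lands last and takes precedence.
func controllerManagerArgs(podSubnet string, extraArgs map[string]string) []string {
	var args []string
	for k, v := range extraArgs {
		args = append(args, "--"+k+"="+v)
	}
	if podSubnet != "" {
		args = append(args,
			"--allocate-node-cidrs=true",
			"--cluster-cidr="+podSubnet,
			"--node-cidr-mask-size=24", // stand-in for kubeadm's default
		)
	}
	return args
}

func main() {
	// With podSubnet set, the user's flag is duplicated by the default:
	for _, a := range controllerManagerArgs("fd20:0::/120",
		map[string]string{"node-cidr-mask-size": "120"}) {
		fmt.Println(a)
	}
}
```

The workaround above leaves `podSubnet` unset, so the `if` body never runs and the extra args are the only flags emitted.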
It doesn't appear that kubeadm uses the `podSubnet` for anything other than validation, but I could be wrong.
I would still prefer to be able to override the node CIDR mask size in a more supported fashion, though.
@tmjd The fix should land in the next kubeadm release from master.
What is the status on this?
I am currently setting up a Kubernetes cluster with:
kubeadm version: &version.Info{Major:"1", Minor:"10", GitVersion:"v1.10.4", GitCommit:"5ca598b4ba5abb89bb773071ce452e33fb66339d", GitTreeState:"clean", BuildDate:"2018-06-06T08:00:59Z", GoVersion:"go1.9.3", Compiler:"gc", Platform:"linux/amd64"}
The issue is still occurring here.