Enhancements: IPv6 support added

Created on 1 Nov 2017  ·  99 Comments  ·  Source: kubernetes/enhancements

Feature Description

  • One-line feature description (can be used as a release note): Adds support for IPv6, allowing full Kubernetes capabilities using IPv6 networking instead of IPv4 networking.
  • Primary contact (assignee): @danehans
  • Responsible SIGs: sig-network
  • Kubernetes Enhancement Proposal PR: #1139
  • Reviewer(s) - (for LGTM) recommend having 2+ reviewers (at least one from code-area OWNERS file) agreed to review. Reviewers from multiple companies preferred: @thockin @bowei @luxas
  • Approver (likely from SIG/area to which feature belongs): @thockin
  • Feature target (which target equals to which milestone):

    • Alpha release target 1.9

kind/feature sig/network stage/beta tracked/no

All 99 comments

@danehans Thanks for filing this feature issue!
cc @idvoretskyi FYI

@danehans :wave: Please indicate in the 1.9 feature tracking board
whether this feature needs documentation. If yes, please open a PR and add a link to the tracking spreadsheet. Thanks in advance!

@zacharysarah I only have comment access to the 1.9 feature tracking board, so I added comments for the IPv6 docs requirement.

cc: @mmueen

@zacharysarah does the 1.9-release changelog need to be manually updated to reference any of the IPv6 PRs or https://github.com/kubernetes/kubernetes/issues/1443?

@danehans When you say changelog, do you mean the release notes?

/cc @Bradamant3 for release notes visibility

Yes, I am trying to understand if anything needs to be added to the 1.9 release notes, and if so, what process to follow. Thank you.

This should have a release note

/cc @Bradamant3 @nickchase Release note visibility! ☝️

@danehans 1.9 release note draft is here:
https://docs.google.com/document/d/1veHHyDH9VoNTwP6yy9u2Q7Jj1hnBJGVPv3NPHInImgU/edit

You can follow the guidance at the top of the doc.

xref: https://groups.google.com/forum/#!topic/kubernetes-sig-release/x6ySPIJkMN4 by @enisoc

@xiangpengzhao I have updated the 1.9 release notes with the IPv6 support details. Please let me know if additional IPv6 content is required for the 1.9 release notes.

@danehans I think the details you added are good enough :+1: . But personally I'd like to see the associated PRs (if existing) for the bullets.

  • IPv6 alpha support has been added. Notable IPv6 support details include:

    • Support for IPv6-only Kubernetes cluster deployments. This feature does not provide dual-stack support.

    • Support for IPv6 Kubernetes control and data planes.

    • Support for Kubernetes IPv6 cluster deployments using kubeadm.

    • Support for the iptables kube-proxy backend using ip6tables.

    • Relies on CNI 0.6.0 binaries for IPv6 pod networking.

    • Although other CNI plugins support IPv6, only the CNI bridge and host-local IPAM plugins have been tested for the alpha release.

    • Adds IPv6 support for kube-dns using SRV records.

    • Caveats:

      • HostPorts are not supported.

      • An IPv6 netmask for the pod or cluster CIDR must be /66 or longer. For example, 2001:db1::/66, 2001:dead:beef::/76, and 2001:cafe::/118 are supported; 2001:db1::/64 is not (see the sketch below).

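For reference, the prefix-length rule above can be checked mechanically; here is a minimal bash sketch (the check_v6_prefix helper is hypothetical, not part of any Kubernetes tooling):

```bash
# Hypothetical helper: check that a pod/cluster CIDR prefix is /66 or longer,
# per the alpha caveat above. Assumes a well-formed "<address>/<prefix>" input.
check_v6_prefix() {
  local cidr="$1"
  local prefix="${cidr##*/}"   # keep only the part after the "/"
  if [ "$prefix" -ge 66 ]; then
    echo "$cidr: supported (prefix /$prefix is /66 or longer)"
  else
    echo "$cidr: unsupported (prefix /$prefix is shorter than /66)" >&2
    return 1
  fi
}

check_v6_prefix 2001:db1::/66   # supported
check_v6_prefix 2001:db1::/64   # unsupported
```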

@danehans I took another look at the release note and found that you put the IPv6 details into the "Before Upgrading" section. I don't think we should put them there.

@xiangpengzhao The only concern I have is that several PRs were used for the different bullets.

@xiangpengzhao I have moved the IPv6 content for the 1.9 release notes. Please let me know if you have any further concerns.

@danehans That LGTM :)

I have been using Kubernetes IPv6-only (v1.8.x) for a while now, and the biggest problem to solve, I think, is detecting whether IPv6 is enabled so that IPv4 ClusterIPs (10.32.x.x) are no longer used on clusters.

@valentin2105 can you please open an issue in k/k to track this, if you think it is an issue that should be solved?
https://github.com/kubernetes/kubernetes/issues

@danehans
Any plans for this in 1.11?

If so, can you please ensure the feature is up-to-date with the appropriate:

  • Description
  • Milestone
  • Assignee(s)
  • Labels:

    • stage/{alpha,beta,stable}

    • sig/*

    • kind/feature

cc @idvoretskyi

@leblancd is leading the IPv6 charge. I will let him comment.

@justaugustus - This should probably be broken into 2 separate issues:
IPv6-Only support: Release 1.9, Alpha
Dual-Stack Support: Release 1.11, Alpha
I think this issue (#508) is sufficient for IPv6-Only support, and a new Issue will be needed for dual-stack.

/kind feature

@leblancd

  • Is there any work being planned for IPv6-only support in the 1.11 release? If so, can you let us know if it's tracking alpha, beta or stable, so we can set the milestone?
  • Would you mind opening an issue with the appropriate details for Dual-Stack support?

@justaugustus
IPv6-only works fine on the v1.9 & v1.10 releases, and in dual-stack too.

@justaugustus :
Dual-Stack feature issue

This feature currently has no milestone, so we'd like to check in and see if there are any plans for this in Kubernetes 1.12.

If so, please ensure that this issue is up-to-date with ALL of the following information:

  • One-line feature description (can be used as a release note):
  • Primary contact (assignee):
  • Responsible SIGs:
  • Design proposal link (community repo):
  • Link to e2e and/or unit tests:
  • Reviewer(s) - (for LGTM) recommend having 2+ reviewers (at least one from code-area OWNERS file) agreed to review. Reviewers from multiple companies preferred:
  • Approver (likely from SIG/area to which feature belongs):
  • Feature target (which target equals to which milestone):

    • Alpha release target (x.y)

    • Beta release target (x.y)

    • Stable release target (x.y)

Set the following:

  • Description
  • Assignee(s)
  • Labels:

    • stage/{alpha,beta,stable}

    • sig/*

    • kind/feature

Once this feature is appropriately updated, please explicitly ping @justaugustus, @kacole2, @robertsandoval, @rajendar38 to note that it is ready to be included in the Features Tracking Spreadsheet for Kubernetes 1.12.


Please note that Features Freeze is tomorrow, July 31st, after which any incomplete Feature issues will require an Exception request to be accepted into the milestone.

In addition, please be aware of the following relevant deadlines:

  • Docs deadline (open placeholder PRs): 8/21
  • Test case freeze: 8/28

Please make sure all PRs for features have relevant release notes included as well.

Happy shipping!

P.S. This was sent via automation

Hi @leblancd

Is there a plan to make IPv6 support beta in K8s 1.12? If you remember, I had asked about dual-stack support status in https://github.com/kubernetes/features/issues/563. As dual stack still has more work to be done, I am trying to figure out if we can live with only IPv6 support for now, but it's still in alpha. In case you can share a tentative date for when IPv6 can become beta/GA, it will be really helpful.

@navjotsingh83 Which point about IPv6 in Kubernetes is missing for you?

Hi @valentin2105

We have not configured and used IPv6 in K8s yet. But before going into that, the reason I posted this question is that it is still an alpha feature, so even if it works (which it might), we cannot have alpha features in production. We are now in the PoC/planning phase on whether we should deploy our app on K8s in the next release, so based on when the feature will become beta (at least) or GA (preferred), we will take a go/no-go decision.

Hi @navjotsingh83 - IPv6-only support should be Beta in K8s 1.13. What's missing for the IPv6-only feature to be considered Beta is Kubernetes IPv6-only CI, and this is in the works. Here is the initial proposal for a K8s CI (using a virtualized multinode cluster in a GCE environment): https://github.com/kubernetes/test-infra/pull/7529. This CI proposal has traction, but I was requested by the test-infra group to change this from using a GCE-based cluster to deploying a multinode cluster directly in a Prow container (to eliminate dependency on GCE operations). This results in a Docker-in-Docker-in-Docker architecture, which has been a bit challenging to get working. I expect to have a new CI PR out within a week that runs inside a local Prow container, but then that will need another round of reviews before getting merged.

Is there anything specific re. dual-stack support that you need? In other words, if IPv6-only support was beta/GA, would that be sufficient? I'm interested in hearing if what we've proposed in the dual-stack spec is on track for what you need.

Hi @leblancd @danehans
This enhancement has been tracked before, so we'd like to check in and see if there are any plans for this to graduate stages in Kubernetes 1.13. This release is targeted to be more ‘stable’ and will have an aggressive timeline. Please only include this enhancement if there is a high level of confidence it will meet the following deadlines:
Docs (open placeholder PRs): 11/8
Code Slush: 11/9
Code Freeze Begins: 11/15
Docs Complete and Reviewed: 11/27

Please take a moment to update the milestones on your original post for future tracking and ping @kacole2 if it needs to be included in the 1.13 Enhancements Tracking Sheet

We are also now encouraging that every new enhancement aligns with a KEP. If a KEP has been created, please link to it in the original post or take the opportunity to develop a KEP.

Thanks!

@leblancd thanks for the update here, very interesting. Glad to see this is finally coming to fruition with 1.13.

Hello,

I was wondering what the current state of IPv6 support is supposed to be. If I try to bootstrap a cluster using

kubeadm init --pod-network-cidr 2a0a:e5c0:102:3::/64 --apiserver-advertise-address=2a0a:e5c0:2:12:400:f0ff:fea9:c401 --service-cidr 2a0a:e5c0:102:6::/64

The result is that the API server is not reachable afterwards:

root@ubuntu:/etc/kubernetes/manifests# kubectl get nodes
The connection to the server localhost:8080 was refused - did you specify the right host or port?

Checking the created containers:

root@ubuntu:/etc/kubernetes/manifests# docker ps 
CONTAINER ID        IMAGE                  COMMAND                  CREATED             STATUS              PORTS               NAMES
a55623e52447        k8s.gcr.io/pause:3.1   "/pause"                 7 minutes ago       Up 7 minutes                            k8s_POD_kube-apiserver-ubuntu_kube-system_fec7f583ea75dd4fc232913538c9fba1_0
cefb94378d33        ab81d7360408           "kube-scheduler --ad…"   24 minutes ago      Up 24 minutes                           k8s_kube-scheduler_kube-scheduler-ubuntu_kube-system_44b569a35761491825f4e7253fbf0543_0
c569ef8d9e30        26e6f1db2a52           "kube-controller-man…"   24 minutes ago      Up 24 minutes                           k8s_kube-controller-manager_kube-controller-manager-ubuntu_kube-system_fe38083b94da6f6c5a89788091e3bcb6_0
a25693b556e5        3cab8e1b9802           "etcd --advertise-cl…"   24 minutes ago      Up 24 minutes                           k8s_etcd_etcd-ubuntu_kube-system_7db86297afa09dfaa5049a791ed76555_0
9e85d0f7873d        k8s.gcr.io/pause:3.1   "/pause"                 24 minutes ago      Up 24 minutes                           k8s_POD_kube-scheduler-ubuntu_kube-system_44b569a35761491825f4e7253fbf0543_0
d6516a6656a7        k8s.gcr.io/pause:3.1   "/pause"                 24 minutes ago      Up 24 minutes                           k8s_POD_kube-controller-manager-ubuntu_kube-system_fe38083b94da6f6c5a89788091e3bcb6_0
8dab4c0348a9        k8s.gcr.io/pause:3.1   "/pause"                 24 minutes ago      Up 24 minutes                           k8s_POD_kube-apiserver-ubuntu_kube-system_84183f750feaa89bfaa9d456805fdc7a_0
b561f8c07ff7        k8s.gcr.io/pause:3.1   "/pause"                 24 minutes ago      Up 24 minutes                           k8s_POD_etcd-ubuntu_kube-system_7db86297afa09dfaa5049a791ed76555_0

There seems to be no port mapping for 8080 - shouldn't there be one?
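
For reference (not from the thread): the localhost:8080 error above usually just means kubectl has no kubeconfig and is falling back to its default server; the API server itself may also be failing, since no running kube-apiserver container appears in the docker ps output. After a successful kubeadm init, kubectl is pointed at the generated admin kubeconfig:

```bash
# kubectl needs the admin kubeconfig that kubeadm init generates; without it,
# kubectl tries http://localhost:8080 and reports "connection refused".
export KUBECONFIG=/etc/kubernetes/admin.conf
kubectl get nodes
```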

Hi @telmich,

I don't know a lot about kubeadm, but what I do know is that IPv6 on Kubernetes works just fine.

Looking at your command, I suggest you use brackets around your v6 addresses, like [2a0a:e5...]

Hey @valentin2105 !

It's great to hear that IPv6 should work, but how would I bootstrap a Kubernetes cluster without kubeadm?

Re: the [] syntax: that is usually used for a single IPv6 address, not for ranges, and kubeadm fails right away when using it:

root@k8s1:~# kubeadm init --pod-network-cidr '[2a0a:e5c0:102:3::/64]' --service-cidr '[2a0a:e5c0:102:6::/64]'
[serviceSubnet: Invalid value: "[2a0a:e5c0:102:6::/64]": couldn't parse subnet, podSubnet: Invalid value: "[2a0a:e5c0:102:3::/64]": couldn't parse subnet, KubeProxyConfiguration.ClusterCIDR: Invalid value: "[2a0a:e5c0:102:3::/64]": must be a valid CIDR block (e.g. 10.100.0.0/16)]
root@k8s1:~# 

I took the time to write down my findings so far at https://redmine.ungleich.ch/issues/6255, and my claim at the moment is that there is no way to set up an IPv6-only cluster using kubeadm.

Given that I am a k8s newbie, I wonder 2 things:

  • Am I wrong about my claim?
  • If I am right about my claim, how do I nonetheless set up a Kubernetes cluster in IPv6-only mode?

For the last question: I already had a look at the different phases of kubeadm and tried to replicate the init phase by phase, but what is unclear to me is when/how to modify the options for etcd, the apiserver, and the controller manager so that I can control them in a more fine-grained way.
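
For reference (not from the thread): kubeadm's --config file is the usual mechanism for that kind of fine-grained control. Below is a minimal sketch of an IPv6-only configuration, reusing the addresses from the failing command above. The field names come from kubeadm's kubeadm.k8s.io/v1beta2 config API (use whichever config version your kubeadm ships); the pod subnet is tightened to /66 to satisfy the alpha caveat earlier in the thread, and the service subnet is deliberately small.

```bash
# Sketch only: drive kubeadm from a config file instead of flags.
cat <<'EOF' > kubeadm-ipv6.yaml
apiVersion: kubeadm.k8s.io/v1beta2
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: "2a0a:e5c0:2:12:400:f0ff:fea9:c401"
---
apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
networking:
  podSubnet: "2a0a:e5c0:102:3::/66"
  serviceSubnet: "2a0a:e5c0:102:6::/112"
EOF
kubeadm init --config kubeadm-ipv6.yaml
```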

You should debug which component crashes when you launch them with kubeadm; maybe it's about CNI? I don't know if the host-local bridge supports IPv6 in CNI.

If you want to try to set up v6-only, and maybe compare with your kubeadm setup, check my Saltstack recipe, which works with IPv6; I use it in a few v6-only clusters.
https://github.com/valentin2105/Kubernetes-Saltstack

@danehans Hello - I'm the enhancements lead for 1.14 and I'm checking in on this issue to see what work (if any) is being planned for the 1.14 release. Enhancements Freeze is Jan 29th, and I want to remind you that all enhancements must have a KEP.

@claurence No IPv6 work is being planned for 1.14.

Hello @danehans, I'm the Enhancement Lead for 1.15. Is this feature going to be graduating alpha/beta/stable stages in 1.15? Please let me know so it can be tracked properly and added to the spreadsheet. All enhancements require a KEP before being promoted as well.

Once coding begins, please list all relevant k/k PRs in this issue so they can be tracked properly.

@thockin @BenTheElder I think it is feasible to graduate IPv6-only clusters to beta in 1.15.
If we merge https://github.com/kubernetes-sigs/kind/pull/348, I can work on the failing tests (https://github.com/kubernetes/kubernetes/issues/70248) during this cycle and add a job to the CI.
What do you think?

@kacole2 Unfortunately, I am no longer working on IPv6.

Is anyone working on native (non-dual-stack) IPv6 support in k8s?

@telmich This should be possible today. Take a look at https://github.com/leblancd/kube-v6 for a good walkthrough of the current state.

Hi @danehans - I'm an Enhancements shadow for 1.16.

Is this feature going to be graduating alpha/beta/stable stages in 1.16? Please let me know so it can be added to the 1.16 Tracking Spreadsheet.

Once development begins or if it already has, please list all relevant k/k PRs in this issue so they can be tracked properly.

I noticed there's no KEP linked in the issue description; as a reminder, every enhancement requires a KEP in an implementable state, with Graduation Criteria explaining the requirements for each alpha/beta/stable stage.

As a reminder, 1.16 milestone dates are: Enhancement Freeze 7/30 and Code Freeze 8/29.

Thanks!

@mariantalla I am no longer working on the feature. You may want to ask sig network to see if anyone else is planning to handle the feature graduation.

@lachie83 this is something to bring up with SIG-Network in your meetings.

Yep. Let me get this on the SIG-networking agenda

I will submit a PR with a KEP for graduating IPv6 to Beta during this cycle

Thanks @aojea, I'll add both (this and https://github.com/kubernetes/enhancements/issues/1138) as tracked for v1.16, targeting beta, and At Risk while the KEP is not merged.

Are you good with me assigning you as an owner for this issue too, and unassigning @danehans to save his inbox?

@mariantalla 👍

Hey, @aojea I'm the v1.16 docs release lead.

Does this enhancement (or the work planned for v1.16) require any new docs (or modifications)?

Just a friendly reminder we're looking for a PR against k/website (branch dev-1.16) due by Friday, August 23rd. It would be great if it's the start of the full documentation, but even a placeholder PR is acceptable. Let me know if you have any questions!

@neolit123 @timothysc I think I can add a section to the kubeadm docs (https://github.com/kubernetes/website/tree/master/content/en/docs/setup/production-environment/tools/kubeadm), like "Configuring your Kubernetes cluster to use IPv6". What do you think? Does it work for you?

@aojea
that depends on what you plan to add to the docs.

these sections already mention that IPv6 is supported by kubeadm:
https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/create-cluster-kubeadm/#initializing-your-control-plane-node
https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/create-cluster-kubeadm/#pod-network

/assign @aojea
/unassign @danehans

Hey @aojea - just a quick reminder that Enhancements Freeze 🥶 is tomorrow. This enhancement is at risk at the moment, because its KEP is not merged yet.

I believe the KEP is being tracked in #1138. Can we collapse that into this issue?

these sections already mention that IPv6 is supported by kubeadm:
https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/create-cluster-kubeadm/#initializing-your-control-plane-node
https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/create-cluster-kubeadm/#pod-network

@simplytunde seems that the user-facing documentation is already covered

seems that the user-facing documentation is already covered

I wouldn't say kubeadm is the only user-facing documentation for IPv6 support, but I will defer to SIG Network and the maintainers of this feature.

@aojea @danehans @lachie83
Enhancement Freeze has passed for 1.16. The KEP at #1139 was never merged, so this is being removed from the 1.16 milestone. If you would like it to be re-added, please file an exception; it will require approval from the release lead.

/milestone clear

@kacole2 thanks for following up, let's target 1.17 then.

Hey there @aojea -- 1.17 Enhancements lead here. I wanted to check in and see if you think this Enhancement will be graduating to alpha/beta/stable in 1.17?

The current release schedule is:

Monday, September 23 - Release Cycle Begins
Tuesday, October 15, EOD PST - Enhancements Freeze
Thursday, November 14, EOD PST - Code Freeze
Tuesday, November 19 - Docs must be completed and reviewed
Monday, December 9 - Kubernetes 1.17.0 Released

If you do, once coding begins please list all relevant k/k PRs in this issue so they can be tracked properly. 👍

Thanks!

@mrbobbytables according to the KEP there is only one thing missing: the CI job in a cloud provider.

IPv6 Graduating to beta

Graduation Criteria

Awesome. Will add it to be tracked as graduating to beta 👍

Hello @aojea I'm one of the v1.17 docs shadows.
Does this enhancement (or the work planned for v1.17) require any new docs (or modifications to existing docs)? If not, can you please update the 1.17 Enhancement Tracker Sheet (or let me know and I'll do so)?

If so, just a friendly reminder we're looking for a PR against k/website (branch dev-1.17) due by Friday, November 8th; it can just be a placeholder PR at this time. Let me know if you have any questions!

@irvifa
Do you mind updating the enhancement tracker sheet?
We won't require more docs as explained here https://github.com/kubernetes/enhancements/issues/508#issuecomment-516064858

Okay, thanks for the confirmation @aojea. I've updated the tracking sheet as requested.

Hey there @aojea , 1.17 Enhancements lead here 👋 It doesn't look like there are any k/k PRs or the like that are outstanding, but how are things looking regarding the last task? "_It has CI using at least one Cloud Provider_"

Hey there @aojea , 1.17 Enhancements lead here It doesn't look like there are any k/k PRs or the like that are outstanding, but how are things looking regarding the last task? "_It has CI using at least one Cloud Provider_"

it's WIP at https://github.com/kubernetes-sigs/cluster-api-provider-aws/pull/1322, but I can't guarantee I can make it :man_shrugging:

👋 Hey there @aojea. Code freeze is at 5pm PT today for the 1.17 release cycle.
Do you think https://github.com/kubernetes-sigs/cluster-api-provider-aws/pull/1322 will be merged by then? 😬

I know it's not a part of k/k and not exactly subject to the freeze, but ideally we'd have everything in by then.

let's target 1.18 @mrbobbytables, this has a lot of unknowns and I can't dedicate enough time :man_shrugging:

Will do! Thanks for the quick reply 👍
/milestone v1.18

Hi @aojea -- 1.18 Enhancements shadow here. I wanted to check in and see if you think this Enhancement will be graduating to [alpha|beta|stable] in 1.18?
The current release schedule is:
Monday, January 6th - Release Cycle Begins
Tuesday, January 28th EOD PST - Enhancements Freeze
Thursday, March 5th, EOD PST - Code Freeze
Monday, March 16th - Docs must be completed and reviewed
Tuesday, March 24th - Kubernetes 1.18.0 Released
To be included in the release, this enhancement must have a merged KEP in the implementable status. The KEP must also have graduation criteria and a Test Plan defined.
If you would like to include this enhancement, once coding begins please list all relevant k/k PRs in this issue so they can be tracked properly. 👍
We'll be tracking enhancements here: http://bit.ly/k8s-1-18-enhancements
Thanks!

hi @kikisdeliveryservice
The KEP (https://github.com/kubernetes/enhancements/pull/1139) was merged and is implementable.

Only one item is missing:

It has CI using at least one Cloud Provider

and we'll be able to graduate IPv6 to beta in 1.18

cc: @lachie83 @aramase

thanks @aojea !

Hey @aojea -

Seth here, Docs shadow on the 1.18 release team.

Does this enhancement work planned for 1.18 require any new docs or modifications to existing docs?

If not, can you please update the 1.18 Enhancement Tracker Sheet (or let me know and I'll do so)

If doc updates are required, reminder that the placeholder PRs against k/website (branch dev-1.18) are due by Friday, Feb 28th.

Let me know if you have any questions!

@sethmccombs one question: does a blog post about the feature count as a doc update?

Hi @aojea!

As a reminder, Code Freeze is Thursday, March 5th. Can you please link all the k/k PRs or any other PRs that should be tracked for this enhancement?

Thanks!
The 1.18 Enhancements Team

@aramase do you have a link to track the IPv6 job on Azure, so the enhancements team can track the feature?

@aojea - The website repo holds blog posts, but the release process for them is a little different vs. regular docs. I can get more info (CC-ing @karenhchu as comms lead).

hey @aojea @aramase could you link to that PR for the IPv6 job on Azure for us?

@jeremyrickard I've created the placeholder PR in test-infra for the job - https://github.com/kubernetes/test-infra/pull/16461

Other PRs that'll need to be merged first:
https://github.com/kubernetes/kubernetes/pull/88448
https://github.com/Azure/aks-engine/pull/2781

I'm finishing up testing and will then remove the WIPs.

Hi @aojea @aramase
The docs placeholder PR deadline is tomorrow. If this enhancement needs docs, please raise a placeholder PR against the dev-1.18 branch ASAP.

Thanks!

/milestone clear

Hi, @aojea @aramase. As there is no docs placeholder PR for this enhancement, we didn't receive any update on the docs front, and we have crossed the docs placeholder PR deadline, we are removing this enhancement from the 1.18 release. If you want to request an exception, please refer to https://github.com/kubernetes/sig-release/blob/master/releases/EXCEPTIONS.md

Thanks!

Sorry, there is no doc update needed :smile:

/milestone v1.18

Which IPv6 feature landed in Kubernetes 1.18? Is it dual stack, or IPv6-only graduated to beta?
I'm confused because nothing changed in the docs.

Which IPv6 feature landed in Kubernetes 1.18? Is it dual stack, or IPv6-only graduated to beta?
I'm confused because nothing changed in the docs.

This issue tracks IPv6-only, which graduated to beta in 1.18.

Dual stack is alpha and is tracked in another issue/KEP.

/milestone clear

(removing this enhancement issue from the v1.18 milestone as the milestone is complete)

Hi @aojea @danehans ,

1.19 Enhancements shadow here. I wanted to check in and see if you think this Enhancement will be graduating in 1.19?

In order to have this part of the release:

The KEP PR must be merged in an implementable state
The KEP must have test plans
The KEP must have graduation criteria.

The current release schedule is:

Monday, April 13: Week 1 - Release cycle begins
Tuesday, May 19: Week 6 - Enhancements Freeze
Thursday, June 25: Week 11 - Code Freeze
Thursday, July 9: Week 14 - Docs must be completed and reviewed
Tuesday, August 4: Week 17 - Kubernetes v1.19.0 released

Please let me know and I'll add it to the 1.19 tracking sheet (http://bit.ly/k8s-1-19-enhancements). Once coding begins please list all relevant k/k PRs in this issue so they can be tracked properly. 👍

Thanks!

Thanks @kikisdeliveryservice, but I think we should focus now on dual stack.
No change here during this release.

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale

/remove-lifecycle stale

Hi @aojea @danehans

Enhancements Lead here. Any plans to graduate this in 1.20?

Thanks,
Kirsten

Hi @aojea @danehans

Enhancements Lead here. Any plans to graduate this in 1.20?

nope :smile:

thanks for the update!

Is there any documentation on how to verify IPv6-only operation? Google only leads me to IPv4 or IPv4/IPv6 dual-stack.

Is there any documentation on how to verify IPv6-only operation? Google only leads me to IPv4 or IPv4/IPv6 dual-stack.

what do you mean by "verify"?

The installation for IPv6-only is the same as for IPv4; you just need to use IPv6 addresses and subnets in your configuration. No additional changes are needed.
https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/create-cluster-kubeadm/#initializing-your-control-plane-node

Can confirm. I run 1.17.x in IPv6-only mode. Just follow the IPv4 guide and use IPv6 addresses. That's basically it.
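
For reference, a minimal sketch of such an init (not from the thread): the addresses use the 2001:db8::/32 documentation prefix and are illustrative only, and the pod CIDR here also satisfies the /66-or-longer restriction from the alpha release notes above.

```bash
# Illustrative only: IPv6-only kubeadm init with example subnets.
kubeadm init \
  --apiserver-advertise-address=2001:db8::10 \
  --pod-network-cidr=2001:db8:1::/66 \
  --service-cidr=2001:db8:2::/112
```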

