Kubernetes: Support the user flag from docker exec in kubectl exec

Created on 16 Aug 2016  ·  97 Comments  ·  Source: kubernetes/kubernetes

It looks like docker exec is being used as the backend for kubectl exec. docker exec has the --user flag, which allows you to run a command as a particular user. This same functionality doesn't exist in Kubernetes.

Our use case is that we spin up pods, and execute untrusted code in them. However, there are times when after creating the pod, we need to run programs that need root access (they need to access privileged ports, etc).

We don't want to run the untrusted code as root in the container, which prevents us from just escalating permissions for all programs.

I looked around for references to this problem, but only found this StackOverflow answer from last year -- http://stackoverflow.com/questions/33293265/execute-command-into-kubernetes-pod-as-other-user .

There are some workarounds to this, such as setting up a server in the container that takes commands in, or defaulting to root, but dropping to another user before running untrusted code. However, these workarounds break nice Kubernetes/Docker abstractions and introduce security holes.

area/kubectl sig/cli sig/node

Most helpful comment

An additional use case - you're being security conscious so all processes running inside the container are not privileged. But now something unexpectedly isn't working and you want to go in as root to e.g. install debug utilities and figure out what's wrong on the live system.

All 97 comments

SGTM. @kubernetes/kubectl any thoughts on this?

It's not unreasonable, but we'd need pod security policy to control the user input and we'd probably have to disallow user by name (since we don't allow it for containers - you must specify UID).

@sttts and @ncdc re exec

Legitimate use-case

Any update on this?

My app container image is built using buildpacks. I'd like to open a shell. When I do, I am root, and all the env vars are set. But the buildpack-generated environment is not there. If I open a login shell for the app user (su -l u22055) I have my app environment, but now the kubernetes env vars are missing.

I thought su -l didn't copy env vars? You have to explicitly do the copy yourself or use a different command.


@miracle2k - Have you tried su -m -l u22055? -m is supposed to preserve environment variables.

@adarshaj @smarterclayton Thanks for the tips. su -m has its own issues (the home dir is wrong), but I did make it work in the meantime. The point, though - and that's why I posted it here - is that I'd like to see "kubectl exec" do the right thing. Maybe even use the user that the Dockerfile defines.

Here is an example how I need this functionality.

The official Jenkins image runs as the jenkins user. I have a persistent disk attached that I need to resize. If kubectl exec had a --user flag I could bash in as root and run resize2fs. Unfortunately, without it, this is an extreme pain.
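By way of illustration, the requested flag might look something like this (hypothetical syntax for a flag that does not exist in kubectl today; the pod name and device are illustrative, and a numeric UID is used per the earlier note that users would likely have to be specified by UID):

```
# hypothetical: exec into the pod as uid 0 (root)
kubectl exec -it jenkins-0 --user=0 -- bash
# then, inside the container, resize the filesystem as root
resize2fs <device>
```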

An additional use case - you're being security conscious so all processes running inside the container are not privileged. But now something unexpectedly isn't working and you want to go in as root to e.g. install debug utilities and figure out what's wrong on the live system.

Installing stuff for debugging purposes is my use case as well. Currently I ssh into the nodes running kubernetes, and use docker exec directly.

What's the status on this? This functionality would be highly useful

I didn't check, but does the --as and --as-group global flags help here? Do they even work with exec? cc @liggitt

I didn't check, but does the --as and --as-group global flags help here? Do they even work with exec? cc @liggitt

No, those have to do with identifying yourself to the kubernetes API, not passing through to inform the chosen uid for the exec call

The lack of the user flag is a hassle. My use case: I have a container that runs as an unprivileged user, and I mount a volume on it, but the volume folder is not owned by that user. There is no option to mount the volume with specified permissions. I can't use an entrypoint script to change the permissions because that runs as the unprivileged user. I can't use a lifecycle.preStart hook because that runs as the unprivileged user too. kubectl exec -u root could do that, if the '-u' option existed.

I guess though this should be an additional RBAC permission, to allow/block 'exec' as other than the container user.

Ideally the lifeCycle hooks should be able to run as root in the container, even when the container does not. Right now the best alternative is probably to run an init container against the same mount; kind of an overhead to start a separate container and mount volumes, when really I just need a one-line command as root at container start.
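For reference, the closest workaround today for the volume-ownership case above is to go through the node, as later comments describe. A rough sketch, assuming SSH access to the node and a Docker runtime (the pod name, node address, uid, and mount path are all illustrative):

```
# grab the Docker container ID of the pod's first container
CONTAINER_ID=$(kubectl get pod mypod -o jsonpath='{.status.containerStatuses[0].containerID}' | cut -d/ -f3)

# on the node running the pod, chown the mounted volume as root inside the container
ssh <node-ip> sudo docker exec -u 0 "$CONTAINER_ID" chown -R 1000:1000 /data
```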

/sig cli

+1 for this feature. Not having this makes debugging things a lot more painful.

+1 for this feature. I have to rebuild my docker container and make sure the Dockerfile has USER root as the last line, then debug, then disable this.

docker command line seems to have a --user flag

johnjjung, if you have ssh access to the node you can connect to the container using docker with the user flag which might save you a bit of time.

Hmm, awesome let me try this


+1, this is really an issue. I have to ssh to the node and then run docker exec, which is annoying.

/cc @frobware

+1 please. Don't want to use docker exec -u as a workaround anymore.

+1 - kubectl exec is such a massive time saver over docker exec that it makes me cry every time I have to revert back to docker exec to specify the user.

considerations:

  • uid would need to be part of the exec interface for CRI, and all runtimes would need to support it
  • would need to figure out what the authorization check would be w.r.t. pod security policy restrictions on uids

+1 as this will prevent insecure workarounds we've unfortunately made a habit of (i.e., setting runAsUser to root...and forgetting to revert the value when deploying publicly 😮 ).

+1

+1 for me; I really need a --user flag to log in to a pod and make changes without redeploying the container with root as the run user.

This issue appears to have pretty strong support, is there anything that can be done to get it prioritized? (yes, yes, patches accepted :) )

+1 that would be handy to have this ability to exec with a defined user

I think implementation of this might be tricky because of security concerns? For example if the Dockerfile was initially set as a non-root user, should you allow it to execute commands as a root user now? -- How did docker's user flag handle this? If they handled it appropriately I think we just need to patch the kubectl exec command to pass this along..

Can anyone tell me if this is an appropriate strategy? Then I can get started on a PR

@johnjjung Yes, I believe the strategy here is patching kubectl exec to pass the docker --user flag through.

Also, the security concerns seem minor since we have implicit authentication from GCP in order to connect from kubectl.

Also, the security concerns seem minor since we have implicit authentication from GCP in order to connect from kubectl.

How so? There are controls over what user is allowed in the pod spec. I'd expect equivalent controls over any user id specified in an exec call. I have similar questions about whether an exec can be run as root when the original container was not. If so, then we can't expose that without thought as to how the pod security context and pod security policy controls would apply to this new option

@liggitt That is no different than setting the user in the Dockerfile itself, and yet, still being able to execute as root with Docker's user flag.

Docker's primary concern seems to be around preventing root access from _within_ a _running_ container, were it to be compromised. At some point, trust must be given to actually allow users to develop and work with the tool.

For example if the Dockerfile was initially set as a non-root user, should you allow it to execute commands as a root user now? -- How did docker's user flag handle this?

@johnjjung it's like @jordanwilson230 says, the docker runtime argument overrides the Dockerfile directive. Off the top of my head, this is how most runtime arguments work (like port numbers).

I have similar questions about whether an exec can be run as root when the original container was not

@liggitt Just to clarify, the role of the user id defined in the pod spec will not change and will continue to have the intended effect. The container (and its processes) will be launched as that user. The issue raised here was to allow (already) authenticated users a manual option to exec with root -- or any user, mainly for debugging purposes.

The issue raised here was to allow (already) authenticated users a manual option to exec with root -- or any user, mainly for debugging purposes.

And my point is that the controls that we have today to prevent a user from running a container as root would need to apply here as well to prevent them from execing with root.

@liggitt I'm sorry, I don't quite know how to explain in a more clear way. Regarding your above comment, try this:

  • ssh onto the node running your pod (my pod is running Kafka, with the kafka user also set in the pod.spec):

    jordan@gke-my-default-pool-dsioiaag-i9f3 ~ $ docker exec -it -u root myKafkaContainer bash
    root@kafka-0:/# echo "I've exec'd into my container as $(whoami) despite defining the kafka user in pod.spec...."
    I've exec'd into my container as root despite defining the kafka user in pod.spec....
    root@kafka-0:/#

Having established that this is already possible from a GKE perspective (albeit in a seriously annoying way), what new security concerns are you thinking of? This issue is not reinventing the wheel, it's about providing a convenience.

Having established that this is already possible from a GKE perspective (albeit in a seriously annoying way), what new security concerns are you thinking of?

The issue is exposing power via the kubernetes API that is not there today. It is normal for a user to be allowed to run workloads via the API and not be allowed ssh access to the node.

It is great to have function via the API, as long as:

  1. cluster admins have the ability to control via policy whether it is allowed
  2. the way that policy is expressed is coherent with existing policy (in this case, it would ideally be done based on the same PodSecurityPolicy mechanism)
  3. the new function doesn't open a hole in a previously secured system (defaults off, or is gated based on existing policy)

@liggitt Absolutely right. So, having narrowed down the scope of the security implications to _node_ access, we (@johnjjung and others) can finally start to use this discussion for actionable substance. I've started a kubectl plugin for execing as root on the GKE platform. It will take a bit of time for me to get around to AWS and others. @johnjjung are you still willing to pick up on your end?

@liggitt Just saw the edit you made, that will be useful moving forward on this. Thanks.

For related discussion, see the proposal to allow running arbitrary containers, including separately specified user IDs, in an existing pod - https://github.com/kubernetes/community/pull/1269

The more the security aspects of what you want to run depart from the original container, the more important it is for the entire spec to be able to be validated by existing admission mechanisms as a coherent container

@jordanwilson230 I've been reviewing the codebase and found the points of integration for kubectl exec commands, which is a good starting point. I'm not sure where I'd have to make changes in the rest of the kubernetes codebase so that we can allow the docker -u flags. That being said, I think I can start somewhere there, and circle back to this issue with cluster admins / pod security policies, etc... One step at a time.

If anyone knows where to make the changes for how kubernetes would pass the docker --user flag, that'd be helpful.

Hi @johnjjung. Once upon a time (<1.6), Kubernetes kubelets did use Docker directly and could just pass options like this. Now there is a standard interface to support multiple runtimes, CRI. It is the capabilities of CRI that determine what instructions can be passed to the runtimes, like Docker.

Kubernetes tells the various container runtimes (dockershim+Docker+containerd, cri-containerd+containerd, rkt, cri-o, lxd, Frakti etc.) what to do using the CRI interface - either directly, for native CRI runtime implementations like cri-o, or via a shim like dockershim or cri-containerd. So in order to do what we want here (add processes to existing containers running as a different user than the container was started as), you first need to get the CRI spec extended to support that option (e.g. maybe ExecSync needs a uid option, and LinuxContainerSecurityContext needs to not prohibit it; I think @liggitt said as much above).

Then each of the container runtimes or runtime shims can go ahead and implement support for it. For Docker that means extending the dockershim implementation to support the addition to the CRI spec/interface. However, I expect dockershim+Docker is going to be deprecated in favor of cri-containerd+containerd within a few more releases, so we are probably better off focusing on cri-containerd.


http://blog.kubernetes.io/2017/11/containerd-container-runtime-options-kubernetes.html

https://github.com/kubernetes/community/blob/master/contributors/devel/container-runtime-interface.md

Also relevant to this discussion is the debug containers proposal. These are supposed to let you run a second container image in the same Pod space using a new kubectl debug command. Possibly the additional container can be run as a different user?

@whereisaaron your post is a great help. Thanks for writing that all out in detail.

@whereisaaron thanks for the details. If I'm understanding you, it seems like the debug proposal (if it goes through) might be the best place to put this at the end of the day. I'm not sure if it got approved to be worked on, but it basically allows you to attach a container of your choice to a pod in order to debug it (which sounds awesome) - but does that allow you to change your user within the original container? Also, does that mean I should wait or go ahead with a patch using cri-containerd?

It seems like one of the reasons this hasn't been implemented yet is that there are multiple repos with multiple groups working on different areas.

@johnjjung I believe debug containers have been approved to be implemented as an 'alpha' feature in Kubernetes 1.9 (turned off unless you explicitly enable the feature). I don't think they made it into 1.9 though. So probably not until 1.10 or later.

As I understand it, debug containers run as a process inside the target Pod 'being debugged', so they are in the same kernel security context as the Pod, they just have their own container image/file system. So you can debug any processes, and mount any volumes the Pod has. But since you are in the same security context, I wonder whether you are stuck with the same kernel-imposed limit on the uid you can use in the Pod or not? I'm not sure there, you'd have to ask the #sig-node people working on it.

Regarding extending CRI, I think you'd need a lot of support from #sig-node to do this, plus no objections from the various runtime projects, like cri-containerd, and cri-o, that might then have to implement support.

I've not had much time today, but for those running on GKE, I've made an ad-hoc kubectl plugin for exec'ing with the user [-u] flag:

https://github.com/jordanwilson230/kubectl-plugins

Feel free to modify/copy that plugin or all of them by running the install script. Just an ad-hoc solution until @johnjjung and others are able to take more on.

I have an alternative solution that is not specific to GCP. It does not use SSH at all and only requires a working kubectl.

https://github.com/mikelorant/kubectl-exec-user

Does kubectl-exec-user’s pod verify the person using it has access to the container being requested to shell into?

It falls back to what access the kubernetes API allows. It will only work if RBAC allows a container to mount a node's docker socket.

Consider my solution a creative way to break out of a container onto the node without requiring SSH. All the same security conditions need to be considered but at least it respects the kubernetes API server restrictions.

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale

/remove-lifecycle stale

This would be extremely welcome. It's not cool to be penalized for being security conscious and running all processes as non-root. (Rephrasing https://github.com/kubernetes/kubernetes/issues/30656#issuecomment-272055993)

/sig node

There seems to be a rebase-needing unfinished PR for this feature: https://github.com/kubernetes/kubernetes/pull/59092. Is there anyone who could pick it up and finalize? CC @louyihua

+1
(I run into the same issue as @SimenB - I'd like to install stuff for debugging purposes. By the way, thanks for the "use docker directly" hint.)

While we are waiting for this to be properly supported, an intermediate solution can be to run your docker CMD with su-exec (either in Dockerfile or in K8s manifest). su-exec weighs only 20k (on alpine) and this way your app will run unprivileged, while still having root in kubectl exec.
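As a rough illustration of that approach (the entrypoint script, image contents, and "appuser" name are assumptions, not from this thread): the image keeps root as its default user, so kubectl exec drops you into a root shell, while the entrypoint immediately drops privileges for the application itself:

```
#!/bin/sh
# entrypoint.sh (sketch): the container starts as root, but the app runs unprivileged.
# "appuser" must exist in the image; on Alpine, su-exec comes from `apk add su-exec`.
exec su-exec appuser "$@"
```

The obvious trade-off is that anyone permitted to kubectl exec into the pod then gets a root shell in the container, which is exactly the policy question discussed above.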

+1

I would also appreciate such a -u flag. +1.

Just an idea:

For example, something like a --container-type flag would be a big plus to enable passing any supported arguments directly to the underlying container implementation:

kubectl exec --container-type=docker -it -u 0 NAME

This would avoid having only a subset of the underlying functionality of the container runtime in kubectl. Furthermore it saves effort since there is no need to map and abstract the supported arguments from the kubelet layer all the way down to the container for every supported container type.

So in summary, without the --container-type flag only the abstracted arguments from kubectl can be used and the underlying container type is completely transparent. With the flag, container-specific arguments can be passed along. It's up to the user of kubectl whether they want to bind to a container type or not.

BTW: Thanks to @SimenB for the hint to ssh into the node and use Docker directly. That solved my problem temporarily. Using Minikube I was able to do the following to log-in as root:
minikube ssh "docker exec -it -u 0 <container-id> bash"
Maybe this could be of help to someone.

A workaround script that automates the unpleasant. SSH access to the node required.

Usage:

```
./shell-into-pod-as-root.sh podname [shell]
./shell-into-pod-as-root.sh podname
./shell-into-pod-as-root.sh podname sh
```

Enjoy!

```
#!/usr/bin/env bash
# Shell into a pod's container as root by ssh'ing to its node and using docker exec.

set -xe

# Grab the pod description once, then parse the node address and container ID out of it.
POD=$(kubectl describe pod "$1")
# The "Node:" line looks like "Node: <node-name>/<node-ip>"; take the IP to ssh to.
NODE=$(echo "$POD" | grep -m1 Node | awk -F'/' '{print $2}')
# The "Container ID:" line looks like "Container ID: docker://<id>"; take the ID.
CONTAINER=$(echo "$POD" | grep -m1 'Container ID' | awk -F 'docker://' '{print $2}')

# Default to bash unless a shell is given as the second argument.
CONTAINER_SHELL=${2:-bash}

set +e

ssh -t "$NODE" sudo docker exec --user 0 -it "$CONTAINER" "$CONTAINER_SHELL"

if [ "$?" -gt 0 ]; then
  set +x
  echo 'SSH into pod failed. If you see an error message similar to "executable file not found in $PATH", please try:'
  echo "$0 $1 sh"
fi
```

@Nowaker how do you handle namespaces?

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale

/remove-lifecycle stale

BTW: Thanks to @SimenB for the hint to ssh into the node and use Docker directly. That solved my problem temporarily. Using Minikube I was able to do the following to log-in as root:
minikube ssh "docker exec -it -u 0 <container-id> bash"
Maybe this could be of help to someone.

Yeah - it's trivial to just use the docker exec to do this - it's mostly about consistency - multi-user docker containers are a bit of a joke really - a legacy from converting a VM to a container.

I'm dealing with this with grafana at the moment - I suppose this will pass with time.

@bryanhuntesl There's discussion of workarounds above which don't require manually ssh'ing to a node. You can also try this plugin -- https://github.com/jordanwilson230/kubectl-plugins

What if you don't want to allow users to ssh into a node? Allowing users ssh access to a node, as well as allowing them to have access to docker, can be a security risk. Docker doesn't know anything about namespaces or k8s permissions. If a user can run docker exec, it can exec into pods of _any_ namespace.

SSH is not a proper solution, IMHO.

What if you don't want to allow users to ssh into a node? Allowing users ssh access to a node, as well as allowing them to have access to docker, can be a security risk. Docker doesn't know anything about namespaces or k8s permissions. If a user can run docker exec, it can exec into pods of _any_ namespace.

SSH is not a proper solution, IMHO.

I second that opinion - using an out of band mechanism to gain direct access is increasing the potential attack area.

What if you don't want to allow users to ssh into a node? Allowing users ssh access to a node, as well as allowing them to have access to docker, can be a security risk. Docker doesn't know anything about namespaces or k8s permissions. If a user can run docker exec, it can exec into pods of _any_ namespace.

SSH is not a proper solution, IMHO.

There are solutions that do not require SSH @gjcarneiro. Also, a user must first add their public SSH key in the Compute Metadata before they are allowed SSH access to a node (if on GCP) @bryanhuntesl.

@liggitt It's been three years since this topic started, any conclusions?

I am not sure if this solution has been mentioned before but what we did as a workaround is have all our containers include a script that'll log you in as the correct user. Plus a motd:

Dockerfile:

USER root
RUN echo "su -s /bin/bash www-data" >> /root/.bashrc
# this exit statement here is needed in order to exit from the new shell directly or else you need to type exit twice
RUN echo "exit" >> /root/.bashrc
# /var/www is www-data's home directory
COPY motd.sh /var/www/.bashrc

motd.sh:

RED='\033[0;31m'
YELLOW='\033[0;33m'

echo -e "${RED}"
echo "##################################################################"
echo "#        You've been automatically logged in as www-data.        #"
echo "##################################################################"
echo -e "${YELLOW} "
echo "If you want to login as root instead:"
echo -e "$(if [ "$KUBERNETES_PORT" ]; then echo 'kubectl'; else echo 'docker'; fi) exec -ti $(hostname) -- bash --noprofile -norc"

TEXT_RESET='\033[0m'

echo -e "${TEXT_RESET} "

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale

/remove-lifecycle stale

use the exec-as kubectl plugin:

kubectl krew install exec-as

As mentioned above, this really needs a KEP and discussion about the security implications. It's not that it's necessarily a bad idea; it just has a significant impact on the system and needs a design before we can start coding.

I’d be happy to help review and shepherd a KEP for this, but it definitely has some gotchas and may take a while.

@miracle2k - Have you tried su -m -l u22055? -m is supposed to preserve environment variables.

@miracle2k I tried this (trying to exec as root user), but got No passwd entry for user '0'

$ su -m -l 0
No passwd entry for user '0'

Hello. To solve this issue, I developed the "kpexec" CLI. Please give your feedback.

On the node that runs the pod:

docker exec -u 0 -it \
    `kubectl -n NAMESPACE get pod \
          -l label=value \
          -o jsonpath='{range .items[*].status.containerStatuses[*]}{.containerID}{"\n"}{end}' | cut -d/ -f3` \
sh

@cristichiru For most of the clusters in which I've operated there is no direct shell access to the underlying Node. I suspect that's often the case for others as well.

In those cases, it seems that the other options presented here, like kubectl plugins, might be the only way - assuming there is no access to the docker daemon either.

+1

Well the KEP template is here https://github.com/kubernetes/enhancements/tree/master/keps/NNNN-kep-template

I figured I'd see how much work it is to write one and... yeah, I'm not the person to write this. The template lost me at checklist item one, **Pick a hosting SIG.** Anyone more familiar with the process want to start the draft? I just want a place to stick my 👍 in support of the proposal as an active Kubernetes user.

This may sound flippant, but I see about a half dozen well coded / scripted / written workarounds to this issue, so clearly there _are_ people who are in a better position to draft proposed technical solutions than me.

I feel like Kubernetes became the new OpenStack where nothing can be achieved in a reasonable timeframe because of the PROCESS.

@VikParuchuri's original use case here is to be able to debug/troubleshoot containers as root, even though the container itself is running as an untrusted user. Good use case, because, if solved, it encourages us all to run containers as non-root users. 🎉

Before you prepare a KEP for docker exec have a quick check that k8s ephemeral debug containers don't address this use case for you.

docker exec --user is only one way to address that use case, and it relies on the docker runtime being used. As k8s moves to containerd, dockerd and friends are optional or not even installed any more, so it is possibly not a forward-looking option?

Another k8s-native way to address this use case is ephemeral debug containers. Say you have a container running as an untrusted user. Debug containers allow you to start a temporary container in the same process space as the target container, but running as root (or whoever). This approach has some significant advantages over the exec approaches, in particular you can bring any debug tooling and utilities you need with you in the image for the debug container. So instead of bloating your target container image with utils and editors etc. just in case you need to exec in (🐑 .., guilty!), you can instead have a nice big swiss army knife debug container image and keep your application images clean. You can use bash in your debug container when your target only has sh. You can even debug containers that have no shell to exec at all, like a single binary containers, or distroless containers.

E.g. use busybox to debug a container as root.

kubectl alpha debug -it ephemeral-demo --image=busybox --target=ephemeral-demo

I think this is arguably a better model, in that it treats the containers as isolated processes, that you 'attach' to for debugging, rather than like mini-VMs you shell into. The disadvantage is I don't think you can inspect the filesystem of the target, unless you can share an external mount or 'empty' mount. You share the process namespace with your target, so you can also access the target container's filesystem, via /proc/$pid/root.

Ephemeral debug containers have already navigated the PROCESS :-) and been implemented. These containers have been alpha since ~1.16, and the 1.18 kubectl includes the alpha debug command.

Thanks for the thoughtful reply @whereisaaron :) I think that captures things quite well.

I figured I'd see how much work it is to write one and... yeah, I'm not the person to write this. The template lost me at checklist item one, Pick a hosting SIG. Anyone more familiar with the process want to start the draft? I just want a place to stick my 👍 in support of the proposal as an active Kubernetes user.

KEPs can be quite daunting, but I want to provide a little context around them. Kubernetes itself is very large; potential changes have a very large blast radius, both for the contributor base and users. A new feature might seem easy to implement but has the potential to broadly impact both groups.

We delegate stewardship of parts of the code base to SIGs, and it is through the KEPs that one or more of the SIGs can come to consensus on a feature. Depending on what the feature does, it may go through an API review, be evaluated for scalability concerns, etc.

All this is to ensure that what is produced has the greatest chance of success and is developed in a way that the SIG(s) would be willing to support. If the original author(s) step away, the responsibility of maintaining it falls to the SIG. If, say, a feature was promoted to stable and then flagged for deprecation, it'd be a minimum of a year before it could be removed, following the deprecation policy.

If there's enough demand for a feature, usually someone that's more familiar with the KEP process will offer to help get it going and shepherd it along, but it still needs someone to drive it.

In any case, I hope that sheds at least a bit of light on why there is a process associated with getting a feature merged. :+1: If you have any questions, please feel free to reach out directly.

The disadvantage is I don't think you can inspect the filesystem of the target, unless you can share an external mount or 'empty' mount.

For me inspecting the filesystem as root, and running utilities that can interact with filesystem as root, is the number one reason of wanting to get support for the requested feature. In short, this suggestion does not solve my problem at all.

The disadvantage is I don't think you can inspect the filesystem of the target

I was wrong about that, because your injected debug container shares the process namespace with your target container, you can access the filesystem of any process in the target container from your debug container. And that would include both the container filesystems and any filesystems mounted into those containers.

Container filesystems are visible to other containers in the pod through the /proc/$pid/root link.

https://kubernetes.io/docs/tasks/configure-pod-container/share-process-namespace/#understanding-process-namespace-sharing
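A minimal sketch of that flow (the pod name, --target container name, and PID are illustrative; it assumes the EphemeralContainers feature gate is enabled and a kubectl with the alpha debug command):

```
# start a debug container (busybox runs as root by default) in the target pod's process namespace
kubectl alpha debug -it mypod --image=busybox --target=app

# inside the debug container: find the target's main process...
ps ax
# ...then browse the target container's filesystem through /proc
ls /proc/<pid>/root/
cat /proc/<pid>/root/etc/os-release
```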

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale

/remove-lifecycle stale

kubectl alpha debug -it ephemeral-demo --image=busybox --target=ephemeral-demo

error: ephemeral containers are disabled for this cluster

@whereisaaron It looks like most cloud providers do not support this, and for on prem we can just go to a node and docker exec into the container. So again, the usefulness seems quite limited.

Also, access via /proc/$pid/root is not what I'd like; I would like direct access, not access via a "side window". For example, running utils like apt/apk _in the container_ is not easy when the root filesystem is not where they expect it.

I had a similar problem: I needed to create some directories and links and add permissions for the non-root user on an official image deployed by an official helm chart (jenkins).

I was able to solve it by using the exec-as plugin.

With the planned Docker deprecation and subsequent removal, when will this be addressed? Ephemeral containers are still in alpha. What is the stable alternative without using Docker as the CRI?

Besides being alpha, ephemeral containers are a lot more complicated to use than a simple kubectl exec --user would be.

Another use case for this is manually executing scripts in containers. For example, NextCloud's occ maintenance script has to be run as www-data. There is no sudo or similar in the image, and the docs advise using docker exec -u 33 in a Docker environment.

You can solve the problem with nextcloud by running

su -s /bin/bash www-data

But this is not ideal.
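For the NextCloud case specifically, the same su trick can be driven in one shot from outside the pod. A hedged sketch (the pod name and occ path are illustrative, based on the official image layout):

```
# run occ as www-data without first opening an interactive root shell
kubectl exec nextcloud-0 -- su -s /bin/bash -c "php /var/www/html/occ status" www-data
```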
