
unexpected file permission error in container #783

Closed
astraw opened this issue Jun 1, 2013 · 80 comments · Fixed by #11799
Labels
area/docs, area/storage/aufs, exp/beginner, kind/bug, kind/enhancement

Comments

@astraw

astraw commented Jun 1, 2013

I've narrowed down a problem in an originally more involved setup. Consider the following Dockerfile:

# Dockerfile
FROM      ubuntu:12.10

RUN apt-get install -y puppetmaster sudo

RUN rm -rf /etc/puppet
ADD puppet-config /etc/puppet
RUN chown -R puppet.puppet /etc/puppet
RUN chmod 755 /etc/puppet

When run with the following:

# make a dummy directory
mkdir puppet-config
echo "hi" >puppet-config/hello.txt

docker build -t dockbug .

echo "note the directory is owned by puppet with full read/write/execute privs"
docker run dockbug ls -al /etc/puppet

echo "but we get a permission error here"
docker run dockbug sudo -u puppet ls -al /etc/puppet

I see an unexpected permission error in the final command. This is with Docker 0.3.4 from the PPA on Ubuntu 13.04 with kernel 3.8.0-19-generic. Interestingly, if I remove the line "RUN rm -rf /etc/puppet" from the Dockerfile, I no longer see the permission error.

@ghost ghost assigned jpetazzo Jun 1, 2013
@creack
Contributor

creack commented Jun 1, 2013

@jpetazzo can you take a look at this one?

@jpetazzo
Contributor

This is a kind of bug in AUFS. When a directory has a given permission mask in a lower layer, the upper layers cannot have a broader mask. Well, they can, but the more restrictive permission mask will be enforced anyway.

The rationale is the following:

  • suppose that you have directory /secret with permissions 0700, containing file /secret/key.pem
  • in an upper layer, you give /secret permissions 0755
  • now /secret/key.pem could become reachable

Multiple behaviors could be considered "acceptable" in this scenario:

  • give access to the file anyway (but this option was vetoed because it was deemed insecure)
  • prevent access
  • place a kind of "tombstone" or "opaque whiteout" for the whole directory so that the directory below becomes "opaqued" or "whited out", and the new one takes precedence

My understanding is that the last solution should be used, but for some reason, AUFS doesn't behave correctly. It might be because the directory exists in a lower layer, then doesn't exist anymore (because of the rm), then exists again (because of the ADD).

I'm willing to take a guess: the logic that decides whether or not to do a whiteout is not exactly the same as the one looking up permissions; the former stops when /etc/puppet is marked as non-existent in the middle layer, while the latter goes bottom-up.

Anyway!

As a workaround, you can rm /etc/puppet/* instead of rm /etc/puppet, and that will do the trick.
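For reference, here is a minimal sketch of that workaround applied to the Dockerfile from the original report: the contents of /etc/puppet are cleared, but the directory itself is kept, so no whiteout ever needs to be placed on it in an intermediate layer.

# Dockerfile (workaround sketch based on the original report)
FROM      ubuntu:12.10

RUN apt-get install -y puppetmaster sudo

# Clear the contents but keep the directory itself, so AUFS never
# needs to white out /etc/puppet in an intermediate layer.
RUN rm -rf /etc/puppet/*
ADD puppet-config /etc/puppet
RUN chown -R puppet.puppet /etc/puppet
RUN chmod 755 /etc/puppet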

@shykes
Contributor

shykes commented Aug 13, 2013

Labelling as aufs-related.

@shykes
Contributor

shykes commented Aug 13, 2013

Since this is documented aufs behavior, we can 1) close this as wontfix, 2) close + document the behavior in the docker docs, or 3) other?

@shykes
Contributor

shykes commented Aug 13, 2013

@jpetazzo what do you think?

@jpetazzo
Contributor

Hmmm... What about a "KNOWN BUGS AND ISSUES" section in the documentation? /cc @metalivedev

@metalivedev
Contributor

Ok, some discussion here: https://botbot.me/freenode/docker-dev/msg/6889335/
I'll look at how to provide the known issues in context, which implies that the docs need some place to describe the file system in use.

@ghost ghost assigned metalivedev Oct 14, 2013
@xaviershay

This bit me.

@metalivedev
Contributor

@xaviershay Could you give me the steps to reproduce the problem? I haven't been able to see an error with @astraw 's steps.

vagrant@precise64:~/src/783$ docker version
Client version: 0.6.4
Go version (client): go1.1.2
Git commit (client): 2f74b1c
Server version: 0.6.4
Git commit (server): 2f74b1c
Go version (server): go1.1.2
Last stable version: 0.6.4

vagrant@precise64:~/src/783$ docker build -t dockbug .
Uploading context 10240 bytes
Step 1 : FROM ubuntu:12.10
Pulling repository ubuntu
...
Step 6 : RUN chmod 755 /etc/puppet
 ---> Running in cf062fd724fb
 ---> 17620a2e9ea7
Successfully built 17620a2e9ea7

vagrant@precise64:~/src/783$ echo "note the directory is owned by puppet with full read/write/execute privs"
note the directory is owned by puppet with full read/write/execute privs

vagrant@precise64:~/src/783$ docker run dockbug ls -al /etc/puppet
total 12
drwxr-xr-x  2 puppet puppet 4096 Oct 19 00:18 .
drwxr-xr-x 78 root   root   4096 Oct 19 00:19 ..
-rw-rw-r--  1 puppet puppet    3 Oct 19 00:18 hello.txt

vagrant@precise64:~/src/783$ echo "but we get a permission error here"
but we get a permission error here

vagrant@precise64:~/src/783$ docker run dockbug sudo -u puppet ls -al /etc/puppet
total 12
drwxr-xr-x  2 puppet puppet 4096 Oct 19 00:18 .
drwxr-xr-x 78 root   root   4096 Oct 19 00:19 ..
-rw-rw-r--  1 puppet puppet    3 Oct 19 00:18 hello.txt

# HMM, no permission error

@xaviershay

Having trouble replicating again.

The shape of my setup was building an image on top of itself (which is potentially a terrible idea):

# Dockerfile.bootstrap
FROM ubuntu:12.10
# Dockerfile
FROM thisimage

RUN rm -Rf /app
RUN mkdir /app
docker build -t thisimage - < Dockerfile.bootstrap
docker build -t thisimage - < Dockerfile
docker build -t thisimage - < Dockerfile # Do this a couple of times.

@tianon
Member

tianon commented Oct 19, 2013

That looks to me like you'd run into the AUFS 42 layer limit pretty quickly.
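(If you want a rough idea of how close an image is to that limit, counting its layers with docker history works; a sketch, assuming the thisimage tag from the snippet above:)

docker history thisimage | tail -n +2 | wc -l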

@xaviershay

Yeah that was the next bug I hit :)

Have since redone my process to always build off a single base image.

@jpetazzo
Contributor

Since Docker is moving away from AUFS to use DM in the next major release, this bug won't appear anymore.
Therefore, I'm closing it.
Feel free to reopen if you think it should still be open for a specific reason, though!

@tianon
Member

tianon commented Nov 20, 2013

We're still going to have AUFS in 0.7, so I'm reopening this. :)

@dergachev

Just got bit by this too. We've got sequential builds, and at several points we need to obliterate a directory that was ADDed in a previous build step. I was surprised that the following command silently fails:

RUN rm -Rf /var/shared/sites/coursecal

I'm not 100% sure why, but the workaround suggested by @jpetazzo above seems to work:

RUN rm -Rf /var/shared/sites/coursecal/* /var/shared/sites/coursecal/.*git

The only reason we have to do the "rm" in the first place is that the following line really does a tar x and leaves existing files around, as documented here:

ADD . /var/shared/sites/coursecal
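Putting those pieces together, the working sequence looks roughly like this (a sketch using the paths from this comment; the .* glob may need adjusting for other hidden files):

# Clear the directory's contents (including the .git directory) instead of
# removing the directory itself, which silently fails under AUFS here.
RUN rm -Rf /var/shared/sites/coursecal/* /var/shared/sites/coursecal/.*git
# ADD extracts like tar x and leaves pre-existing files in place,
# hence the cleanup step above.
ADD . /var/shared/sites/coursecal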

@jameshfisher
Contributor

The above discussion from several months ago at version 0.7 suggests that DeviceMapper became the recommended backend. But now we're on 0.9.1 and AUFS still seems to be the default backend. For end-users like me, default=recommended. This means docker is inconsistent.

What should I be using? Also, where is the documentation on AUFS vs DeviceMapper vs whatever else? I don't see any.

@jameshfisher
Contributor

Also @jpetazzo

Docker is moving away from AUFS to use DM in the next major release

What is the next major release? Did you mean 1.0?

@shykes
Contributor

shykes commented Mar 26, 2014

James, aufs is the default if your system supports it. This is simply because it is the most battle-tested storage driver and has fewer moving parts in general.

Devmapper is the default for non-aufs systems, which is the majority of systems in the wild since aufs is not part of the upstream kernel.

When in doubt, we recommend using the default, keeping in mind that the default might differ from system to system.
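(A quick way to see which backend a given daemon picked is docker info; just a sketch, and the exact output varies by version:)

docker info | grep -i driver
# e.g. "Storage Driver: aufs" ("Driver: aufs" on older releases)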


@shykes
Contributor

shykes commented Mar 26, 2014

As for Jerome's statement, it is no longer accurate. Our initial plan was to phase out aufs completely because devmapper appeared to be the best option in 100% of cases. That turned out not to be true: there are tradeoffs depending on your situation, so we are continuing to maintain both.


@jameshfisher
Contributor

@shykes okay, what are the tradeoffs? Are the issues with devmapper, say, performance tradeoffs, or outright bugs, like this one with aufs?

If there are tradeoffs and the user is expected to make that decision, then those tradeoffs should be documented.

@joshk0

joshk0 commented Mar 27, 2014

Frankly, I believe that I should be allowed to shoot myself in the foot with the aufs thing. This isn't a funky implementation bug; it was a conscious decision. Does anyone have a patch for aufs that can toggle this behavior?

@shykes
Contributor

shykes commented Mar 31, 2014

@jameshfisher you are right, but let's discuss this somewhere else; we are veering off-topic for this issue.

@lox

lox commented May 16, 2014

Any progress on this? We are consistently seeing this behaviour with our mysql /var/lib/mysql directories under docker 0.11.1, and periodically in 0.9.1.

@thaJeztah
Member

@jberkus thanks for the info. I was able to reproduce it on Ubuntu 14.04 on kernel 3.19.something. It would be nice to know which version of aufs it's solved in; possibly the Docker Mac team can bump the version to include the patch that's needed.

@stuartnelson3

I can confirm encountering this issue on the Docker OS X beta-15 (current version). I have switched back to docker-machine in the meantime.

@kuznero

kuznero commented Jul 20, 2016

rm /etc/puppet/* didn't work for me; instead, I found that deleting individual files in the same layer works perfectly well:

find /etc/puppet -type f | xargs -L1 rm -f

Here is a link to my Gist.
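(In a Dockerfile that approach would look something like the line below; just a sketch, with -print0/-0 swapped in as a precaution against unusual filenames:)

RUN find /etc/puppet -type f -print0 | xargs -0 rm -f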

@glasser
Contributor

glasser commented Jul 21, 2016

We are seeing a very similar issue with Overlay.

Our current guess is that it is caused by the bug fixed in kernel 4.4.6 by this commit: https://lkml.org/lkml/2016/1/31/82

We have not yet managed to test with 4.4.6.

guillon added a commit to guillon/docker-lava-server that referenced this issue Aug 18, 2016
PostgreSQL may fail at startup with a permission-denied error on
/etc/ssl/private/ssl-cert-snakeoil.key.

This is due to a limitation of the AUFS backend, which can be worked
around by recreating the /etc/ssl/private directory.

Ref to gavodachs/docker-dachs#1 for
the actual bug and moby/moby#783 for
more details on the AUFS issue.

Change-Id: I0afe01e880f4ace4a38d3751a8c621c97d97d658
guillon added a commit to guillon/docker-lava-server that referenced this issue Aug 18, 2016
Reorder layer commands so that ssl-cert is installed in the first layer,
in order to avoid a permission-denied error when starting postgresql on
/etc/ssl/private/ssl-cert-snakeoil.key.

This is due to a limitation of the AUFS docker backend.
Ref to gavodachs/docker-dachs#1 for
an instance of the bug and to moby/moby#783
for more details on the AUFS issue.

Change-Id: I0afe01e880f4ace4a38d3751a8c621c97d97d658
karlkfi pushed a commit to dcos/dcos-website that referenced this issue Nov 2, 2016
hennevogel added a commit to hennevogel/open-build-service that referenced this issue Oct 4, 2017
On some distributions bundler would fail when writing to
the rubygem cache. This seems to be the aufs bug mentioned in
moby/moby#783.

Fixes openSUSE#3940
drosofff added a commit to ARTbio/GalaxyKickStart that referenced this issue Dec 15, 2018
@LittleControl

rm /etc/puppet/* didn't work for me; instead, I found that deleting individual files in the same layer works perfectly well:

find /etc/puppet -type f | xargs -L1 rm -f

Here is a link to my Gist.

Thanks, it works for me!
