Copying files from Docker container to host


I'm thinking of using docker to build my dependencies on a CI server, so that I don't have to install all the runtimes and libraries on the agents themselves. To achieve this I would need to copy the build artifacts that are built inside the container back into the host.



Is that possible?




12 Answers



In order to copy a file from a container to the host, you can use the command


docker cp <containerId>:/file/path/within/container /host/path/target



Here's an example:


[jalal@goku scratch]$ sudo docker cp goofy_roentgen:/out_read.jpg .



Here goofy_roentgen is the name I got from the following command:


[jalal@goku scratch]$ sudo docker ps
[sudo] password for jalal:
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
1b4ad9311e93 bamos/openface "/bin/bash" 33 minutes ago Up 33 minutes 0.0.0.0:8000->8000/tcp, 0.0.0.0:9000->9000/tcp goofy_roentgen





I've noticed that when I do this, the file isn't the same as the one in the running container. It's an older version of the file. Why is that?
– orodbhen
Feb 27 '15 at 19:43





Here's a handy way to get at your latest container if you're simply using docker for a temp Linux environment: docker ps -alq.
– Josh Habdas
Jun 3 '15 at 15:29
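

Combining that tip with docker cp gives a one-liner for grabbing a file out of the most recently created container (a sketch; /out_read.jpg is just the example file from the answer above):

```shell
# "docker ps -alq" prints only the ID of the most recently
# created container, so it can feed docker cp directly.
docker cp "$(docker ps -alq)":/out_read.jpg .
```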







this cp command works as-is for copying directory trees as well (not just a single file).
– ecoe
Dec 30 '15 at 18:45





In newer versions of docker you can copy bidirectionally (host to container or container to host) with docker cp ...
– Freedom_Ben
Jun 18 '16 at 21:01








I needed docker cp -L to copy symlinks
– Harrison Powers
Jul 26 '16 at 19:07





Mount a "volume" and copy the artifacts into there:


mkdir artifacts
docker run -i -v ${PWD}/artifacts:/artifacts ubuntu:14.04 sh << COMMANDS
# ... build software here ...
cp <artifact> /artifacts
# ... copy more artifacts into /artifacts ...
COMMANDS



Then, when the build finishes and the container is no longer running, the artifacts have already been copied into the artifacts directory on the host.





EDIT:



CAVEAT: When you do this, you may run into problems when the user ID inside the container does not match the user ID of the current user on the host. That is, the files in /artifacts will show up as owned by the UID of the user used inside the docker container. A way around this is to run the container with the calling user's UID:




docker run -i -v ${PWD}:/working_dir -w /working_dir -u $(id -u) \
ubuntu:14.04 sh << COMMANDS
# Since $(id -u) owns /working_dir, you should be okay running commands here
# and having them work. Then copy stuff into /working_dir/artifacts .
COMMANDS





Actually you can use chown command to match user id and group id on the host machine.
– Dimchansky
Mar 30 '15 at 15:21








How would this work using docker-compose?
– Frondor
Jun 19 at 11:31







@Frondor See volume config reference docs.docker.com/compose/compose-file/…
– djhaskin987
Jun 19 at 16:08





Already did, and that won't work. Once the container has copied files to the volume for the first time, the volume is no longer empty on later runs, so the files are not overridden by the newer ones. The container gives priority to the volume's files (the ones copied the first time you mounted the container image).
– Frondor
Jun 19 at 23:46





sounds like something that could be its own SO question @Frondor
– djhaskin987
Jun 20 at 20:10



Mount a volume, copy the artifacts, adjust owner id and group id:


mkdir artifacts
docker run -i --rm -v ${PWD}/artifacts:/mnt/artifacts centos:6 /bin/bash << COMMANDS
ls -la > /mnt/artifacts/ls.txt
echo Changing owner to $(id -u):$(id -g)
chown -R $(id -u):$(id -g) /mnt/artifacts
COMMANDS



tldr;


$ docker run --rm -iv${PWD}:/host-volume my-image sh -s <<EOF
chown $(id -u):$(id -g) my-artifact.tar.xz
cp -a my-artifact.tar.xz /host-volume
EOF



Longer...



docker run with a host volume, chown the artifact, cp the artifact to the host volume:




$ docker build -t my-image - <<EOF
> FROM busybox
> WORKDIR /workdir
> RUN touch foo.txt bar.txt qux.txt
> EOF
Sending build context to Docker daemon 2.048kB
Step 1/3 : FROM busybox
---> 00f017a8c2a6
Step 2/3 : WORKDIR /workdir
---> Using cache
---> 36151d97f2c9
Step 3/3 : RUN touch foo.txt bar.txt qux.txt
---> Running in a657ed4f5cab
---> 4dd197569e44
Removing intermediate container a657ed4f5cab
Successfully built 4dd197569e44

$ docker run --rm -iv${PWD}:/host-volume my-image sh -s <<EOF
chown -v $(id -u):$(id -g) *.txt
cp -va *.txt /host-volume
EOF
changed ownership of '/host-volume/bar.txt' to 10335:11111
changed ownership of '/host-volume/qux.txt' to 10335:11111
changed ownership of '/host-volume/foo.txt' to 10335:11111
'bar.txt' -> '/host-volume/bar.txt'
'foo.txt' -> '/host-volume/foo.txt'
'qux.txt' -> '/host-volume/qux.txt'

$ ls -n
total 0
-rw-r--r-- 1 10335 11111 0 May 7 18:22 bar.txt
-rw-r--r-- 1 10335 11111 0 May 7 18:22 foo.txt
-rw-r--r-- 1 10335 11111 0 May 7 18:22 qux.txt



This trick works because the chown invocation within the heredoc takes its $(id -u):$(id -g) values from outside the running container; i.e., from the docker host.




The benefits over docker cp are:

- you don't need to name the container up front with docker run --name just so you can refer to it later
- you don't have to clean up afterwards with docker container rm



I am posting this for anyone that is using Docker for Mac.
This is what worked for me:


$ mkdir mybackup # local directory on Mac

$ docker run --rm --volumes-from <containerid> \
  -v `pwd`/mybackup:/backup \
  busybox \
  cp /data/mydata.txt /backup



Note that when I mount using -v, the backup directory is automatically created.





I hope this is useful to someone someday. :)





If you use docker-compose, volumes-from is deprecated in version 3 and later.
– mulg0r
Apr 26 at 9:11



If you don't have a running container, just an image, and assuming you want to copy just a text file, you could do something like this:


docker run the-image cat path/to/container/file.txt > path/to/host/file.txt
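

A variation that also works for binary files (and preserves permissions) is to create a stopped container from the image and docker cp out of it; a sketch, where the-image and the paths are the placeholders from above:

```shell
# docker create makes a container from the image without running it,
# docker cp pulls the file out, and docker rm cleans up afterwards.
id=$(docker create the-image)
docker cp "$id":path/to/container/file.txt path/to/host/file.txt
docker rm -v "$id"
```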



As a more general solution, there's a CloudBees plugin for Jenkins to build inside a Docker container. You can select an image to use from a Docker registry or define a Dockerfile to build and use.



It'll mount the workspace into the container as a volume (with appropriate user), set it as your working directory, do whatever commands you request (inside the container).
You can also use the docker-workflow plugin (if you prefer code over UI) to do this, with the image.inside() {} command.



Basically all of this, baked into your CI/CD server and then some.



Most of the answers do not indicate that the container must be running before docker cp will work:




docker build -t IMAGE_TAG .
docker run -d IMAGE_TAG
CONTAINER_ID=$(docker ps -alq)
# If you do not know the exact file name, you'll need to run "ls"
# FILE=$(docker exec $CONTAINER_ID sh -c "ls /path/*.zip")
docker cp $CONTAINER_ID:/path/to/file .
docker stop $CONTAINER_ID



If you just want to pull a file from an image (instead of a running container) you can do this:



docker run --rm <image> cat <source> > <local_dest>





This will bring up the container, write the new file, then remove the container. One drawback, however, is that the file permissions and modified date will not be preserved.
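

If preserving permissions and timestamps matters, a tar pipe avoids that drawback; a sketch, where my-image and /path/to/dir are hypothetical placeholders standing in for your image and source path:

```shell
# Stream the files out as a tar archive so mode and mtime survive,
# then unpack into the current directory on the host.
docker run --rm my-image tar -cC /path/to dir | tar -xC .
```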


docker run -dit --rm IMAGE
docker cp CONTAINER:SRC_PATH DEST_PATH



https://docs.docker.com/engine/reference/commandline/run/
https://docs.docker.com/engine/reference/commandline/cp/



Create a path on the host where you want to copy the files to, then use:


docker run -d -v hostpath:containerpath image



Create a data directory on the host system (outside the container) and mount this to a directory visible from inside the container. This places the files in a known location on the host system, and makes it easy for tools and applications on the host system to access the files.


docker run -d -v /path/to/Local_host_dir:/path/to/docker_dir docker_image:tag





That lets you inject a directory and its contents from the host into the container. It doesn't let you copy files from the container back out to the host.
– BMitch
May 16 '17 at 16:31





It does if the host folder has very wide permissions?
– giorgiosironi
Dec 19 '17 at 14:42





