Description
Is this a request for help? (If yes, you should use our troubleshooting guide and community support channels, see http://kubernetes.io/docs/troubleshooting/.): No
What keywords did you search in Kubernetes issues before filing this one? (If you have found any duplicates, you should instead reply there.): command not found configmap kubernetes
Is this a BUG REPORT or FEATURE REQUEST? (choose one): BUG
Kubernetes version (use kubectl version):
Client Version: v1.6.1 GitCommit:"b0b7a323cc5a4a2019b2e9520c21c7830b7f708e"
Server Version: v1.6.0 GitCommit:"fff5156092b56e6bd60fff75aad4dc9de6b6ef37"
Environment:
- Cloud provider or hardware configuration:
- OS (e.g. from /etc/os-release): host is ubuntu 16.04
- Kernel (e.g. uname -a): host is Linux dev1 4.4.0-72-generic #93-Ubuntu SMP Fri Mar 31 14:07:41 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux
- Install tools:
- Others: Running from within minikube version: v0.18.0
What happened:
When I try to create a deployment with a ConfigMap file that is mounted into the same directory as the entrypoint, the container fails to start with the following error:
"Error response from daemon: Container command '/app/app.sh' not found or does not exist."
The pod spec includes a ConfigMap which is mounted into the same directory as the entrypoint.
It seems the entrypoint script is lost after mounting the ConfigMap volume in the same directory.
If I mount the ConfigMap file into a subdirectory, everything works as expected.
What you expected to happen:
I expected the ConfigMap file to be created in the directory without affecting the existing directory content, which in this case contains an entrypoint script.
How to reproduce it (as minimally and precisely as possible):
Dockerfile - note the entrypoint
FROM busybox:latest
RUN mkdir /app
COPY app.sh /app
ENTRYPOINT ["/app/app.sh"]
Entry point script - infinite loop
#!/bin/sh
seq=1
while [[ true ]]; do
echo "${seq} $(date) working"
sleep .5s
let seq=$((seq + 1))
done
k8s ConfigMap and Deployment file
apiVersion: v1
kind: ConfigMap
metadata:
  labels:
    product: k8s-demo
  name: demo
data:
  settings.json: |
    {
      "store": {
        "type": "InMemory"
      }
    }
---
apiVersion: apps/v1beta1
kind: Deployment
metadata:
  labels:
    product: k8s-demo
  name: demo
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: demo
        product: k8s-demo
    spec:
      containers:
      - name: demo
        image: pmcgrath/shellloop:1
        imagePullPolicy: Always
        volumeMounts:
        - name: demo-config
          mountPath: /app
      volumes:
      - name: demo-config
        configMap:
          name: demo
          items:
          - key: settings.json
            path: settings.json
When I run kubectl apply -f k8s.yaml and look at the pod, I can see the following error:
rpc error: code = 2 desc = failed to start container "f9e0112c80ebba568d4b508f99ffb053bf1ae5a4f095ce7f45bff5f38900b617": Error response from daemon: Container command '/app/app.sh' not found or does not exist.
Anything else we need to know:
If I change the mountPath for the volume to any other directory it works as expected
I did test this directly with docker on my host (17.03.0-ce) and it worked as expected
touch settings.json
docker container run -ti -v $(pwd)/settings.json:/app/settings.json pmcgrath/shellloop:1
Activity
zhouhaibing089 commented on Apr 25, 2017
@pmcgrath
Checkout here.
It seems I understand your issue. I had the same question before, but there is actually an answer in your situation.
To brief your case: you have a configmap (settings.json: blahblah) and want to mount it into the folder /app. Then below is what you need to know:
- The volume is mounted at the mountPath, so in your case the /app folder will only contain settings.json.
- Mount the single file instead, with mountPath: /app/settings.json and a matching subPath; only that way, the original content in the /app folder won't be affected.
This is something that you will get eventually:
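A rough sketch of what the container section from the report would look like with that change; the subPath field is the key addition (same demo ConfigMap and image as above):
containers:
- name: demo
  image: pmcgrath/shellloop:1
  volumeMounts:
  - name: demo-config
    # Mount only the single file; /app/app.sh from the image stays visible.
    mountPath: /app/settings.json
    subPath: settings.json
volumes:
- name: demo-config
  configMap:
    name: demo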
pmcgrath commented on Apr 26, 2017
@zhouhaibing089
Thanks for the reply; it works based on your suggestion. I appreciate the explanation.
I am happy to close this issue
Pat
agilgur5 commented on Nov 2, 2017
For reference, the original mention of this solution seems to be here: #23748 (comment)
It looks like the documentation for this is missing, which makes this case fairly confusing/misleading, and the projection docs seem to make it more misleading as well. Not sure if it's missing because auto updates apparently don't work, as per that issue.
emwalker commented on Dec 28, 2017
The requirement for the file name to be specified both under mountPath and subPath is counterintuitive.
vishaltelangre commented on Dec 29, 2017
The solution provided by @zhouhaibing089 works, but the content of the file mounted at subPath doesn't update if we edit the corresponding ConfigMap.
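A possible workaround hinted at in the original report is to mount the whole ConfigMap into a subdirectory instead of using subPath, since whole-directory ConfigMap mounts do get refreshed when the ConfigMap changes. A rough sketch, assuming the app can be pointed at /app/config/settings.json (the /app/config path is made up here):
volumeMounts:
- name: demo-config
  # Assumed subdirectory; directory mounts pick up ConfigMap updates, subPath mounts do not.
  mountPath: /app/config
volumes:
- name: demo-config
  configMap:
    name: demo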
leebenson commented on Jul 31, 2018
IMO, this isn't really solved. There should be an option to append each key rather than overwrite.
Something like:
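Purely as an illustration of the proposal (append is not a real VolumeMount field, as the replies below confirm; the nginx-style paths and ConfigMap name are made up):
volumeMounts:
- name: extra-conf
  mountPath: /etc/nginx/conf.d
  # Hypothetical field: merge the ConfigMap keys into the existing image directory
  # (keeping default.conf) instead of replacing its contents.
  append: true
volumes:
- name: extra-conf
  configMap:
    name: nginx-extra-conf   # hypothetical ConfigMap holding example1.conf, example2.conf, ...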
So that it keeps the existing default.conf and any other artefacts of the Docker image, but augments with example*.conf.
Repeating the same info in subPath is just icky.
mehemken commented on Sep 13, 2018
+1 on @leebenson's append: true option.
val1715 commented on Mar 12, 2019
Regarding @leebenson's answer: can anyone explain where the append: true option is coming from?? It does not work for me, I got:
Also, it is not present in the API docs for volume mount:
https://k8smeetup.github.io/docs/api-reference/v1.9/#volumemount-v1-core
Zvikan commented on Mar 18, 2019
It doesn't exist, it's a suggestion. A good one for the use case.
fulopbede commented on Apr 24, 2019
I tried to use this method, but I got a "Read-only file system" error when I applied the StatefulSet. Does anyone know how to fix that?
(I'm overwriting an existing file that contains settings for Elasticsearch; the actual error message is:
/usr/share/elasticsearch/bin/run.sh: line 28: ./config/elasticsearch.yml: Read-only file system
)