No such image: gcr.io/google_containers/redis #6888

Description

BlueShells (Contributor)

I followed "https://github.com/GoogleCloudPlatform/kubernetes/blob/master/examples/guestbook/README.md",

but after I ran "kubectl create -f examples/guestbook/redis-master-controller.json":

redis-master-controller-e0zho redis-master gcr.io/google_containers/redis fed-node/ app=redis,name=redis-master Pending 6 minutes

the pod is always Pending. I checked the log on the node; here are the errors. What did I miss when building the example? Thanks.

Apr 16 10:35:41 fedora docker: time="2015-04-16T10:35:41+08:00" level="info" msg="+job image_inspect(gcr.io/google_containers/redis)"
Apr 16 10:35:41 fedora docker: No such image: gcr.io/google_containers/redis
Apr 16 10:35:41 fedora docker: time="2015-04-16T10:35:41+08:00" level="info" msg="-job image_inspect(gcr.io/google_containers/redis) = ERR (1)"
Apr 16 10:35:41 fedora docker: time="2015-04-16T10:35:41+08:00" level="error" msg="Handler for GET /images/{name:.*}/json returned error: No such image: gcr.io/google_containers/redis"
Apr 16 10:35:41 fedora docker: time="2015-04-16T10:35:41+08:00" level="error" msg="HTTP Error: statusCode=404 No such image: gcr.io/google_containers/redis"
Apr 16 10:35:41 fedora kubelet: E0416 10:35:41.606815 7380 kubelet.go:1990] Cannot get host IP: Host IP unknown; known addresses: []
Apr 16 10:35:41 fedora kubelet: E0416 10:35:41.606867 7380 pod_workers.go:103] Error syncing pod cc1f1c78-e3e0-11e4-b8f6-fa163ebba90c, skipping: image pull failed for gcr.io/google_containers/pause:0.8.0, this may be because there are no credentials on this request. details: (API error (500): Invalid registry endpoint https://gcr.io/v1/: Get https://gcr.io/v1/_ping: dial tcp 74.125.203.82:443: i/o timeout. If this private registry supports only HTTP or HTTPS with an unknown CA certificate, please add --insecure-registry gcr.io to the daemon's arguments. In the case of HTTPS, if you have access to the registry's CA certificate, no need for the flag; simply place the CA certificate at /etc/docker/certs.d/gcr.io/ca.crt
Apr 16 10:35:41 fedora kubelet: )
Apr 16 10:35:41 fedora kubelet: I0416 10:35:41.606966 7380 event.go:200] Event(api.ObjectReference{Kind:"Pod", Namespace:"default", Name:"redis-master-controller-e0zho", UID:"cc1f1c78-e3e0-11e4-b8f6-fa163ebba90c", APIVersion:"v1beta3", ResourceVersion:"13546", FieldPath:"implicitly required container POD"}): reason: 'failed' Failed to pull image "gcr.io/google_containers/pause:0.8.0": image pull failed for gcr.io/google_containers/pause:0.8.0, this may be because there are no credentials on this request. details: (API error (500): Invalid registry endpoint https://gcr.io/v1/: Get https://gcr.io/v1/_ping: dial tcp 74.125.203.82:443: i/o timeout. If this private registry supports only HTTP or HTTPS with an unknown CA certificate, please add --insecure-registry gcr.io to the daemon's arguments. In the case of HTTPS, if you have access to the registry's CA certificate, no need for the flag; simply place the CA certificate at /etc/docker/certs.d/gcr.io/ca.crt
Apr 16 10:35:41 fedora kubelet: )

Activity

BlueShells (Contributor, Author) commented on Apr 16, 2015

It seems that I can't ping gcr.io. How can I solve this problem?

erictune (Contributor) commented on Apr 16, 2015

When I run ping gcr.io on my desktop (non-Kubernetes-node) machine, I get a response. If you don't, then you would need to figure out why -- I doubt I could help with that. If you can ping it from your desktop but not from the Kubernetes node, then I may be able to help debug.

self-assigned this on Apr 16, 2015
ddysher (Contributor) commented on Apr 17, 2015

@BlueShells is blocked by the GFW; you need to find a way out :)

BlueShells (Contributor, Author) commented on Apr 17, 2015

@ddysher @erictune Thanks, guys. I just installed Kubernetes on other servers outside the GFW, and it works now. If you have a method to solve the GFW problem, even better; if not, that's all right. Thanks all the same.

ddysher (Contributor) commented on Apr 17, 2015

Thanks for confirming this. I think we can close the issue now.

vmarmol (Contributor) commented on Apr 21, 2015

Would it be useful to maintain the images on Docker Hub as well for cases like these? Though I imagine it's possible that Docker Hub could get blocked too, and then we'd be out of luck :)

tobegit3hub commented on May 29, 2015

Moving to Docker Hub would be very helpful. Please do it 😭

ReSearchITEng commented on Jul 13, 2015

Please move to Docker Hub...
Especially on intranets, people get:
2700 event.go:203] Event(api.ObjectReference{Kind:"Pod", Namespace:"demo11", Name:"webserver-controller-ychel", UID:"d310777e-2958-11e5-be96-4437e6a56f8a", APIVersion:"v1beta3", ResourceVersion:"274474", FieldPath:""}): reason: 'failedSync' Error syncing pod, skipping: image pull failed for gcr.io/google_containers/pause:0.8.0, this may be because there are no credentials on this request. details: (Error pulling image (0.8.0) from gcr.io/google_containers/pause, Get https://storage.googleapis.com/artifacts.google-containers.appspot.com/containers/images/2c40b0526b6358710fd09e7b8c022429268cc61703b4777e528ac9d469a07ca1/ancestry: Forbidden)

Any work-around (besides moving the box to another network)?

[12 remaining items not shown]

laugimethods commented on Jun 15, 2016

@dims: Yes, the very same CLI... BTW, I'm running the beta version of native Docker on Mac.

  FirstSeen LastSeen    Count   From            SubobjectPath           Type        Reason      Message
  --------- --------    -----   ----            -------------           --------    ------      -------
  18m       18m     1   {default-scheduler }                    Normal      Scheduled   Successfully assigned spark-master-controller-vpu57 to 192.168.64.2
  18m       3m      6   {kubelet 192.168.64.2}  spec.containers{spark-master}   Normal      Pulling     pulling image "gcr.io/google_containers/spark:1.5.2_v1"
  16m       1m      6   {kubelet 192.168.64.2}  spec.containers{spark-master}   Warning     Failed      Failed to pull image "gcr.io/google_containers/spark:1.5.2_v1": image pull failed for gcr.io/google_containers/spark:1.5.2_v1, this may be because there are no credentials on this request.  details: (API error (500): unable to ping registry endpoint https://gcr.io/v0/
v2 ping attempt failed with error: Get https://gcr.io/v2/: dial tcp: lookup gcr.io: Temporary failure in name resolution
 v1 ping attempt failed with error: Get https://gcr.io/v1/_ping: dial tcp: lookup gcr.io: Temporary failure in name resolution
)
  16m   1m  6   {kubelet 192.168.64.2}      Warning FailedSync  Error syncing pod, skipping: failed to "StartContainer" for "spark-master" with ErrImagePull: "image pull failed for gcr.io/google_containers/spark:1.5.2_v1, this may be because there are no credentials on this request.  details: (API error (500): unable to ping registry endpoint https://gcr.io/v0/\nv2 ping attempt failed with error: Get https://gcr.io/v2/: dial tcp: lookup gcr.io: Temporary failure in name resolution\n v1 ping attempt failed with error: Get https://gcr.io/v1/_ping: dial tcp: lookup gcr.io: Temporary failure in name resolution\n)"

  16m   13s 25  {kubelet 192.168.64.2}  spec.containers{spark-master}   Normal  BackOff     Back-off pulling image "gcr.io/google_containers/spark:1.5.2_v1"
  16m   13s 25  {kubelet 192.168.64.2}                  Warning FailedSync  Error syncing pod, skipping: failed to "StartContainer" for "spark-master" with ImagePullBackOff: "Back-off pulling image \"gcr.io/google_containers/spark:1.5.2_v1\""
bash-3.2$ docker images
REPOSITORY                                                TAG                 IMAGE ID            CREATED             SIZE
...
gettyimages/spark                                         1.6.0-hadoop-2.6    3a3dcc25744a        3 months ago        709.1 MB
gcr.io/google_containers/spark                            1.5.2_v1            22712970844d        3 months ago        989.9 MB
gettyimages/spark                                         1.5.2-hadoop-2.6    6bc6862a4701        6 months ago        655.8 MB
gcr.io/google_containers/spark-master                     latest              d594b00c063a        6 months ago        989.5 MB
laugimethods commented on Jun 15, 2016

FYI, I tried another example (Java EE Application using WildFly and MySQL), which is based on images hosted on Docker Hub, with the same outcome:

  FirstSeen LastSeen    Count   From            SubobjectPath       Type        Reason      Message
  --------- --------    -----   ----            -------------       --------    ------      -------
  1m        1m      1   {default-scheduler }                Normal      Scheduled   Successfully assigned mysql-pod to 192.168.64.2
  1m        54s     2   {kubelet 192.168.64.2}  spec.containers{mysql}  Normal      Pulling     pulling image "mysql:latest"
  1m        19s     2   {kubelet 192.168.64.2}  spec.containers{mysql}  Warning     Failed      Failed to pull image "mysql:latest": Error while pulling image: Get https://index.docker.io/v1/repositories/library/mysql/images: dial tcp: lookup index.docker.io: Temporary failure in name resolution
  1m        19s     2   {kubelet 192.168.64.2}              Warning     FailedSync  Error syncing pod, skipping: failed to "StartContainer" for "mysql" with ErrImagePull: "Error while pulling image: Get https://index.docker.io/v1/repositories/library/mysql/images: dial tcp: lookup index.docker.io: Temporary failure in name resolution"

  1m    8s  2   {kubelet 192.168.64.2}  spec.containers{mysql}  Normal  BackOff     Back-off pulling image "mysql:latest"
  1m    8s  2   {kubelet 192.168.64.2}              Warning FailedSync  Error syncing pod, skipping: failed to "StartContainer" for "mysql" with ImagePullBackOff: "Back-off pulling image \"mysql:latest\""
dims (Member) commented on Jun 15, 2016

@laugimethods: dumb question again -- does "curl https://index.docker.io/v1/repositories/library/mysql/images" work? (Trying to tease out whether there's a bug in the code.)

laugimethods commented on Jun 15, 2016

@dims:

bash-3.2$ curl https://index.docker.io/v1/repositories/library/mysql/images
[{"checksum": "", "id": "326c2d52b176ec389c8d5fcc9d44310b8a214b661f39f6b05a07c715f6936318"}, {"checksum": "", "id": "4543f37883b5b5158b6b11c5420faec6fa5f0913c76b5991f5eb30ebc1c18ffb"}, .... 
laugimethods commented on Jun 15, 2016

(Note that I'm definitely new to K8s.)
I'm using the CLI provided by Kube-Solo.

bash-3.2$ kubectl version
Client Version: version.Info{Major:"1", Minor:"2", GitVersion:"v1.2.4", GitCommit:"3eed1e3be6848b877ff80a93da3785d9034d0a4f", GitTreeState:"clean"}
Server Version: version.Info{Major:"1", Minor:"2", GitVersion:"v1.2.4", GitCommit:"3eed1e3be6848b877ff80a93da3785d9034d0a4f", GitTreeState:"clean"}

I just also tried to deploy an image through the Web Interface... Same issue...

(screenshots: deploy a containerized app; myownsql)

dims (Member) commented on Jun 15, 2016

@laugimethods: the next logical thing to do is to run the kubelet with more verbose logging, I think.
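For anyone following along, raising kubelet verbosity uses its standard glog-style --v flag. This is only a sketch under assumptions: it assumes a systemd-managed kubelet, which varies by install (Kube-Solo in particular is set up differently):

```shell
# Stop the managed kubelet first (assumes a systemd-managed kubelet).
sudo systemctl stop kubelet

# Re-run it by hand at verbosity level 4, mirroring whatever flags the
# original invocation used, and capture the output for inspection.
sudo kubelet --v=4 2>&1 | tee /tmp/kubelet.log
```

Levels 2-4 are usually enough to see image-pull activity; higher levels get very noisy.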

laugimethods commented on Jun 16, 2016

@dims Unfortunately, Kube-Solo crashed while I was trying to get more verbose logs, and I was not able to recover it (even after a reinstallation)... I'll try another tool, which will hopefully solve the Docker image fetching issue. Thanks for your help.

NickCao commented on Aug 16, 2016

gcr.io is inaccessible in China; please fix that by using mirrors.

benbonnet commented on Oct 15, 2016

The images got pushed correctly to Google Container Registry. Then:

docker login -e mygcloud@email.tld -u oauth2accesstoken -p "$(gcloud auth print-access-token)" https://eu.gcr.io

plus (in case)

gcloud auth login
cat ~/.docker/config.json # listing correctly all google registry endpoints

then

docker pull eu.gcr.io/my-project-id/my-image # it works superfine

I deleted the local Docker images, expecting the following deployment to do the pull job:

kind: Deployment
apiVersion: "extensions/v1beta1"
metadata:
  name: "my-image-app"
spec:
  replicas: 1
  template:
    metadata:
      labels:
        service: "my-image-app"
    spec:
      containers:
      - name: "my-image-app"
        image: "eu.gcr.io/my-project-id/my-image"
        ports:
        - containerPort: some_port

and running kubectl create -f the_file_containing_what-s_above.yaml followed by kubectl get pods always returned some ErrImagePull / 403 unauthorized error for the pod concerned.

Restarted both docker-machine and minikube, just in case, and re-ran the deployment. Same.

Thought I had forgotten the tag at the end of the image name (appended :latest), retried. Same.

curl https://index.docker.io/v1/repositories/library/mysql/images returns a large array of data.

Running gcloud config list:

[compute]
region = europe-west1
zone = europe-west1-c
[core]
account = mygcloud@email.tld
disable_usage_reporting = False
project = my-project-id

I don't think the issue is related to some cross-region problem.

Running all those things on a single OS X laptop. For the people above advising a switch to Docker Hub: is that the definitive answer? Isn't there something to do to instruct kubernetes and/or kubectl and/or minikube to look for the correct credentials? How can I find out which auth info is used to pull images?

benbonnet commented on Oct 15, 2016

In the end, this SO answer solved the issue: http://stackoverflow.com/a/36286707/102133 -- but that's the only place where it is clearly explained.
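For readers hitting the same 403: the general approach is to hand the cluster explicit registry credentials via an image pull secret. A sketch under assumptions -- the secret name gcr-secret is made up, and the oauth2accesstoken login mirrors the docker login shown earlier in this thread, not necessarily the exact content of the SO answer:

```shell
# Create a docker-registry secret holding short-lived GCR credentials
# (the secret name "gcr-secret" is a hypothetical choice).
kubectl create secret docker-registry gcr-secret \
  --docker-server=https://eu.gcr.io \
  --docker-username=oauth2accesstoken \
  --docker-password="$(gcloud auth print-access-token)" \
  --docker-email=mygcloud@email.tld

# Then reference the secret from the Deployment's pod template:
#   spec:
#     imagePullSecrets:
#     - name: gcr-secret
#     containers:
#     - name: my-image-app
#       image: eu.gcr.io/my-project-id/my-image
```

Note that an access token from gcloud auth print-access-token expires after about an hour, so a long-lived service-account key is the more durable variant of the same idea.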

chinglinwen commented on Feb 22, 2018

Not sure if this is the place to comment about the gcr.io issue, but it's a real headache, since so many things use gcr.io.

Say installing the dashboard involves many images that need gcr.io; or when I try to run a kube-test, it involves two images:

        image: gcr.io/heptio-images/kube-conformance:v1.9
        image: gcr.io/heptio-images/sonobuoy:v0.9.0

I replaced them with zeppp/sonobuoy:v0.9.0 and zeppp/kube-conformance:v1.8 (there is no v1.9 tag),

but even after I replaced all of them, there is still one image pulled at runtime which comes from gcr.io, causing the run to fail:

Failed: Failed to pull image "gcr.io/kubernetes-e2e-test-images/mounttest-user-amd64:1.0"

This one I really don't know how to replace (and I don't know which node the pod will run on).

Besides, any method that needs manual intervention is inconvenient (manual pull, tag, push, etc.).

And I'm not sure whether docker.io will be blocked one day too.

An easy way to work around this would be very helpful.

I tried a Docker pull-through cache, but that seems not to work for a private registry such as gcr.io.
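For what it's worth, the manual mirror workaround mentioned above (pull from a reachable mirror, then re-tag under the name the pod spec expects) looks roughly like this; it has to run on every node that might schedule the pod, and it only helps when the pod's imagePullPolicy is not Always:

```shell
# Pull the mirrored copy from Docker Hub (the zeppp/* repositories are
# the community mirrors mentioned in this thread).
docker pull zeppp/kube-conformance:v1.8

# Re-tag it so the kubelet finds a local image under the gcr.io name.
docker tag zeppp/kube-conformance:v1.8 gcr.io/heptio-images/kube-conformance:v1.8
```

With imagePullPolicy: IfNotPresent, the kubelet uses the local image and never contacts gcr.io.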

buzai commented on May 15, 2018

@chinglinwen Hi, this GFW error is troubling me now. How can I set up a Shadowsocks proxy for gcr.io?

chinglinwen commented on May 16, 2018

@buzai You can use s2http (https://github.com/chinglinwen/s2http) to turn a Shadowsocks proxy into an HTTPS proxy.

Then export https_proxy, and maybe an extra no_proxy.
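Note that exporting https_proxy in a shell only affects the docker CLI's registry lookups in some setups; the Docker daemon itself does not read the shell environment. On systemd hosts, the documented way to give the daemon a proxy is a drop-in unit -- a sketch, assuming the local HTTPS proxy listens on 127.0.0.1:8118 (the port is a placeholder):

```shell
# Point the Docker daemon at the local HTTPS proxy (systemd hosts).
sudo mkdir -p /etc/systemd/system/docker.service.d
sudo tee /etc/systemd/system/docker.service.d/http-proxy.conf <<'EOF'
[Service]
Environment="HTTPS_PROXY=http://127.0.0.1:8118"
Environment="NO_PROXY=localhost,127.0.0.1,10.0.0.0/8"
EOF

# Reload units and restart the daemon so the proxy takes effect.
sudo systemctl daemon-reload && sudo systemctl restart docker
```

Keep internal registries and cluster CIDRs in NO_PROXY, or in-cluster pulls will be routed through the proxy too.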

buzai commented on May 16, 2018

@chinglinwen
Thank you for replying.

 Type     Reason                 Age                 From                Message
  ----     ------                 ----                ----                -------
  Normal   SuccessfulMountVolume  44m                 kubelet, k8s-node3  MountVolume.SetUp succeeded for volume "xtables-lock"
  Normal   SuccessfulMountVolume  44m                 kubelet, k8s-node3  MountVolume.SetUp succeeded for volume "lib-modules"
  Normal   SuccessfulMountVolume  44m                 kubelet, k8s-node3  MountVolume.SetUp succeeded for volume "kube-proxy-token-zvktx"
  Normal   SuccessfulMountVolume  44m                 kubelet, k8s-node3  MountVolume.SetUp succeeded for volume "kube-proxy"
  Normal   BackOff                42m (x4 over 43m)   kubelet, k8s-node3  Back-off pulling image "k8s.gcr.io/kube-proxy-amd64:v1.10.2"
  Normal   Pulling                41m (x4 over 44m)   kubelet, k8s-node3  pulling image "k8s.gcr.io/kube-proxy-amd64:v1.10.2"
  Warning  Failed                 41m (x4 over 43m)   kubelet, k8s-node3  Failed to pull image "k8s.gcr.io/kube-proxy-amd64:v1.10.2": rpc error: code = Unknown desc = Error response from daemon: Get https://k8s.gcr.io/v2/: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
  Warning  Failed                 41m (x4 over 43m)   kubelet, k8s-node3  Error: ErrImagePull
  Warning  Failed                 3m (x158 over 43m)  kubelet, k8s-node3  Error: ImagePullBackOff

I set a proxy for Docker, and I can docker pull some images, but k8s gives an error like the above.
I think k8s sends a request to gcr.io to verify the image?

chinglinwen commented on May 16, 2018

For kube-proxy-amd64, you can set https_proxy and run docker pull k8s.gcr.io/kube-proxy-amd64:v1.10.2 (on every node).

If the pod status doesn't change, you can delete the pod; it will be re-created.

Labels: sig/cluster-lifecycle