Closed
Description
There appears to be a bug in `_make_predict_function` in the TensorFlow backend. The following error appears for me when calling `model.predict(...)`:
```
    self._make_predict_function()
  File "/usr/local/lib/python3.4/dist-packages/keras/engine/training.py", line 679, in _make_predict_function
    **self._function_kwargs)
  File "/usr/local/lib/python3.4/dist-packages/keras/backend/tensorflow_backend.py", line 615, in function
    return Function(inputs, outputs, updates=updates)
  File "/usr/local/lib/python3.4/dist-packages/keras/backend/tensorflow_backend.py", line 589, in __init__
    with tf.control_dependencies(self.outputs):
  File "/usr/local/lib/python3.4/dist-packages/tensorflow/python/framework/ops.py", line 3192, in control_dependencies
    return get_default_graph().control_dependencies(control_inputs)
  File "/usr/local/lib/python3.4/dist-packages/tensorflow/python/framework/ops.py", line 2993, in control_dependencies
    c = self.as_graph_element(c)
  File "/usr/local/lib/python3.4/dist-packages/tensorflow/python/framework/ops.py", line 2291, in as_graph_element
    raise ValueError("Tensor %s is not an element of this graph." % obj)
ValueError: Tensor Tensor("Sigmoid_2:0", shape=(?, 17), dtype=float32) is not an element of this graph.
```
This does not happen with the Theano backend.
Notes: the model is loaded from JSON, and is defined as follows:
```python
seq1 = Input(dtype='int32', shape=(400,), name='input_text')
seq2 = Input(dtype='int32', shape=(20,), name='input_titles')

embedding = Embedding(max_features, embedding_dims, dropout=0.3)
encoding_1 = embedding(seq1)
encoding_2 = embedding(seq2)

filter_lengths = [1, 3, 6]

def max_1d(X):
    return K.max(X, axis=1)

convs1 = []
convs2 = []
for fl in filter_lengths:
    conv1 = Convolution1D(nb_filter=nb_filter,
                          filter_length=fl,
                          border_mode='valid',
                          activation='relu',
                          subsample_length=1)(encoding_1)
    conv1 = Lambda(max_1d, output_shape=(nb_filter,))(conv1)
    convs1.append(conv1)

    conv2 = Convolution1D(nb_filter=nb_filter,
                          filter_length=fl,
                          border_mode='valid',
                          activation='relu',
                          subsample_length=1)(encoding_2)
    conv2 = Lambda(max_1d, output_shape=(nb_filter,))(conv2)
    convs2.append(conv2)

m = merge([*convs1, *convs2], mode='concat')
m = Highway(activation='relu')(m)
m = Highway(activation='relu')(m)
m = Dropout(0.5)(m)

hovedkategori_loss = Dense(labsHovedKat.shape[1], activation='sigmoid', name='hovedkategori')(m)
m1 = merge([hovedkategori_loss, m], mode='concat')
underkategori_loss = Dense(labsUnderKat.shape[1], activation='sigmoid', name='underkategori')(m1)

model = Model(input=[seq1, seq2], output=[hovedkategori_loss, underkategori_loss])
model.compile(optimizer='adam',
              loss='binary_crossentropy',
              metrics={'hovedkategori': 'accuracy', 'underkategori': 'accuracy'})
```
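A sketch of the load-from-JSON path that triggers the error (file names and input arrays are placeholders, not the original code):

```python
from keras.models import model_from_json

# Rebuild the architecture from its JSON description and restore the weights.
with open('model.json') as f:
    model = model_from_json(f.read())
model.load_weights('weights.h5')

# Calling predict then raises the ValueError shown above.
preds = model.predict([X_text, X_titles])
```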
- Check that you are up-to-date with the master branch of Keras. You can update with:
  pip install git+git://github.com/fchollet/keras.git --upgrade --no-deps
- If running on Theano, check that you are up-to-date with the master branch of Theano. You can update with:
  pip install git+git://github.com/Theano/Theano.git --upgrade --no-deps
Activity
Froskekongen commented on Apr 19, 2016
I would appreciate any comments on this issue, as I want to deploy the model ASAP and need to know whether I can use it or must code something else.
fchollet commented on Apr 19, 2016
Do you have a code snippet to reproduce this issue? I can guarantee you that `predict` does in fact work, including with TensorFlow.
Froskekongen commented on Apr 20, 2016
It appears this bug had nothing to do with either Keras or TensorFlow, but rather with how async events were handled by the webserver I am using.
jstypka commented on May 9, 2016
@Froskekongen could you describe how you fixed this in more detail? I'm having exactly the same error, though in a different program.
It seems to work when I do it manually in a REPL; however, when I deploy it as a webservice it breaks.
pxlong commented on May 13, 2016
I also get the same error under the TensorFlow backend; however, it works using the Theano backend.
@jstypka @Froskekongen have you found a fix?
jstypka commented on May 13, 2016
@pxlong it also works on Theano for me, so I think it's exactly the same problem. I didn't manage to solve it though; I was hoping for some hints from @Froskekongen.
rkempter commented on May 27, 2016
Same here, same issue! Works fine in a REPL; issues when running it behind a webservice.
rkempter commented on May 27, 2016
Running the webservice with gunicorn in sync mode solved the issue.
gladuo commented on Aug 2, 2016
Hey everybody, I'm still not sure what's wrong with this combination.
But I use meinheld instead, and it works even better than gevent.
Hope this helps.
AbhishekAshokDubey commented on Aug 16, 2016
Same problem (`model.predict` breaking) for me too, but it worked when I switched from the TensorFlow backend to Theano.
Nr90 commented on Aug 26, 2016
Same problem here.
It seems to work fine normally, but when deployed as a webservice using Flask, I get this error.
Nr90 commented on Aug 27, 2016
Works when using Theano as the backend; doesn't work with TensorFlow.
avital commented on Oct 19, 2016
I had this problem when doing inference in a different thread than the one where I loaded my model. Here's how I fixed it:
Right after loading or constructing your model, save the TensorFlow graph:
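A minimal sketch of this step, using the TF 1.x API:

```python
import tensorflow as tf

# Keep a reference to the graph the model was loaded into.
graph = tf.get_default_graph()
```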
In the other thread (or perhaps in an asynchronous event handler), do:
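And a sketch of the handler side, assuming `graph` and `model` are the objects saved above:

```python
# Make the saved graph this thread's default before predicting.
with graph.as_default():
    preds = model.predict(x)
```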
I learned about this from https://www.tensorflow.org/versions/r0.11/api_docs/python/framework.html#get_default_graph
Walid-Ahmed commented on Nov 4, 2016
Thanks a lot.
It worked for me.
142 remaining items
keshavatgithub commented on Sep 22, 2019
Could you give a code snippet?
keshavatgithub commented on Sep 25, 2019
How do I use this?
gustavz commented on Oct 10, 2019
For me this is only solvable if I load the model inside the Flask `@app.route` POST method, which means I reload the model on every request, which is very inefficient.
Loading the model as a `global`, either at the beginning of the Flask app script or as a `global` in `main()`, does not work. Any ideas on how to solve this?
Parallel predict fails: tried to predict using a pool, but in another thread the trained model's graph…
Chancetc commented on Dec 2, 2019
Thanks a million!!
Aparnamaurya commented on Apr 2, 2020
If multithreaded performance is not a necessity, you can also run TensorFlow on a single thread.
(In my case, I came across the same issue and none of the methods mentioned here worked; it would be great if someone could help figure that out, but just in case.)
To run TensorFlow on a single thread:
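A minimal sketch of this, using the TF 1.x session API:

```python
import tensorflow as tf
from keras import backend as K

# Limit both intra-op and inter-op parallelism to a single thread.
session_conf = tf.ConfigProto(intra_op_parallelism_threads=1,
                              inter_op_parallelism_threads=1)
K.set_session(tf.Session(config=session_conf))
```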
isaaclok commented on May 22, 2020
I solved this by upgrading to TensorFlow 2 and implementing a singleton pattern for my model, like so:
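A minimal sketch of such a singleton ('model.h5' is a placeholder path):

```python
import tensorflow as tf

class ModelSingleton:
    _model = None  # shared across all callers

    @classmethod
    def get_model(cls):
        # Load the model exactly once and reuse it afterwards.
        if cls._model is None:
            cls._model = tf.keras.models.load_model('model.h5')
        return cls._model
```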
This should work for both standalone Keras and TensorFlow 2.
Make the model work in a multi-thread environment.
Content Rewriter Demo (#208)
Minqi824 commented on Aug 7, 2020
Well, currently my tf is 2.2.0 and Keras is 2.3.1. I added `_make_predict_function()` after loading the model.
However, I get the error "RuntimeError: Attempting to capture an EagerTensor without building a function.", and `tf.compat.v1.disable_eager_execution()` does not seem to work and generates a new error.
Can anybody help me solve this problem? Thanks a lot!
neilmario70 commented on Oct 25, 2020
I am not able to follow @avital's instructions. Could someone share example code using one of the pretrained models like ResNet50 or VGGNet? My Flask app only works on the development server and stops running as soon as I use nginx with uWSGI in production with multithreading. I am currently using this image https://hub.docker.com/r/tiangolo/uwsgi-nginx/ in production, and here is my code for making predictions.
I am importing the `predict_class` function into main.py to make predictions on uploaded files.
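A hypothetical sketch of such a `predict_class` (the model choice and paths are placeholders, not the original code):

```python
import numpy as np
from keras.applications.resnet50 import ResNet50, preprocess_input, decode_predictions
from keras.preprocessing import image

# Load the pretrained model once, at import time.
model = ResNet50(weights='imagenet')

def predict_class(img_path):
    # Build a single-image batch in the format ResNet50 expects.
    img = image.load_img(img_path, target_size=(224, 224))
    x = preprocess_input(np.expand_dims(image.img_to_array(img), axis=0))
    preds = model.predict(x)
    return decode_predictions(preds, top=1)[0]
```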
model.make_predict_function()
SciSharp/TensorFlow.NET#958