Tensorflow backend - bug in model._make_predict_function(...) #2397
Comments
I would appreciate any comments on this issue, as I want to deploy the model ASAP and need to know whether I can use it or have to code something else.
Do you have a code snippet to reproduce this issue? I can guarantee you that
It appears this bug had nothing to do with either keras or tensorflow, but rather how async events were handled by the webserver I am using.
@Froskekongen could you describe how you fixed this in more detail? I'm having exactly the same error, but in a different program. It seems to work when I do it manually in a REPL, but it breaks when I deploy it as a webservice.
I also have the same error under the tensorflow backend; however, it works using the theano backend.
@pxlong it also works on Theano for me, so I think it's exactly the same problem. I didn't manage to solve it though; I was hoping for some hints from @Froskekongen.
Same here, same issue! Works fine in a REPL, but breaks when running behind a webservice.
Running the webservice with gunicorn in sync mode solved the issue.
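(For reference, a minimal invocation of that setup might look like the following; the module and app names are placeholders, not from the comment:)

```
gunicorn --worker-class sync --workers 1 myapp:app
```

Sync workers are separate processes, each loading its own copy of the model, which sidesteps the shared-graph threading problem that async workers run into.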
Hey everybody, I'm still not sure what's wrong with this combination.
Same problem (model.predict breaking) for me too, but it worked when I switched from the tensorflow backend to theano.
Same problem here.
Works when using Theano as backend, doesn't work with tensorflow.
I had this problem when doing inference in a different thread than the one where I loaded my model. Here's how I fixed it. Right after loading or constructing your model, save the TensorFlow graph:

```python
graph = tf.get_default_graph()
```

In the other thread (or perhaps in an asynchronous event handler), do:

```python
global graph
with graph.as_default():
    # ... do inference here ...
```

I learned about this from https://www.tensorflow.org/versions/r0.11/api_docs/python/framework.html#get_default_graph
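(To make the pattern concrete, here is a minimal end-to-end sketch of that workaround under Keras 2.x on the TF 1.x backend; the model path and input shape are illustrative assumptions, not from the comment:)

```python
import threading
import numpy as np
import tensorflow as tf
from keras.models import load_model

# main thread: load the model, build its predict function, capture its graph
model = load_model('model.h5')   # illustrative path
model._make_predict_function()
graph = tf.get_default_graph()

def worker(x):
    # any other thread must re-enter the captured graph before predicting
    with graph.as_default():
        print(model.predict(x))

t = threading.Thread(target=worker, args=(np.zeros((1, 10)),))  # shape is illustrative
t.start()
t.join()
```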
Thanks a lot.
Thanks so much! @avital
Thanks a lot! @avital It worked.
Thank you @avital, that did the trick! This issue really should not be closed; it should be fixed in the Keras library itself.
You are a savior! Thanks a lot, @avital.
Hi all, I followed @avital's code but got `AttributeError: __exit__` after the `with` statement. Does anyone know how to deal with this? Thanks!!
Thanks a million, it works!!!! @avital
Worked like a charm!! :) @avital
I had the same problem and solved it, thanks @avital.
Amazing solution!!
@avital's solution works with Keras on the tensorflow backend. Details:

```python
self.model = load_model(model_path)
self.model._make_predict_function()
self.graph = tf.get_default_graph()
```

In another thread:

```python
with self.graph.as_default():
    labels = self.model.predict(data)
```
@shaoeChen but gunicorn also does the same thing, right? It loads the model for each process. Anyway, good for you.
@SiddhardhaSaran hi dear.
Added a change to fix "Tensor is not an element of this graph"; reference here: keras-team/keras#2397 (comment)
I faced the same issue recently when deploying the model as a webservice using Django. I ended up creating a singleton class holding the model and the tf graph, i.e., it is instantiated only once. That solved the problem.
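(The comment doesn't include the class itself; below is a minimal sketch of such a singleton, assuming Keras 2.x on the TF 1.x backend; the class name, method names, and model path are illustrative:)

```python
import tensorflow as tf
from keras.models import load_model

class KerasModelSingleton:
    _instance = None

    @classmethod
    def get(cls, model_path='model.h5'):  # illustrative path
        # the model and its graph are created exactly once per process
        if cls._instance is None:
            cls._instance = cls(model_path)
        return cls._instance

    def __init__(self, model_path):
        self.model = load_model(model_path)
        self.model._make_predict_function()
        self.graph = tf.get_default_graph()

    def predict(self, x):
        # every request thread re-enters the captured graph before predicting
        with self.graph.as_default():
            return self.model.predict(x)
```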
Hi @shaoeChen, you might want to try my approach: #2397 (comment). It saves both the session and the graph.
Not sure if it's relevant to the original question, but maybe it'll be useful to others. Based on this answer, the following resolved tf's multithreading compatibility for me:

```python
# on thread 1
session = tf.Session(graph=tf.Graph())
with session.graph.as_default():
    k.backend.set_session(session)
    model = k.models.load_model(filepath)

# on thread 2
with session.graph.as_default():
    k.backend.set_session(session)
    model.predict(x, **kwargs)
```

The novelty here is keeping both the session and the graph, not just the graph.
@eliadl @emesha92 what about multi-model prediction and multiprocessing under one main process? Would you mind giving me some advice?
@Ai-is-light by "multiprocessing under one main process", what exactly do you mean?
Could you give a code snippet?
How do I use this?
For me this is only solvable if I load the model inside the Flask request handler. Any ideas on how to solve this?
… it doesn't seem to be recognized properly. keras-team/keras#2397 has a solution, but applying it failed due to my insufficient understanding of the Keras trained model.
Thanks a million!!
If multithreaded performance is not a necessity, you can also run tensorflow on a single thread. (In my case I came across the same issue and none of the methods mentioned here worked; it would be great if someone could help figure that out, but just in case.) To run tensorflow on a single thread:

```python
global session
session_conf = tf.ConfigProto(
    intra_op_parallelism_threads=1,
    inter_op_parallelism_threads=1)
session = tf.Session(config=session_conf)

# for inference
global session
with session.graph.as_default():
    # ... do inference here ...
```
I solved this by upgrading to tensorflow2 and implementing a singleton pattern in my model, like so:
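(The code block from this comment did not survive extraction; a minimal sketch of what a TF2-era singleton might look like, with all names illustrative, is:)

```python
import tensorflow as tf

class Predictor:
    _instance = None

    def __new__(cls, model_path='model.h5'):  # illustrative path
        # load the model only on the first construction; later calls reuse it
        if cls._instance is None:
            cls._instance = super().__new__(cls)
            cls._instance.model = tf.keras.models.load_model(model_path)
        return cls._instance

    def predict(self, x):
        # under TF2 eager execution, no graph/session juggling is required
        return self.model.predict(x)
```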
It should work both for standalone keras and for tensorflow2.
- This is mainly done by using `graph.as_default()`.
- Detailed discussions can be found at: keras-team/keras#2397 (comment)
- Without explicitly using `as_default`, running this will fail in Django (unless you specify -nothreading).
Well, currently my tf is 2.2.0 and keras is 2.3.1. I added `_make_predict_function()` after loading the model. However, I get the error "RuntimeError: Attempting to capture an EagerTensor without building a function.", and the command `tf.compat.v1.disable_eager_execution()` doesn't seem to work and generates a new error. Can anybody help me solve this problem? Thanks a lot!
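(Not a definitive fix, but `_make_predict_function()` is a holdover from the TF1 graph-mode backend; under TF 2.x with tf.keras, the usual advice is to drop both it and the graph workaround entirely, along the lines of this sketch, with an illustrative model path:)

```python
import tensorflow as tf

model = tf.keras.models.load_model('model.h5')  # illustrative path

def handle_request(x):
    # in TF2 eager mode, predict can be called directly, even from a worker
    # thread, without graph.as_default() or _make_predict_function()
    return model.predict(x)
```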
I am not able to follow @avital's instructions. Could someone share example code using one of the pretrained models like ResNet50 or VGGNet? My flask app only works on the development server and stops running as soon as I use nginx with uwsgi in production with multithreading. I am currently using the image https://hub.docker.com/r/tiangolo/uwsgi-nginx/ in production, and here is my code for making predictions. I am importing the predict_class function from the main.py file to make predictions on uploaded files.
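(The predict_class code from this comment did not survive extraction; as a stand-in, here is a hedged sketch of the graph workaround applied to a pretrained ResNet50 under Keras 2.x / TF 1.x; the function body and paths are illustrative, not the commenter's original:)

```python
import numpy as np
import tensorflow as tf
from keras.applications.resnet50 import ResNet50, preprocess_input, decode_predictions
from keras.preprocessing import image

# load once at import time (main thread) and capture the graph
model = ResNet50(weights='imagenet')
model._make_predict_function()
graph = tf.get_default_graph()

def predict_class(img_path):
    # called from uwsgi worker threads; re-enter the captured graph first
    img = image.load_img(img_path, target_size=(224, 224))
    x = preprocess_input(np.expand_dims(image.img_to_array(img), axis=0))
    with graph.as_default():
        preds = model.predict(x)
    return decode_predictions(preds, top=1)[0]
```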
There appears to be a bug in `_make_predict_function` for the tensorflow backend. The following error message appears for me when trying to call `model.predict(...)`:
This does not happen when using the theano backend.
Notes: the model is loaded from JSON, and is defined as follows:
```
pip install git+git://github.com/fchollet/keras.git --upgrade --no-deps
pip install git+git://github.com/Theano/Theano.git --upgrade --no-deps
```