Layers for MobileNet from TensorFlow #9517
Conversation
```
@@ -343,6 +343,12 @@ CV__DNN_EXPERIMENTAL_NS_BEGIN
    static Ptr<ReLULayer> create(const LayerParams &params);
};

class CV_EXPORTS ReLU6Layer : public ActivationLayer
```
Can we make the layer slightly more universal: `result(x, y, c) = min(max(src(x, y, c), a), b)`, with customizable `a` and `b`?
Generalized. The name of the layer is the same.
```
v_float32x4 x1 = v_load(srcptr + i + 4);
v_float32x4 x2 = v_load(srcptr + i + 8);
v_float32x4 x3 = v_load(srcptr + i + 12);
x0 = v_select(x0 >= z, x0, z);
```
Use the more efficient `v_min` and `v_max` instead of `v_select`.
Done, thanks!
```
    }
    kshape[0] = inCh * chMultiplier;
    kshape[1] = 1;
}
layerParams.set("kernel_h", kshape[2]);
layerParams.set("kernel_w", kshape[3]);
layerParams.set("num_output", kshape[0]);
```
Set some `min_value=0` and `max_value=6` here.
```
void attachHalide(const Halide::Expr& input, Halide::Func& top)
{
    Halide::Var x("x"), y("y"), c("c"), n("n");
    top(x, y, c, n) = min(max(minValue, input), maxValue);
```
`clamp(input, minValue, maxValue)`?
Sure, thank you!
```
}
#endif  // HAVE_HALIDE

int64 getFLOPSPerElement() const { return 1; }
```
2?
Tried your patch and it works, except for one thing: https://www.tensorflow.org/api_docs/cc/class/tensorflow/ops/placeholder-with-default. I tried removing that layer from the model, but it seems that without it the model fails with a cv assert on an "Add" operation later on.
@quarcko, could you please provide some way to reproduce it? I think we could resolve it faster if there were steps like those in the PR description.
Sure, I used this blog post to train my model. The model used is "mobilenet_1.0_224". After "retraining" the model, as I mentioned, a new unsupported layer is added. Here I will attach my retrained sample model so you can test it without doing the training part. You will notice that this model fails at "PlaceholderWithDefault": https://www.dropbox.com/s/r1u6w52flwgt8ft/output_graph.pb?dl=0
@quarcko, could you try it again? The necessary changes were made. The following transformations must be applied to the referenced model:

```
~/tensorflow/bazel-bin/tensorflow/tools/graph_transforms/transform_graph \
  --in_graph=output_graph.pb \
  --out_graph=transformed_graph.pb \
  --inputs=input \
  --outputs=final_result \
  --transforms="fold_constants sort_by_execution_order remove_nodes(op=Squeeze, op=PlaceholderWithDefault)"
```
@dkurt, the patch cannot be merged because of conflicts, could you please fix them?
force-pushed from 2dd26a1 to fe9229c
@vpisarev, the conflicts were resolved.
👍
@dkurt, I hit the problem: unsupported layer `PlaceholderWithDefault`.
@sEasonsQAQ, can you try to add an extra NHWC->NCHW permutation node before the first MatMul's Reshape, as described at http://answers.opencv.org/question/180474/dnn-different-results-between-version-330-and-331/?
@dkurt, thanks for your help. I tested per your suggestion, but the result is still not correct: I got a 1.28523e-07 maximal absolute difference, while another C++ TensorFlow test returns the expected result: class 'b', prob 0.99+. Are my test steps wrong? Kindly help with this, thanks! The other C++ code:
@sEasonsQAQ, did you find a way to get your retrained model running correctly with the OpenCV DNN module?
@sEasonsQAQ @naguirre I'm also seeing this exact behavior. It tests fine on the original training dataset, but real-world samples don't predict correctly; however, the same model works fine with TensorFlow Mobile. We should probably create a bug report so this can be tracked.
@mevatron, may I ask you to open a topic at http://answers.opencv.org? Please make it as reproducible as possible. Do not insert cross-references to old questions or issues. The best way is to attach a
@dkurt As requested, I posted a detailed write-up to reproduce the issue we are seeing here: http://answers.opencv.org/question/185283/opencv_dnn-provides-incorrect-inferences-after-transform_graph/ Any of your insights would be greatly appreciated! Thanks for your time!
@naguirre Sorry for the delay, I just transformed the tensorflow-1.2.0 retrained .pb:
@sEasonsQAQ Interesting; I'm using TensorFlow master, so I wonder if that might be the source of my issues.
This pull request changes:

- resolves #9462 (waiting for feedback)
- `depthwise_conv2d` layer from TensorFlow (convolution with #groups == #input_channels)
- `Mul` and `Add` support

Merge with extra: opencv/opencv_extra#370

How to run MobileNet using DNN:

1. Go to https://github.com/tensorflow/models/blob/master/slim/nets/mobilenet_v1.md and download the checkpoint for the `MobileNet_v1_1.0_224` model. Unpack and navigate into the folder that contains it.

2. Create a `.pb` model by:

   ```
   python ~/tensorflow/tensorflow_models/slim/export_inference_graph.py \
     --model_name=mobilenet_v1 \
     --output_file=mobilenet_v1.pb \
     --image_size=224
   ```

   source: https://github.com/tensorflow/models/blob/master/slim/README.md#exporting-the-inference-graph

3. Freeze:

   ```
   python ~/tensorflow/tensorflow/python/tools/freeze_graph.py \
     --input_graph=mobilenet_v1.pb \
     --input_checkpoint=mobilenet_v1_1.0_224.ckpt \
     --output_graph=mobilenet_v1_frozen.pb \
     --output_node_names=MobilenetV1/Predictions/Softmax \
     --input_binary
   ```

   source: https://github.com/tensorflow/models/blob/master/slim/README.md#freezing-the-exported-graph

4. Modify for DNN: fuse batch normalizations and remove the `Squeeze` op.

5. Enjoy with DNN.

output:

And if I'm right and MobileNet uses the 0th class as None, the 97th class is a toucan (see synset_words.txt).