Tensorflow Lite demo app with inception-v3/Mobilenet_v1 (float) model crashes #14719

Closed
@atrah22

Description

@atrah22

System information

  • Have I written custom code (as opposed to using a stock example script provided in TensorFlow):
  • OS Platform and Distribution (e.g., Linux Ubuntu 16.04): Ubuntu 14.04
  • TensorFlow installed from (source or binary): source
  • TensorFlow version (use command below): 14.04
  • Python version: 3.4.3
  • Bazel version (if compiling from source): 0.5.4

Describe the problem

Device: Galaxy S8
I downloaded the "Inception V3 Slim 2016" model from https://github.com/tensorflow/tensorflow/blob/master/tensorflow/contrib/lite/g3doc/models.md. I pushed the imagenet_2015_label_strings.txt and the "inceptionv3_non_slim_2015.tflite" to the assets folder.

I edited the ImageClassifier.java of the TF Lite demo app. The changes are the following:
private static final String MODEL_PATH = "/inceptionv3_non_slim_2015.tflite";
static final int DIM_IMG_SIZE_X = 299;
static final int DIM_IMG_SIZE_Y = 299;

The app hangs when it starts! (I could run the app with the default MobileNet quantized graph.)
The same happens with the mobilenet_v1_224_Float graph (the app hangs or crashes). I assumed the float model graphs are not yet supported by TF Lite, but the documentation says float is supported for most operations. I now suspect the error is due to a mismatch between the image pre-processing output and the input size of the float model graph. The error log is below:

The Error log:
11-21 14:31:43.034 2111-2416/android.example.com.tflitecamerademo E/AndroidRuntime: FATAL EXCEPTION: CameraBackground
Process: android.example.com.tflitecamerademo, PID: 2111
java.lang.IllegalArgumentException: Failed to get input dimensions. 0-th input should have 1072812 bytes, but found 268203 bytes.
at org.tensorflow.lite.NativeInterpreterWrapper.getInputDims(Native Method)
at org.tensorflow.lite.NativeInterpreterWrapper.run(NativeInterpreterWrapper.java:82)
at org.tensorflow.lite.Interpreter.runForMultipleInputsOutputs(Interpreter.java:112)
at org.tensorflow.lite.Interpreter.run(Interpreter.java:93)
at com.example.android.tflitecamerademo.ImageClassifier.classifyFrame(ImageClassifier.java:112)
at com.example.android.tflitecamerademo.Camera2BasicFragment.classifyFrame(Camera2BasicFragment.java:663)
at com.example.android.tflitecamerademo.Camera2BasicFragment.-wrap0(Camera2BasicFragment.java)
at com.example.android.tflitecamerademo.Camera2BasicFragment$4.run(Camera2BasicFragment.java:558)
at android.os.Handler.handleCallback(Handler.java:751)
at android.os.Handler.dispatchMessage(Handler.java:95)
at android.os.Looper.loop(Looper.java:154)
at android.os.HandlerThread.run(HandlerThread.java:61)

Additional Questions:

  1. In the app, the TensorFlow Lite graph format is ".tflite". However, in the documentation https://github.com/tensorflow/tensorflow/blob/master/tensorflow/contrib/lite/toco/g3doc/cmdline_examples.md the format is written as ".lite".

Activity

changed the title from "Tensorflow Lite demo app with inception-v3 model fails" to "Tensorflow Lite demo app with inception-v3/Mobilenet_v1 (float) model crashes" on Nov 21, 2017
pkurogerjs

pkurogerjs commented on Nov 21, 2017

@pkurogerjs

I met the same problem and I think I've found how to solve it.
First of all, the demo app is designed for the quantized model, in which both the input and the model parameters are 8-bit. For the float version, the input must be 32-bit. You can see from your error log that the expected byte count is exactly 4 times the actual byte count!

So I think you need to modify the input handling in the demo app to feed 32-bit data, as sketched below.
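
A minimal sketch of that size change, assuming the demo's existing constants (BYTES_PER_CHANNEL is a name introduced here just for illustration; a float model needs 4 bytes per channel value instead of 1):

// 4 bytes per channel value for a float model (1 for the quantized model).
private static final int BYTES_PER_CHANNEL = 4;

imgData = ByteBuffer.allocateDirect(
    BYTES_PER_CHANNEL * DIM_BATCH_SIZE * DIM_IMG_SIZE_X * DIM_IMG_SIZE_Y * DIM_PIXEL_SIZE);
imgData.order(ByteOrder.nativeOrder());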

atrah22

atrah22 commented on Nov 21, 2017

@atrah22
Author

Thank you for the reply. @pkurogerjs
However, the image data is already converted to floating point in the TFLiteDemo ImageClassifier.java file:

"/** Writes Image data into a {@code ByteBuffer}. */
private void convertBitmapToByteBuffer(Bitmap bitmap) {
if (imgData == null) {
return;
}
imgData.rewind();
bitmap.getPixels(intValues, 0, bitmap.getWidth(), 0, 0, bitmap.getWidth(), bitmap.getHeight());
// Convert the image to floating point.
int pixel = 0;
long startTime = SystemClock.uptimeMillis();
for (int i = 0; i < DIM_IMG_SIZE_X; ++i) {
for (int j = 0; j < DIM_IMG_SIZE_Y; ++j) {
final int val = intValues[pixel++];
imgData.put((byte) ((val >> 16) & 0xFF));
imgData.put((byte) ((val >> 8) & 0xFF));
imgData.put((byte) (val & 0xFF));
}
}
"

pkurogerjs

pkurogerjs commented on Nov 22, 2017

@pkurogerjs

Well, I don't agree with the comment in the code. It just extracts the RGB values of each pixel into 8-bit bytes.
You can see in ImageClassifier.java that the line "tflite.run(imgData, labelProbArray);" takes an 8-bit input tensor and an 8-bit output tensor. You need to change both of them to 32-bit float arrays.

I've successfully run the mobilenet_224 float version. The input is a 4-dimensional float array and the output is a 1 x num_of_label_types array, as in the sketch below.
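
A minimal sketch of that call, assuming the demo's DIM_IMG_SIZE_* constants, tflite interpreter, and labelList fields (the shapes shown are for a float model):

// Float model: 4-D float input [batch][height][width][channels]
// and a [1][numLabels] float output.
float[][][][] imgData = new float[1][DIM_IMG_SIZE_X][DIM_IMG_SIZE_Y][3];
float[][] labelProbArray = new float[1][labelList.size()];
tflite.run(imgData, labelProbArray);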

I found this by reading the source code in the org.tensorflow.lite directory.

Hope this will help you.

pkurogerjs

pkurogerjs commented on Nov 22, 2017

@pkurogerjs

I think this exception is because your labelProbArray (line 77) is still a byte array. Please change its type to float. The exception is thrown when TensorFlow Lite's Java code calls something like input.copyTo(output), which requires the input and output types to match.

Besides, it seems that for Inception v3 the input image size should be 299*299.
I ran this model and the recognition results seem right, but the output probabilities are larger than 1.0. I'm confused about that.
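
If those outputs turn out to be unnormalized scores rather than probabilities (an assumption, not something confirmed in this thread), applying a softmax over the raw values would map them into [0, 1]; a minimal sketch over the float labelProbArray:

// Hypothetical post-processing: numerically stable softmax over raw scores.
// Only needed if the graph does not already end in a softmax op.
float max = Float.NEGATIVE_INFINITY;
for (float v : labelProbArray[0]) {
  max = Math.max(max, v);
}
float sum = 0f;
for (int i = 0; i < labelProbArray[0].length; ++i) {
  labelProbArray[0][i] = (float) Math.exp(labelProbArray[0][i] - max);
  sum += labelProbArray[0][i];
}
for (int i = 0; i < labelProbArray[0].length; ++i) {
  labelProbArray[0][i] /= sum;
}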

atrah22

atrah22 commented on Nov 22, 2017

@atrah22
Author

@pkurogerjs, thank you for the help. It works. My main interest now is performance benchmarking of the quantized vs. float models.

kidsung

kidsung commented on Nov 24, 2017

@kidsung

Is there any difference between the slim and non-slim one? I also get output probabilities larger than 1.0, and the recognition does not work well; it does not return the right answer.
@atrah22 Could you share your modification?
I have changed:
imgData = new float[DIM_BATCH_SIZE][DIM_IMG_SIZE_X][DIM_IMG_SIZE_Y][DIM_PIXEL_SIZE];
labelProbArray = new float[1][labelList.size()];

new AbstractMap.SimpleEntry<>(labelList.get(i), (labelProbArray[0][i])));

drricksanchez321

drricksanchez321 commented on Nov 29, 2017

@drricksanchez321

I was able to get it working with Inception v3 non-slim 2015 by making @kidsung's changes plus:

replacing the imgData.put calls with:

 imgData[0][i][j][0] = (float) ((val >> 16) & 0xFF);
 imgData[0][i][j][1] = (float) ((val >> 8) & 0xFF);
 imgData[0][i][j][2] = (float) (val & 0xFF);

commenting out

imgData.rewind();

and

imgData.order(ByteOrder.nativeOrder());

and modifying the declarations:

private float[][][][] imgData = null;
private float[][] labelProbArray = null;

Additionally, I received a tensor length error for the labels: the label file had 1001 entries but the model expected 1008, so I padded the file with 7 placeholder lines (foo1, foo2, etc.).
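
An alternative to editing the label file would be to pad the in-memory list after loading it; a sketch, assuming the demo's labelList field and using 1008 as the output size observed above:

// Hypothetical: pad the label list so its size matches the model's output tensor.
final int NUM_CLASSES = 1008;
while (labelList.size() < NUM_CLASSES) {
  labelList.add("unused-" + labelList.size());
}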

aselle

aselle commented on Nov 29, 2017

@aselle
Contributor

Seems like the issue is resolved. Could you please close it if it is?

OdingdongO

OdingdongO commented on Dec 4, 2017

@OdingdongO

After modifying ImageClassifier.java, I can run the float models from https://github.com/tensorflow/tensorflow/blob/master/tensorflow/contrib/lite/g3doc/models.md.
I tested the Inception model (Inception V3 Slim 2016) and the MobileNet model (Mobilenet 1.0 224 Float).
The inceptionv3_slim_2016.tflite model's inference time is 2260 ms, the mobilenet_v1_1.0_224.tflite model's is 500 ms, and mobilenet_quant_v1_224.tflite's is 65 ms.
Add:
private static final int IMAGE_MEAN = 128;
private static final float IMAGE_STD = 128.0f;

Change:
private byte[][] labelProbArray = null;
to:
private float[][] labelProbArray = null;

Change:
imgData = ByteBuffer.allocateDirect(
    DIM_BATCH_SIZE * DIM_IMG_SIZE_X * DIM_IMG_SIZE_Y * DIM_PIXEL_SIZE);
to:
imgData = ByteBuffer.allocateDirect(
    4 * DIM_BATCH_SIZE * DIM_IMG_SIZE_X * DIM_IMG_SIZE_Y * DIM_PIXEL_SIZE);

Change:
labelProbArray = new byte[1][labelList.size()];
to:
labelProbArray = new float[1][labelList.size()];

Change:
imgData.put((byte) ((val >> 16) & 0xFF));
imgData.put((byte) ((val >> 8) & 0xFF));
imgData.put((byte) (val & 0xFF));
to:
imgData.putFloat((((val >> 16) & 0xFF) - IMAGE_MEAN) / IMAGE_STD);
imgData.putFloat((((val >> 8) & 0xFF) - IMAGE_MEAN) / IMAGE_STD);
imgData.putFloat(((val & 0xFF) - IMAGE_MEAN) / IMAGE_STD);

If you use the Inception model, also change:
static final int DIM_IMG_SIZE_X = 224;
static final int DIM_IMG_SIZE_Y = 224;
to:
static final int DIM_IMG_SIZE_X = 299;
static final int DIM_IMG_SIZE_Y = 299;
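
Putting those edits together, a minimal sketch of the resulting conversion loop (assuming the ByteBuffer-based approach above and the demo's existing intValues field):

/** Writes image data into a {@code ByteBuffer} as normalized 32-bit floats. */
private void convertBitmapToByteBuffer(Bitmap bitmap) {
  if (imgData == null) {
    return;
  }
  imgData.rewind();
  bitmap.getPixels(intValues, 0, bitmap.getWidth(), 0, 0, bitmap.getWidth(), bitmap.getHeight());
  int pixel = 0;
  for (int i = 0; i < DIM_IMG_SIZE_X; ++i) {
    for (int j = 0; j < DIM_IMG_SIZE_Y; ++j) {
      final int val = intValues[pixel++];
      // Each channel becomes a 4-byte float normalized to roughly [-1, 1].
      imgData.putFloat((((val >> 16) & 0xFF) - IMAGE_MEAN) / IMAGE_STD);
      imgData.putFloat((((val >> 8) & 0xFF) - IMAGE_MEAN) / IMAGE_STD);
      imgData.putFloat(((val & 0xFF) - IMAGE_MEAN) / IMAGE_STD);
    }
  }
}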

austingg

austingg commented on Dec 10, 2017

@austingg

@atrah22 Are there any results comparing the quantized model vs. the float model?
