Wrong result when computing accuracy using tf.metrics.accuracy #15115
But the result in evaluation mode is correct while it is wrong in train mode. I'm using the following code:

def cifar_model_fn(features, labels, mode):
    # ... some other code (conv/pool/dropout layers) ...
    logits = tf.layers.dense(inputs=dropout, units=10)
    predictions = {
        'classes': tf.argmax(input=logits, axis=1, name='classes'),
        'probabilities': tf.nn.softmax(logits, name='softmax_tensor')
    }
    onehot_labels = tf.one_hot(indices=tf.cast(labels, tf.int32), depth=10)
    loss = tf.losses.softmax_cross_entropy(onehot_labels, logits)
    accuracy, update_op = tf.metrics.accuracy(
        labels=labels, predictions=predictions['classes'], name='accuracy')
    # my way to compute accuracy, which gives the correct result when training
    my_acc = tf.reduce_mean(
        tf.cast(tf.equal(tf.cast(labels, tf.int64), predictions['classes']), tf.float32))

    if mode == tf.estimator.ModeKeys.TRAIN:
        tensors_to_log = {
            'Accuracy': accuracy,
            'My accuracy': my_acc}
        logging_hook = tf.train.LoggingTensorHook(tensors=tensors_to_log, every_n_iter=100)
        optimizer = tf.train.AdamOptimizer(learning_rate=FLAGS.learning_rate)
        train_op = optimizer.minimize(loss=loss, global_step=tf.train.get_global_step())
        return tf.estimator.EstimatorSpec(
            mode=mode, loss=loss, train_op=train_op, training_hooks=[logging_hook])

    eval_metric_ops = {
        'accuracy': (accuracy, update_op)
    }
    return tf.estimator.EstimatorSpec(mode=mode, loss=loss, eval_metric_ops=eval_metric_ops)

def main():
    # create dataset ...
    cifar10_classifier = tf.estimator.Estimator(model_fn=cifar_model_fn, model_dir=FLAGS.model_dir)
    cifar10_classifier.train(input_fn=train_input_fn)
    eval_results = cifar10_classifier.evaluate(input_fn=eval_input_fn)

My purpose is to log the training accuracy during training. The following is a part of the console output.
Beware that tf.metrics.accuracy is computed over an entire session run, not per batch. I just kicked it out because I did not get the utility of it. Instead, I'm doing the same thing as you with reduce_mean to have batch accuracy.
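To make the difference concrete, here is a small sketch (the placeholders and numbers are made up for illustration) contrasting the streaming metric with a per-batch reduce_mean:

import tensorflow as tf

# Hypothetical placeholders standing in for one batch of labels and predictions.
labels = tf.placeholder(tf.int64, shape=[None])
preds = tf.placeholder(tf.int64, shape=[None])

# Streaming accuracy: maintains running total/count local variables.
# The accuracy tensor only reads total/count; only update_op changes them.
accuracy, update_op = tf.metrics.accuracy(labels=labels, predictions=preds)

# Per-batch accuracy: recomputed from scratch for whatever batch is fed in.
batch_acc = tf.reduce_mean(tf.cast(tf.equal(labels, preds), tf.float32))

with tf.Session() as sess:
    sess.run(tf.local_variables_initializer())  # total and count are local variables
    feed = {labels: [1, 2, 3, 4], preds: [1, 2, 0, 0]}
    print(sess.run(batch_acc, feed))  # 0.5 for this batch
    print(sess.run(accuracy, feed))   # 0.0, because update_op has never run
    sess.run(update_op, feed)         # fold this batch into total/count
    print(sess.run(accuracy, feed))   # 0.5, the accumulated accuracy so far

The per-batch value changes with every batch, while the streaming value only moves when update_op is run and reflects everything accumulated since the local variables were last initialized.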
@reedwm @jmaye Thank you for your reply. I read #9498 and understand how the metric works. The doc says the metric keeps two local variables, total and count, but it doesn't say when the accuracy is actually computed, i.e. when total is divided by count. In my code above, the accuracy is computed when evaluation finishes. But is it supposed to compute the accuracy only when the entire training run finishes?
How is accuracy computed? It simply divides the accumulated total by the accumulated count. When update_op is run, the current batch's correct predictions and example count are added to those accumulators. In your Estimators case, only the accuracy tensor is fetched during training, so update_op never runs and the logged value never reflects your batches; during evaluation the Estimator runs update_op for every batch, which is why the evaluation result is correct.
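If one does want a streaming value in the training logs, one possible sketch (my own, not necessarily what was suggested above; it reuses the names from the model_fn earlier in this issue) is to log update_op instead of accuracy, so that each logged fetch also folds the current batch into total/count:

if mode == tf.estimator.ModeKeys.TRAIN:
    # Logging update_op (rather than accuracy) updates the streaming
    # total/count whenever the hook fetches it; my_acc stays a pure
    # per-batch value for comparison.
    tensors_to_log = {
        'Streaming accuracy': update_op,
        'Batch accuracy': my_acc}
    logging_hook = tf.train.LoggingTensorHook(tensors=tensors_to_log, every_n_iter=100)

Note that, as far as I know, LoggingTensorHook only fetches its tensors on the steps it actually logs, so the running average would only accumulate those particular batches; alternatively update_op could be grouped with train_op (e.g. via tf.group) so it runs on every step.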
@reedwm Thank you for your detailed explanation. I got the idea.
@reedwm @jmaye
output
@dchatterjee172, can you provide a self-contained, relatively short example I can run to reproduce? It does seem like
@reedwm
Even with the cast in
It can be helpful when there is a lot of oscillation in batch accuracy. The aggregated accuracy will be a lot smoother, so it's easier to observe learning from its values.
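As a toy illustration of that smoothing effect (plain Python, made-up numbers), the running ratio reported by a streaming metric moves far less than the individual batch values:

import random

random.seed(0)
total, count = 0, 0
for step in range(1, 11):
    # Pretend each batch of 100 examples has a noisy accuracy around 0.7.
    correct = sum(random.random() < 0.7 for _ in range(100))
    total += correct
    count += 100
    print('step %2d  batch acc %.2f  streaming acc %.3f'
          % (step, correct / 100.0, total / float(count)))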
Why is the value returned by tf.metrics.accuracy always 0?
System information

Describe the problem
I found the result tf.metrics.accuracy returns is incorrect when I trained my model. To verify this I wrote a simple program. You can see that acc and my_acc are different and acc is wrong. I double-checked the doc and am still confused. Is there anything I missed? Thank you.