control_dependencies and assign new shape not working (using validate_shape=False) #7782
Comments
Interesting. I modified your program as follows:

If `y = x` is used instead of `y = assign_op`, I get this output:

Setting `y = assign_op`, I get what you expected.

It appears that the `control_dependencies` construct correctly forces `assign_op` to execute, but the new value isn't actually visible to an evaluation of `x` until later. This is beyond my understanding. Summoning @mrry.
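A hypothetical reconstruction of the modified program described above (the exact modification was not preserved in this thread). It is written against `tf.compat.v1`, with v1 ref-variable behavior explicitly restored, so it also runs under TensorFlow 2.x; the original report used TF 1.0:

```python
# Hypothetical reconstruction of the modified program described above.
# Uses tf.compat.v1 with ref-variable semantics restored so the TF 1.x
# behavior discussed in this thread can be reproduced under TF 2.x.
import tensorflow as tf

tf1 = tf.compat.v1
tf1.disable_eager_execution()
tf1.disable_resource_variables()  # restore TF 1.x ref-variable semantics

x = tf1.Variable([], dtype=tf.int32, validate_shape=False, trainable=False)
concat = tf.concat([x, [0]], 0)
assign_op = tf1.assign(x, concat, validate_shape=False)

with tf1.control_dependencies([assign_op]):
    # Toggle between the two cases discussed above:
    y = x            # per the thread, prints the stale value
    # y = assign_op  # per the thread, prints the freshly assigned value
    print_op = tf1.Print(y, data=[y], message="y is: ")

with tf1.Session() as sess:
    sess.run(tf1.global_variables_initializer())
    for _ in range(3):
        sess.run(print_op)
```

Fetching `assign_op` directly returns the newly assigned, growing value on each `run` call, which is the behavior the reporter expected.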
This is a subtle corner of the TensorFlow execution model. What does it mean for a value to be "cached"? When you use a `tf.Variable` in an expression, it is implicitly converted to a tensor through a cached "snapshot" op that was created when the variable was defined. In the code, it is that cached snapshot, not a fresh read, that is fed to `tf.Print`, and because the snapshot op was created outside the `tf.control_dependencies` block, the control dependency does not apply to it. However, when you assign a tensor of a different shape to a variable with `validate_shape=False`, TensorFlow allocates a new buffer for the variable, and the cached snapshot can still refer to the old buffer, so evaluating `x` does not observe the assignment.

How can you avoid this? One way is to force an explicit read of the variable's current value with `tf.Variable.read_value()`, which creates a new read op inside the block:

```python
import tensorflow as tf

# I define a "shape-able" Variable
x = tf.Variable([], dtype=tf.int32, validate_shape=False, trainable=False)

# I build a new shape and assign it to x
concat = tf.concat([x, [0]], 0)
assign_op = tf.assign(x, concat, validate_shape=False)

with tf.control_dependencies([assign_op]):
    # I print x after the assignment.
    # Note that the Print call is on "x" and NOT on "assign_op".
    new_x = x.read_value()
    print_op_dep = tf.Print(new_x, data=[new_x], message="print_op_dep:")

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    for i in range(3):
        sess.run(print_op_dep)
```

The output is:
Of course, these semantics are not very intuitive, and there's no way you'd have guessed that from the documentation. @alextp is working on a new version of variables that will have more sensible semantics. I'll let him comment on how things will look in the brave new world.
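For reference, the variable redesign alluded to here landed as resource variables, the default in TensorFlow 2.x, where a read placed after an assignment observes it. A minimal sketch, assuming TensorFlow 2.x with eager execution; `shape=tf.TensorShape(None)` is the 2.x way to declare a variable whose shape may change:

```python
# Minimal sketch of the same pattern with TF 2.x resource variables,
# assuming TensorFlow 2.x eager execution. shape=tf.TensorShape(None)
# declares a variable whose shape may change across assignments.
import tensorflow as tf

x = tf.Variable([], dtype=tf.int32, shape=tf.TensorShape(None),
                trainable=False)

for _ in range(3):
    x.assign(tf.concat([x, [0]], 0))
    print(x.numpy())
# Prints [0], then [0 0], then [0 0 0]: each read observes the
# preceding assignment, with no cached-snapshot surprises.
```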
👍 Also, one last question on my side:
Glad to hear it! As for the performance impact, it's hard to say. At the level of individual `assign` calls, TensorFlow doesn't do much to optimize your code, so you aren't necessarily missing any optimizations. Concatenating and copying as you do in that code snippet has quadratic time complexity, but I'm not sure you'll be doing it in such a tight loop that it matters :). (If you find yourself concatenating dynamic lists of tensors a lot, you might be interested in `tf.TensorArray`.)

It is possible that having variables of varying shape will lead to, e.g., more unknowns in shape inference, which could inhibit some optimizations that are possible when the shape of a tensor is static. But I assume you have a reason for wanting to change the shape of a variable, so some amount of dynamism is probably necessary.
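A short sketch of that suggestion, assuming `tf.TensorArray` is the structure meant here (its name was lost in this transcript) and TensorFlow 2.x eager execution: appending to a `TensorArray` and stacking once at the end avoids re-copying the whole accumulated tensor on every step.

```python
# Accumulating a dynamically sized list of values with tf.TensorArray
# instead of repeated tf.concat; assumes TensorFlow 2.x eager execution.
import tensorflow as tf

ta = tf.TensorArray(tf.int32, size=0, dynamic_size=True)
for i in range(3):
    ta = ta.write(i, 0)   # append one element; no full-buffer copy
result = ta.stack()       # materialize the accumulated values once
print(result.numpy())     # [0 0 0]
```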
All right, thanks for those insights!
Environment info
Operating System: OSX on CPU
TensorFlow 1.0.0
Problem
Hello, I've been trying to use `tf.assign` with a `tf.control_dependencies` scheme when changing the shape on the fly.

Outputs:
I would expect:
Is this a bug?