0.5.0 wheel install on Mac OS X using Homebrew python broken #11

Closed
iamed2 opened this issue Nov 9, 2015 · 21 comments

@iamed2 commented Nov 9, 2015

This is what happens:

~: pip install https://storage.googleapis.com/tensorflow/mac/tensorflow-0.5.0-py2-none-any.whl
Collecting tensorflow==0.5.0 from https://storage.googleapis.com/tensorflow/mac/tensorflow-0.5.0-py2-none-any.whl
  Downloading https://storage.googleapis.com/tensorflow/mac/tensorflow-0.5.0-py2-none-any.whl (9.8MB)
    100% |████████████████████████████████| 9.8MB 51kB/s
Collecting six>=1.10.0 (from tensorflow==0.5.0)
  Downloading six-1.10.0-py2.py3-none-any.whl
Requirement already satisfied (use --upgrade to upgrade): numpy>=1.9.2 in /usr/local/lib/python2.7/site-packages (from tensorflow==0.5.0)
Installing collected packages: six, tensorflow
  Found existing installation: six 1.9.0
    Uninstalling six-1.9.0:
      Successfully uninstalled six-1.9.0
Successfully installed six-1.10.0 tensorflow-0.5.0
~: python
Python 2.7.10 (default, Jul 13 2015, 12:05:58)
[GCC 4.2.1 Compatible Apple LLVM 6.1.0 (clang-602.0.53)] on darwin
Type "help", "copyright", "credits" or "license" for more information.
>>> import tensorflow as tf
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/usr/local/lib/python2.7/site-packages/tensorflow/__init__.py", line 4, in <module>
    from tensorflow.python import *
  File "/usr/local/lib/python2.7/site-packages/tensorflow/python/__init__.py", line 13, in <module>
    from tensorflow.core.framework.graph_pb2 import *
  File "/usr/local/lib/python2.7/site-packages/tensorflow/core/framework/graph_pb2.py", line 16, in <module>
    from tensorflow.core.framework import attr_value_pb2 as tensorflow_dot_core_dot_framework_dot_attr__value__pb2
  File "/usr/local/lib/python2.7/site-packages/tensorflow/core/framework/attr_value_pb2.py", line 16, in <module>
    from tensorflow.core.framework import tensor_pb2 as tensorflow_dot_core_dot_framework_dot_tensor__pb2
  File "/usr/local/lib/python2.7/site-packages/tensorflow/core/framework/tensor_pb2.py", line 16, in <module>
    from tensorflow.core.framework import tensor_shape_pb2 as tensorflow_dot_core_dot_framework_dot_tensor__shape__pb2
  File "/usr/local/lib/python2.7/site-packages/tensorflow/core/framework/tensor_shape_pb2.py", line 22, in <module>
    serialized_pb=_b('\n,tensorflow/core/framework/tensor_shape.proto\x12\ntensorflow\"d\n\x10TensorShapeProto\x12-\n\x03\x64im\x18\x02 \x03(\x0b\x32 .tensorflow.TensorShapeProto.Dim\x1a!\n\x03\x44im\x12\x0c\n\x04size\x18\x01 \x01(\x03\x12\x0c\n\x04name\x18\x02 \x01(\tb\x06proto3')
TypeError: __init__() got an unexpected keyword argument 'syntax'
@vrv commented Nov 9, 2015

Okay, I was able to reproduce this in a virtualenv by installing protobuf==2.6.1.

The short answer is that we depend on protobuf 3.0.0, and having an older protobuf pip package installed seems to interfere with ours.

Two solutions:

  1. pip install protobuf==3.0.0a1 or higher, then pip install tensorflow
  2. pip uninstall protobuf first, and then install tensorflow again -- it should bring in the dependency

I was able to do both in a virtualenv and they worked -- let me know if either suffices for you. If so, we might add protobuf >= 3 to our whl dependencies.
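For readers following along, a minimal pre-flight check along these lines (not part of the original comment; it assumes pkg_resources from setuptools is importable, as the pip listings elsewhere in this thread suggest) shows which protobuf release pip currently has installed:

```
from __future__ import print_function
import pkg_resources  # provided by setuptools

try:
    dist = pkg_resources.get_distribution("protobuf")
    print("protobuf %s installed at %s" % (dist.version, dist.location))
    if dist.version.startswith("2."):
        print("protobuf 2.x found -- upgrade to >=3.0.0a1 or uninstall it "
              "before installing the tensorflow wheel")
except pkg_resources.DistributionNotFound:
    print("no protobuf pip package installed")
```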

@flyingmutant

TensorFlow does not bring in a dependency on protobuf:

gp@MacBook-Pro-Gregory ~> pip list
mercurial (3.5.2)
pip (7.1.2)
setuptools (18.5)
vboxapi (1.0)
wheel (0.26.0)

gp@MacBook-Pro-Gregory ~> pip install --no-cache-dir https://storage.googleapis.com/tensorflow/mac/tensorflow-0.5.0-py2-none-any.whl
Collecting tensorflow==0.5.0 from https://storage.googleapis.com/tensorflow/mac/tensorflow-0.5.0-py2-none-any.whl
  Downloading https://storage.googleapis.com/tensorflow/mac/tensorflow-0.5.0-py2-none-any.whl (9.8MB)
    100% |████████████████████████████████| 9.8MB 13.7MB/s
Collecting six>=1.10.0 (from tensorflow==0.5.0)
  Downloading six-1.10.0-py2.py3-none-any.whl
Collecting numpy>=1.9.2 (from tensorflow==0.5.0)
  Downloading numpy-1.10.1-cp27-none-macosx_10_6_intel.macosx_10_9_intel.macosx_10_9_x86_64.macosx_10_10_intel.macosx_10_10_x86_64.whl (3.7MB)
    100% |████████████████████████████████| 3.7MB 9.5MB/s
Installing collected packages: six, numpy, tensorflow
Successfully installed numpy-1.10.1 six-1.10.0 tensorflow-0.5.0

gp@MacBook-Pro-Gregory ~> pip list
mercurial (3.5.2)
numpy (1.10.1)
pip (7.1.2)
setuptools (18.5)
six (1.10.0)
tensorflow (0.5.0)
vboxapi (1.0)
wheel (0.26.0)

Installing protobuf 3 beforehand does not help, either:

gp@MacBook-Pro-Gregory ~> pip list
mercurial (3.5.2)
pip (7.1.2)
setuptools (18.5)
vboxapi (1.0)
wheel (0.26.0)

gp@MacBook-Pro-Gregory ~> pip install protobuf==3.0.0a1
Collecting protobuf==3.0.0a1
Requirement already satisfied (use --upgrade to upgrade): setuptools in /usr/local/lib/python2.7/site-packages (from protobuf==3.0.0a1)
Installing collected packages: protobuf
Successfully installed protobuf-3.0.0a1

gp@MacBook-Pro-Gregory ~> pip install --no-cache-dir https://storage.googleapis.com/tensorflow/mac/tensorflow-0.5.0-py2-none-any.whl
Collecting tensorflow==0.5.0 from https://storage.googleapis.com/tensorflow/mac/tensorflow-0.5.0-py2-none-any.whl
  Downloading https://storage.googleapis.com/tensorflow/mac/tensorflow-0.5.0-py2-none-any.whl (9.8MB)
    100% |████████████████████████████████| 9.8MB 65.8MB/s
Collecting six>=1.10.0 (from tensorflow==0.5.0)
  Downloading six-1.10.0-py2.py3-none-any.whl
Collecting numpy>=1.9.2 (from tensorflow==0.5.0)
  Downloading numpy-1.10.1-cp27-none-macosx_10_6_intel.macosx_10_9_intel.macosx_10_9_x86_64.macosx_10_10_intel.macosx_10_10_x86_64.whl (3.7MB)
    100% |████████████████████████████████| 3.7MB 3.4MB/s
Installing collected packages: six, numpy, tensorflow
Successfully installed numpy-1.10.1 six-1.10.0 tensorflow-0.5.0

gp@MacBook-Pro-Gregory ~> python
Python 2.7.10 (default, Nov  9 2015, 20:28:23)
[GCC 4.2.1 Compatible Apple LLVM 7.0.0 (clang-700.1.76)] on darwin
Type "help", "copyright", "credits" or "license" for more information.
>>> import tensorflow as tf
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/usr/local/lib/python2.7/site-packages/tensorflow/__init__.py", line 4, in <module>
    from tensorflow.python import *
  File "/usr/local/lib/python2.7/site-packages/tensorflow/python/__init__.py", line 13, in <module>
    from tensorflow.core.framework.graph_pb2 import *
  File "/usr/local/lib/python2.7/site-packages/tensorflow/core/framework/graph_pb2.py", line 16, in <module>
    from tensorflow.core.framework import attr_value_pb2 as tensorflow_dot_core_dot_framework_dot_attr__value__pb2
  File "/usr/local/lib/python2.7/site-packages/tensorflow/core/framework/attr_value_pb2.py", line 16, in <module>
    from tensorflow.core.framework import tensor_pb2 as tensorflow_dot_core_dot_framework_dot_tensor__pb2
  File "/usr/local/lib/python2.7/site-packages/tensorflow/core/framework/tensor_pb2.py", line 16, in <module>
    from tensorflow.core.framework import tensor_shape_pb2 as tensorflow_dot_core_dot_framework_dot_tensor__shape__pb2
  File "/usr/local/lib/python2.7/site-packages/tensorflow/core/framework/tensor_shape_pb2.py", line 22, in <module>
    serialized_pb=_b('\n,tensorflow/core/framework/tensor_shape.proto\x12\ntensorflow\"d\n\x10TensorShapeProto\x12-\n\x03\x64im\x18\x02 \x03(\x0b\x32 .tensorflow.TensorShapeProto.Dim\x1a!\n\x03\x44im\x12\x0c\n\x04size\x18\x01 \x01(\x03\x12\x0c\n\x04name\x18\x02 \x01(\tb\x06proto3')
TypeError: __init__() got an unexpected keyword argument 'syntax'

Unless I am missing something, neither of the solutions works for me. Latest OS X, with python2 + python3 installed from Homebrew.

@alejandro-isaza

This fixed the problem for me:

pip uninstall protobuf
pip uninstall tensorflow
brew uninstall protobuf
pip install https://storage.googleapis.com/tensorflow/mac/tensorflow-0.5.0-py2-none-any.whl

@vrv commented Nov 9, 2015

@flyingmutant: to satisfy my curiosity, can you let us know what you get when you run the following in Python:

>>> import google.protobuf
>>> print google.protobuf.__version__
3.0.0a4

(it's possible google.protobuf.__version__ doesn't exist in one of the environments I've tested).

Basically, the symptom of the problem is that Python is finding an older version of the protobuf library that doesn't support the generated proto3 Python code we require.
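To make that symptom concrete, a small diagnostic along these lines (a sketch, not from the thread) shows which copy of google.protobuf the interpreter is resolving; a path under an old system or Homebrew site-packages is the tell-tale sign:

```
from __future__ import print_function
import google.protobuf

# Which copy of protobuf is the interpreter actually picking up?
print("loaded from:", google.protobuf.__file__)
# Guarded, because (as noted above) some 3.0.0 alphas do not define __version__.
print("version:", getattr(google.protobuf, "__version__", "unknown"))
```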

@VikramTiwari

Found the same issue on my end.

pip install https://storage.googleapis.com/tensorflow/mac/tensorflow-0.5.0-py2-none-any.whl
Collecting tensorflow==0.5.0 from https://storage.googleapis.com/tensorflow/mac/tensorflow-0.5.0-py2-none-any.whl
  Using cached https://storage.googleapis.com/tensorflow/mac/tensorflow-0.5.0-py2-none-any.whl
Collecting six>=1.10.0 (from tensorflow==0.5.0)
  Using cached six-1.10.0-py2.py3-none-any.whl
Collecting numpy>=1.9.2 (from tensorflow==0.5.0)
  Using cached numpy-1.10.1-cp27-none-macosx_10_6_intel.macosx_10_9_intel.macosx_10_9_x86_64.macosx_10_10_intel.macosx_10_10_x86_64.whl
Installing collected packages: six, numpy, tensorflow
  Found existing installation: six 1.4.1
    DEPRECATION: Uninstalling a distutils installed project (six) has been deprecated and will be removed in a future version. This is due to the fact that uninstalling a distutils project will only partially uninstall the project.
    Uninstalling six-1.4.1:
Exception:
Traceback (most recent call last):
  File "/Library/Python/2.7/site-packages/pip-7.1.2-py2.7.egg/pip/basecommand.py", line 211, in main
    status = self.run(options, args)
  File "/Library/Python/2.7/site-packages/pip-7.1.2-py2.7.egg/pip/commands/install.py", line 311, in run
    root=options.root_path,
  File "/Library/Python/2.7/site-packages/pip-7.1.2-py2.7.egg/pip/req/req_set.py", line 640, in install
    requirement.uninstall(auto_confirm=True)
  File "/Library/Python/2.7/site-packages/pip-7.1.2-py2.7.egg/pip/req/req_install.py", line 716, in uninstall
    paths_to_remove.remove(auto_confirm)
  File "/Library/Python/2.7/site-packages/pip-7.1.2-py2.7.egg/pip/req/req_uninstall.py", line 125, in remove
    renames(path, new_path)
  File "/Library/Python/2.7/site-packages/pip-7.1.2-py2.7.egg/pip/utils/__init__.py", line 315, in renames
    shutil.move(old, new)
  File "/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/shutil.py", line 302, in move
    copy2(src, real_dst)
  File "/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/shutil.py", line 131, in copy2
    copystat(src, dst)
  File "/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/shutil.py", line 103, in copystat
    os.chflags(dst, st.st_flags)
OSError: [Errno 1] Operation not permitted: '/var/folders/_f/95xpdnrx4yncxz91dt53mvfh0000gn/T/pip-ZXxpkX-uninstall/System/Library/Frameworks/Python.framework/Versions/2.7/Extras/lib/python/six-1.4.1-py2.7.egg-info'

This issue (pypa/pip#3165) has more insight into the same problem. According to it, the issue is that the six library comes preinstalled on El Capitan systems.

Moreover, there seems to be no solution at the moment other than installing Python locally using Homebrew rather than using the package pre-installed by Apple.
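For anyone unsure whether they are hitting this, a quick check along these lines (a sketch, not from the thread) shows where six is being loaded from; a path under /System/Library/ is Apple's bundled copy:

```
from __future__ import print_function
import six

print("six %s loaded from %s" % (six.__version__, six.__file__))
if "/System/Library/" in six.__file__:
    # Apple's bundled copy -- pip on El Capitan cannot remove or replace it,
    # so work inside a virtualenv or a Homebrew/python.org Python instead.
    print("this is the OS-bundled six; avoid touching the system install")
```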

@flyingmutant

@vrv I've got AttributeError: 'module' object has no attribute '__version__' for both 3.0.0a1 and 3.0.0a3 (which is the latest you can install with pip).

I've found what fixes the issue for me:

$ brew uninstall protobuf
$ pip uninstall protobuf
$ pip install 'protobuf>=3.0.0a3'
$ pip install https://storage.googleapis.com/tensorflow/mac/tensorflow-0.5.0-py2-none-any.whl

It appears that Homebrew's protobuf package was causing the issue. Removing it and then installing protobuf 3 with pip does the trick.
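As a rough way to confirm that diagnosis (a sketch that assumes the stale copy lives in a separate sys.path entry), you can list every location on sys.path that provides a google/protobuf package; more than one hit, or a single hit in an unexpected location, points at an old copy shadowing the pip-installed protobuf 3:

```
from __future__ import print_function
import os
import sys

# List every sys.path entry that contains a google/protobuf package directory.
hits = [p for p in sys.path
        if p and os.path.isdir(os.path.join(p, "google", "protobuf"))]
for path in hits:
    print("google.protobuf available under:", path)
if len(hits) > 1:
    print("multiple copies found -- the first one on sys.path wins at import time")
```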

@dvj commented Nov 9, 2015

For the protobuf issue above on OS X, not only should you pip uninstall protobuf to get rid of protobuf 2.x, but you should also brew uninstall protobuf if you are using Homebrew.

@VikramTiwari: that is a different issue, already addressed in the "common problems" section of the 'Get Started' page.

@VikramTiwari

@dvj Yup! Got it just now. Instructions by @flyingmutant did the trick.

@vrv commented Nov 9, 2015

Thanks for the help debugging, everyone! We'll try to update our 'common problems' section soon to include these suggestions.

@Yangqing commented Nov 9, 2015

By the way, if you would like to use Homebrew, you can install protobuf 3 via:

brew install --devel protobuf

For more details please refer to grpc/grpc-experiments#162

@iamed2 commented Nov 9, 2015

I was able to solve this by doing brew reinstall --devel protobuf, which installs 3.0.0a1. Feel free to close the issue whenever you update the docs.

@liuyipei

I have the same problem, but my protobuf is apparently up-to-date?

Here is the output of pip show numpy protobuf:

Name: numpy
Version: 1.10.1
Location: /mnt/4tb_internal/python/tensorflow/lib/python2.7/site-packages
Requires: 

Name: protobuf
Version: 3.0.0a3
Location: /mnt/4tb_internal/python/tensorflow/lib/python2.7/site-packages
Requires: setuptools

Here is the output of uname -a. My machine runs Ubuntu 14.04 LTS.

Linux yi-2014 3.13.0-64-generic #104-Ubuntu SMP Wed Sep 9 12:36:12 UTC 2015 x86_64 x86_64 x86_64 GNU/Linux

Here is the full output:

>>> import tensorflow
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/mnt/4tb_internal/python/tensorflow/local/lib/python2.7/site-packages/tensorflow/__init__.py", line 4, in <module>
    from tensorflow.python import *
  File "/mnt/4tb_internal/python/tensorflow/local/lib/python2.7/site-packages/tensorflow/python/__init__.py", line 13, in <module>
    from tensorflow.core.framework.graph_pb2 import *
  File "/mnt/4tb_internal/python/tensorflow/local/lib/python2.7/site-packages/tensorflow/core/framework/graph_pb2.py", line 16, in <module>
    from tensorflow.core.framework import attr_value_pb2 as tensorflow_dot_core_dot_framework_dot_attr__value__pb2
  File "/mnt/4tb_internal/python/tensorflow/local/lib/python2.7/site-packages/tensorflow/core/framework/attr_value_pb2.py", line 16, in <module>
    from tensorflow.core.framework import tensor_pb2 as tensorflow_dot_core_dot_framework_dot_tensor__pb2
  File "/mnt/4tb_internal/python/tensorflow/local/lib/python2.7/site-packages/tensorflow/core/framework/tensor_pb2.py", line 16, in <module>
    from tensorflow.core.framework import tensor_shape_pb2 as tensorflow_dot_core_dot_framework_dot_tensor__shape__pb2
  File "/mnt/4tb_internal/python/tensorflow/local/lib/python2.7/site-packages/tensorflow/core/framework/tensor_shape_pb2.py", line 22, in <module>
    serialized_pb=_b('\n,tensorflow/core/framework/tensor_shape.proto\x12\ntensorflow\"d\n\x10TensorShapeProto\x12-\n\x03\x64im\x18\x02 \x03(\x0b\x32 .tensorflow.TensorShapeProto.Dim\x1a!\n\x03\x44im\x12\x0c\n\x04size\x18\x01 \x01(\x03\x12\x0c\n\x04name\x18\x02 \x01(\tb\x06proto3')
TypeError: __init__() got an unexpected keyword argument 'syntax'
>>> 
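Since pip reports protobuf 3.0.0a3 inside the virtualenv, one thing worth verifying (a hedged suggestion, not a confirmed diagnosis of this machine) is whether the interpreter that raises the error is really importing that copy rather than an older one visible through the system dist-packages:

```
from __future__ import print_function
import sys
import google.protobuf

print("interpreter:", sys.executable)
print("protobuf at:", google.protobuf.__file__)
# Classic virtualenv on Python 2 sets sys.real_prefix inside an environment.
print("inside virtualenv:", hasattr(sys, "real_prefix"))
if "/dist-packages/" in google.protobuf.__file__:
    print("protobuf is being imported from the system dist-packages, not the venv")
```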

@liuyipei

My thread got closed due to being similar to this one, but I am not on a Mac! I would very much appreciate it if we could leave this one open until it gets resolved!

@vrv commented Nov 10, 2015

Whoops, sorry @liuyipei! Will re-open the original one. It's the same issue, just on a different platform.

@vrv commented Nov 13, 2015

We've updated our docs with what we believe can help here -- closing this bug for now, but we can re-open it if there are other related issues left unaddressed.

vrv closed this as completed Nov 13, 2015
@brutuselvis

I have a similar issue on my Mac running the eval.lua command (th eval.lua -model ../model_id1-501-1448236541.t7 -image_folder ../img -num_images 1) on a small folder of ten images.

I am in the neuraltalk2 folder, but nonetheless I get:

/Users/localadmin/torch/install/bin/luajit: ...rs/localadmin/torch/install/share/lua/5.1/trepl/init.lua:383: module 'misc.utils' not found:No LuaRocks module found for misc.utils
no field package.preload['misc.utils']
no file '/Users/localadmin/.luarocks/share/lua/5.1/misc/utils.lua'
no file '/Users/localadmin/.luarocks/share/lua/5.1/misc/utils/init.lua'
no file '/Users/localadmin/torch/install/share/lua/5.1/misc/utils.lua'
no file '/Users/localadmin/torch/install/share/lua/5.1/misc/utils/init.lua'
no file './misc/utils.lua'
no file '/Users/localadmin/torch/install/share/luajit-2.1.0-beta1/misc/utils.lua'
no file '/usr/local/share/lua/5.1/misc/utils.lua'
no file '/usr/local/share/lua/5.1/misc/utils/init.lua'
no file '/Users/localadmin/.luarocks/lib/lua/5.1/misc/utils.so'
no file '/Users/localadmin/torch/install/lib/lua/5.1/misc/utils.so'
no file './misc/utils.so'
no file '/usr/local/lib/lua/5.1/misc/utils.so'
no file '/usr/local/lib/lua/5.1/loadall.so'
no file '/Users/localadmin/.luarocks/lib/lua/5.1/misc.so'
no file '/Users/localadmin/torch/install/lib/lua/5.1/misc.so'
no file './misc.so'
no file '/usr/local/lib/lua/5.1/misc.so'
no file '/usr/local/lib/lua/5.1/loadall.so'
stack traceback:
[C]: in function 'error'
...rs/localadmin/torch/install/share/lua/5.1/trepl/init.lua:383: in function 'require'
eval.lua:7: in main chunk
[C]: in function 'dofile'
...dmin/torch/install/lib/luarocks/rocks/trepl/scm-1/bin/th:145: in main chunk
[C]: at 0x010f243bc0

Any suggestions as to what I'm doing wrong?

I retraced all the steps (except pointing the library path to cuDNN -- I did not get that part).

@yoni commented Dec 11, 2015

I got the same syntax keyword error with both 0.5 and 0.6 versions of tensorflow.

I'm using a virtualenv setup. Oddly, I got the error within ipython sessions, but didn't get the error when running python. My solution was to remove ipython from the system install and install it in the virtualenv instead. Hope this helps folks solve their install issues.

Relevant IPython warning that pointed me to this solution:

$ ipython
WARNING: Attempting to work in a virtualenv. If you encounter problems, please install IPython inside the virtualenv.
...
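A quick way to check for that mismatch (a sketch assuming the classic virtualenv tool, which sets sys.real_prefix on Python 2) is to compare the interpreter you are running with the ipython executable the shell would pick up:

```
from __future__ import print_function
import sys
from distutils.spawn import find_executable

print("python interpreter:", sys.executable)
print("ipython on PATH:", find_executable("ipython"))
print("inside virtualenv:", hasattr(sys, "real_prefix"))
# If ipython resolves to a path outside the virtualenv while sys.executable
# lives inside it, the system IPython may pull in packages from the system
# Python, as the warning above hints.
```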

@kibtes commented Dec 18, 2015

I got the same 'syntax' keyword error on the GPU 0.5 version installed using Python 2. I'm using a virtualenv setup. The other thing I am not understanding is that when I ran the pip show numpy protobuf command, the result was as follows:

Name: numpy
Version: 1.8.2
Location: /usr/lib/python2.7/dist-packages

Requires:

Name: protobuf
Version: 2.5.0
Location: /usr/lib/python2.7/dist-packages
Requires:
whereas when importing google.protobuf and printing google.protobuf.__version__, the result was as follows:
(tensorflow)root@dmlserver:/home/rzibello/Documents# python
Python 2.7.6 (default, Jun 22 2015, 17:58:13)
[GCC 4.8.2] on linux2
Type "help", "copyright", "credits" or "license" for more information.

>>> import google.protobuf
>>> print google.protobuf.__version__
3.0.0-alpha-1

The OS I am using is Ubuntu Linux 14.04. Can somebody please explain the difference between the two and what I could do to get rid of the syntax error as well?
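One possible explanation (an assumption, not something confirmed in this thread) is that the pip on PATH belongs to the system Python under /usr/lib, while the python inside the (tensorflow) virtualenv has its own site-packages with the newer protobuf, so the two commands are inspecting different installations. A quick check:

```
from __future__ import print_function
import sys
from distutils.spawn import find_executable

# If these point at different Python installations, `pip show` and
# `import google.protobuf` are looking at different site-packages directories.
print("python:", sys.executable)
print("pip:", find_executable("pip"))
```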

@timestocome

This is how I installed tensorflow on OS X 10.11.3:

bash-3.2$ su
Password:
sh-3.2# whoami
root
sh-3.2# python --version
Python 2.7.11 :: Anaconda 2.4.1 (x86_64)
sh-3.2# python -m pip install https://storage.googleapis.com/tensorflow/mac/tensorflow-0.5.0-py2-none-any.whl
Collecting tensorflow==0.5.0 from https://storage.googleapis.com/tensorflow/mac/tensorflow-0.5.0-py2-none-any.whl
Downloading https://storage.googleapis.com/tensorflow/mac/tensorflow-0.5.0-py2-none-any.whl (9.8MB)
100% |████████████████████████████████| 9.8MB 35kB/s
Requirement already satisfied (use --upgrade to upgrade): six>=1.10.0 in ./anaconda2/lib/python2.7/site-packages (from tensorflow==0.5.0)
Requirement already satisfied (use --upgrade to upgrade): numpy>=1.9.2 in ./anaconda2/lib/python2.7/site-packages (from tensorflow==0.5.0)
Installing collected packages: tensorflow
Successfully installed tensorflow-0.5.0

acharal pushed a commit to acharal/tensorflow that referenced this issue May 20, 2019
pooyadavoodi pushed a commit to pooyadavoodi/tensorflow that referenced this issue Oct 16, 2019
Add use_explicit_batch parameter available in OpConverterParams and other places

Formatting and make const bool everywhere

Enable use_explicit_batch for TRT 6.0

Revise validation checks to account for use_explicit_batch. Propagate flag to ConversionParams and TRTEngineOp

Rename use_explicit_batch/use_implicit_batch

Formatting

Add simple activation test for testing dynamic input shapes. Second test with None dims is disabled

Update ConvertAxis to account for use_implicit batch

fix use of use_implicit_batch (tensorflow#7)

* fix use of use_implicit_batch

* change order of parameters in ConvertAxis function

fix build (tensorflow#8)

Update converters for ResNet50 (except Binary ops) (tensorflow#9)

* Update RN50 converters for use_implicit_batch: Conv2D, BiasAdd, Transpose, MaxPool, Squeeze, MatMul, Pad

* Fix compilation errors

* Fix tests

Use TRT6 API's for dynamic shape (tensorflow#11)

* adding changes for addnetworkv2

* add plugin utils header file in build

* optimization profile api added

* fix optimization profile

* TRT 6.0 api changes + clang format

* Return valid errors in trt_engine_op

* add/fix comments

* Changes to make sure activation test passes with TRT trunk

* use HasStaticShape API, add new line at EOF

Allow opt profiles to be set via env variables temporarily.

Undo accidental change

 fix segfault by properly returning the status from OverwriteStaticDims function

Update GetTrtBroadcastShapes for use_implicit_batch (tensorflow#14)

* Update GetTrtBroadcastShapes for use_implicit_batch

* Formatting

Update activation test

Fix merge errors

Update converter for reshape (tensorflow#17)

Allow INT32 for elementwise (tensorflow#18)

Add Shape op (tensorflow#19)

* Add Shape op

* Add #if guards for Shape. Fix formatting

Support dynamic shapes for strided slice (tensorflow#20)

Support dynamic shapes for strided slice

Support const scalars + Pack on constants (tensorflow#21)

Support const scalars and pack with constants in TRT6

Fixes/improvements for BERT (tensorflow#22)

* Support shrink_axis_mask for StridedSlice

* Use a pointer for final_shape arg in ConvertStridedSliceHelper. Use final_shape for unpack/unstack

* Support BatchMatMulV2.

* Remove TODO and update comments

* Remove unused include

* Update Gather for TRT 6

* Update BatchMatMul for TRT6 - may need more changes

* Update StridedSlice shrink_axis for TRT6

* Fix bugs with ConvertAxis, StridedSlice shrink_axis, Gather

* Fix FC and broadcast

* Compile issue and matmul fix

* Use nullptr for empty weights

* Update Slice

* Fix matmul for TRT6

* Use enqueueV2. Don't limit to 1 input per engine

Change INetworkConfig to IBuilderConfig

Allow expand dims to work on dynamic inputs by slicing shape. Catch problems with DepthwiseConv. Don't try to verify dynamic shapes in CheckValidSize (tensorflow#24)

Update CombinedNMS converter (tensorflow#23)

* Support CombinedNMS in non implicit batch mode. The squeeze will not work if multiple dimensions are unknown

* Fix compile error and formatting

Support squeeze when input dims are unknown

Support an additional case of StridedSlice where some dims aren't known

Use new API for createNetworkV2

Fix flag type for createNetworkV2

Use tensor inputs for strided slice

Allow squeeze to work on -1 dims

Add TRT6 checks to new API

Splitting ConvertGraphDefToEngine (tensorflow#29)

* splitting ConvertGraphDefToEngine into ConvertGraphDefToNetwork and BuildEngineFromNetwork

* some compiler error

* fix format

Squeeze Helper function (tensorflow#31)

* Add squeeze helper

* Fix compile issues

* Use squeeze helper for CombinedNMS

Update Split & Unpack for dynamic shapes (tensorflow#32)

* Update Unpack for dynamic shapes

* Fix compilation error

Temporary hack to fix bug in config while finding TRT library

Fix errors from rebasing

Remove GatherV2 limitations for TRT6

Fix BiasAdd elementwise for NCHW case with explicit batch mode (tensorflow#34)

Update TRT6 headers, Make tests compile (tensorflow#35)

* Change header files for TRT6 in configure script

* Fix bug with size of scalars. Use implicit batch mode based on the converter flag when creating network

* Fix compilation of tests and Broadcast tests

Properly fix biasadd nchw (tensorflow#36)

Revert tensorflow#29 to fix weight corruption (tensorflow#37)

* Revert tensorflow#29 to fix weight corruption

* Revert change in test

Fix bug with converters and get all tests passing for TRT6 (tensorflow#39)

Update DepthToSpace and SpaceToTest for TRT6 + dynamic shapes (tensorflow#40)

Add new C++ tests for TRT6 converters (tensorflow#41)

* Remove third shuffle layer since bug with transpose was fixed

* Add new tests for TRT6 features

* Update TRT6 headers list

Fix compilation errors

Remove bazel_build.sh

Enable quantization mnist test back

Disabled by mistake I believe

Remove undesirable changes in quantization_mnist_test

Add code back that was missed during rebase

Fix bug: change "type" to type_key
andrewstevens-infineon pushed a commit to tum-ei-eda/tensorflow that referenced this issue Oct 13, 2020
…eature/AIFORDES-303-performance-comparison to ifx/develop

* commit '1acf4b08d84e46a71918bb601a855f9138ccf223':
  Added comments in the MNIST test
  Unify packing function throughout all supported kernels
  Added tf models for MNIST performance tests
  Added timing test for MNIST example network
  WIP commit
  WIP: End to end example using MNIST - contains errors
  Fixed bug in packing test function. MNIST test model generation now working in python. Needs to be included in C++ Tests.
  Added tests for conv and depthwise conv that compare invoke time for packed and not packed kernel implementations
  Removed logging from the convolutional packed kernel that logged the number of bits used
  Fixed old performance evaluations to work with -Werror flag and not throw errors anymore
keithm-xmos referenced this issue in xmos/tensorflow Feb 1, 2021
jakobkruse1 pushed a commit to jakobkruse1/tensorflow that referenced this issue Mar 2, 2021
copybara-service bot pushed a commit that referenced this issue Dec 23, 2021
On some CI nodes (typically those with higher CPU core counts, 128/256), the `//tensorflow/c/eager:c_api_distributed_test_gpu` test fails on an intermittent basis.

When it does fail, the failure manifests as a segfault at the end of the test, with the stack dump shown at the end of this commit message. The stack dump points the finger at a routine within the MKLDNN implementation. This is further confirmed by the observation that disabling the MKLDNN-based Eigen contraction kernels (for ROCm) seems to make the crash go away.

Related JIRA ticket - https://ontrack-internal.amd.com/browse/SWDEV-313684

A previous commit disabled the `//tensorflow/c/eager:c_api_distributed_test` unit-test only in the CPU unit-tests CI job (for the same reason). That commit cannot be reverted, because this commit disables MKLDNN-based Eigen contraction kernels *only* for the ROCm build.

```
Thread 191 "c_api_distribut" received signal SIGSEGV, Segmentation fault.
[Switching to thread 191 (Thread 0x7ffc777fe700 (LWP 159004))]
0x00007fff54530000 in ?? ()
(gdb) where
#0  0x00007fff54530000 in ?? ()
#1  0x00007fffd5d15ae4 in dnnl::impl::cpu::x64::avx_gemm_f32::sgemm_nocopy_driver(char const*, char const*, long, long, long, float const*, float const*, long, float const*, long, float const*, float*, long, float const*, float*) ()
   from /root/.cache/bazel/_bazel_root/efb88f6336d9c4a18216fb94287b8d97/execroot/org_tensorflow/bazel-out/k8-opt/bin/tensorflow/c/eager/../../../_solib_local/libexternal_Smkl_Udnn_Uv1_Slibmkl_Udnn.so
#2  0x00007fffd5d166e1 in dnnl::impl::cpu::x64::jit_avx_gemm_f32(int, char const*, char const*, long const*, long const*, long const*, float const*, float const*, long const*, float const*, long const*, float const*, float*, long const*, float const*) ()
   from /root/.cache/bazel/_bazel_root/efb88f6336d9c4a18216fb94287b8d97/execroot/org_tensorflow/bazel-out/k8-opt/bin/tensorflow/c/eager/../../../_solib_local/libexternal_Smkl_Udnn_Uv1_Slibmkl_Udnn.so
#3  0x00007fffd5e277ed in dnnl_status_t dnnl::impl::cpu::x64::gemm_driver<float, float, float>(char const*, char const*, char const*, long const*, long const*, long const*, float const*, float const*, long const*, float const*, float const*, long const*, float const*, float const*, float*, long const*, float const*, bool, dnnl::impl::cpu::x64::pack_type, dnnl::impl::cpu::x64::gemm_pack_storage_t*, bool) ()
   from /root/.cache/bazel/_bazel_root/efb88f6336d9c4a18216fb94287b8d97/execroot/org_tensorflow/bazel-out/k8-opt/bin/tensorflow/c/eager/../../../_solib_local/libexternal_Smkl_Udnn_Uv1_Slibmkl_Udnn.so
#4  0x00007fffd5665056 in dnnl::impl::cpu::extended_sgemm(char const*, char const*, long const*, long const*, long const*, float const*, float const*, long const*, float const*, long const*, float const*, float*, long const*, float const*, bool) ()
   from /root/.cache/bazel/_bazel_root/efb88f6336d9c4a18216fb94287b8d97/execroot/org_tensorflow/bazel-out/k8-opt/bin/tensorflow/c/eager/../../../_solib_local/libexternal_Smkl_Udnn_Uv1_Slibmkl_Udnn.so
#5  0x00007fffd52fe983 in dnnl_sgemm ()
   from /root/.cache/bazel/_bazel_root/efb88f6336d9c4a18216fb94287b8d97/execroot/org_tensorflow/bazel-out/k8-opt/bin/tensorflow/c/eager/../../../_solib_local/libexternal_Smkl_Udnn_Uv1_Slibmkl_Udnn.so
#6  0x0000555557187b0b in Eigen::internal::TensorContractionKernel<float, float, float, long, Eigen::internal::blas_data_mapper<float, long, 0, 0, 1>, Eigen::internal::TensorContractionInputMapper<float, long, 1, Eigen::TensorEvaluator<Eigen::TensorMap<Eigen::Tensor<float const, 2, 1, long>, 16, Eigen::MakePointer> const, Eigen::ThreadPoolDevice>, Eigen::array<long, 1ul>, Eigen::array<long, 1ul>, 4, true, false, 0, Eigen::MakePointer>, Eigen::internal::TensorContractionInputMapper<float, long, 0, Eigen::TensorEvaluator<Eigen::TensorMap<Eigen::Tensor<float const, 2, 1, long>, 16, Eigen::MakePointer> const, Eigen::ThreadPoolDevice>, Eigen::array<long, 1ul>, Eigen::array<long, 1ul>, 4, true, false, 0, Eigen::MakePointer> >::invoke(Eigen::internal::blas_data_mapper<float, long, 0, 0, 1> const&, Eigen::internal::ColMajorBlock<float, long> const&, Eigen::internal::ColMajorBlock<float, long> const&, long, long, long, float, float) ()
#7  0x000055555718dc76 in Eigen::TensorEvaluator<Eigen::TensorContractionOp<Eigen::array<Eigen::IndexPair<long>, 1ul> const, Eigen::TensorMap<Eigen::Tensor<float const, 2, 1, long>, 16, Eigen::MakePointer> const, Eigen::TensorMap<Eigen::Tensor<float const, 2, 1, long>, 16, Eigen::MakePointer> const, Eigen::NoOpOutputKernel const> const, Eigen::ThreadPoolDevice>::EvalParallelContext<Eigen::TensorEvaluator<Eigen::TensorContractionOp<Eigen::array<Eigen::IndexPair<long>, 1ul> const, Eigen::TensorMap<Eigen::Tensor<float const, 2, 1, long>, 16, Eigen::MakePointer> const, Eigen::TensorMap<Eigen::Tensor<float const, 2, 1, long>, 16, Eigen::MakePointer> const, Eigen::NoOpOutputKernel const> const, Eigen::ThreadPoolDevice>::NoCallback, true, true, false, 0>::kernel(long, long, long, bool) ()
#8  0x000055555718f327 in Eigen::TensorEvaluator<Eigen::TensorContractionOp<Eigen::array<Eigen::IndexPair<long>, 1ul> const, Eigen::TensorMap<Eigen::Tensor<float const, 2, 1, long>, 16, Eigen::MakePointer> const, Eigen::TensorMap<Eigen::Tensor<float const, 2, 1, long>, 16, Eigen::MakePointer> const, Eigen::NoOpOutputKernel const> const, Eigen::ThreadPoolDevice>::EvalParallelContext<Eigen::TensorEvaluator<Eigen::TensorContractionOp<Eigen::array<Eigen::IndexPair<long>, 1ul> const, Eigen::TensorMap<Eigen::Tensor<float const, 2, 1, long>, 16, Eigen::MakePointer> const, Eigen::TensorMap<Eigen::Tensor<float const, 2, 1, long>, 16, Eigen::MakePointer> const, Eigen::NoOpOutputKernel const> const, Eigen::ThreadPoolDevice>::NoCallback, true, true, false, 0>::signal_kernel(long, long, long, bool, bool) ()
#9  0x00005555571904cb in Eigen::TensorEvaluator<Eigen::TensorContractionOp<Eigen::array<Eigen::IndexPair<long>, 1ul> const, Eigen::TensorMap<Eigen::Tensor<float const, 2, 1, long>, 16, Eigen::MakePointer> const, Eigen::TensorMap<Eigen::Tensor<float const, 2, 1, long>, 16, Eigen::MakePointer> const, Eigen::NoOpOutputKernel const> const, Eigen::ThreadPoolDevice>::EvalParallelContext<Eigen::TensorEvaluator<Eigen::TensorContractionOp<Eigen::array<Eigen::IndexPair<long>, 1ul> const, Eigen::TensorMap<Eigen::Tensor<float const, 2, 1, long>, 16, Eigen::MakePointer> const, Eigen::TensorMap<Eigen::Tensor<float const, 2, 1, long>, 16, Eigen::MakePointer> const, Eigen::NoOpOutputKernel const> const, Eigen::ThreadPoolDevice>::NoCallback, true, true, false, 0>::pack_rhs(long, long) ()
#10 0x000055555718fd69 in Eigen::TensorEvaluator<Eigen::TensorContractionOp<Eigen::array<Eigen::IndexPair<long>, 1ul> const, Eigen::TensorMap<Eigen::Tensor<float const, 2, 1, long>, 16, Eigen::MakePointer> const, Eigen::TensorMap<Eigen::Tensor<float const, 2, 1, long>, 16, Eigen::MakePointer> const, Eigen::NoOpOutputKernel const> const, Eigen::ThreadPoolDevice>::EvalParallelContext<Eigen::TensorEvaluator<Eigen::TensorContractionOp<Eigen::array<Eigen::IndexPair<long>, 1ul> const, Eigen::TensorMap<Eigen::Tensor<float const, 2, 1, long>, 16, Eigen::MakePointer> const, Eigen::TensorMap<Eigen::Tensor<float const, 2, 1, long>, 16, Eigen::MakePointer> const, Eigen::NoOpOutputKernel const> const, Eigen::ThreadPoolDevice>::NoCallback, true, true, false, 0>::enqueue_packing_helper(long, long, long, bool) ()
#11 0x00007ffff6b607a1 in Eigen::ThreadPoolTempl<tensorflow::thread::EigenEnvironment>::WorkerLoop(int) ()
   from /root/.cache/bazel/_bazel_root/efb88f6336d9c4a18216fb94287b8d97/execroot/org_tensorflow/bazel-out/k8-opt/bin/tensorflow/c/eager/../../../_solib_local/_U_S_Stensorflow_Sc_Seager_Cc_Uapi_Udistributed_Utest_Ugpu___Utensorflow/libtensorflow_framework.so.2
#12 0x00007ffff6b5de93 in std::_Function_handler<void (), tensorflow::thread::EigenEnvironment::CreateThread(std::function<void ()>)::{lambda()#1}>::_M_invoke(std::_Any_data const&) ()
   from /root/.cache/bazel/_bazel_root/efb88f6336d9c4a18216fb94287b8d97/execroot/org_tensorflow/bazel-out/k8-opt/bin/tensorflow/c/eager/../../../_solib_local/_U_S_Stensorflow_Sc_Seager_Cc_Uapi_Udistributed_Utest_Ugpu___Utensorflow/libtensorflow_framework.so.2
#13 0x00007ffff6b40107 in tensorflow::(anonymous namespace)::PThread::ThreadFn(void*) ()
   from /root/.cache/bazel/_bazel_root/efb88f6336d9c4a18216fb94287b8d97/execroot/org_tensorflow/bazel-out/k8-opt/bin/tensorflow/c/eager/../../../_solib_local/_U_S_Stensorflow_Sc_Seager_Cc_Uapi_Udistributed_Utest_Ugpu___Utensorflow/libtensorflow_framework.so.2
#14 0x00007fffd1ca86db in start_thread () from /lib/x86_64-linux-gnu/libpthread.so.0
#15 0x00007fffd00b471f in clone () from /lib/x86_64-linux-gnu/libc.so.6
```
anandj91 pushed a commit to anandj91/tensorflow that referenced this issue Oct 19, 2022
…ensorflow#11)

* fix the problem that cus as accumulation type does not work in convolution
the scratch memory should be created by cutlass

* turn off logging in train_conv
kpedro88 pushed a commit to kpedro88/tensorflow that referenced this issue Aug 29, 2023
fsx950223 pushed a commit to fsx950223/tensorflow that referenced this issue Nov 28, 2023