2017-01-30

I am installing TensorFlow Serving, for which I have to build TensorFlow on Ubuntu. I ran the `./configure` command in the TensorFlow root directory. This is the output:

Please specify the location of python. [Default is /usr/bin/python]: 
Please specify optimization flags to use during compilation [Default is -march=native]:   
Do you wish to use jemalloc as the malloc implementation? [Y/n] y 
jemalloc enabled 
Do you wish to build TensorFlow with Google Cloud Platform support? [y/N] y 
Google Cloud Platform support will be enabled for TensorFlow 
Do you wish to build TensorFlow with Hadoop File System support? [y/N] y 
Hadoop File System support will be enabled for TensorFlow 
Do you wish to build TensorFlow with the XLA just-in-time compiler (experimental)? [y/N] y 
XLA JIT support will be enabled for TensorFlow 
Found possible Python library paths: 
    /usr/local/lib/python2.7/dist-packages 
    /usr/lib/python2.7/dist-packages 
Please input the desired Python library path to use. Default is [/usr/local/lib/python2.7/dist-packages] 

Using python library path: /usr/local/lib/python2.7/dist-packages 
Do you wish to build TensorFlow with OpenCL support? [y/N] y 
OpenCL support will be enabled for TensorFlow 
Do you wish to build TensorFlow with CUDA support? [y/N] y 
CUDA support will be enabled for TensorFlow 
Please specify which gcc should be used by nvcc as the host compiler. [Default is /usr/bin/gcc]: 
Please specify the CUDA SDK version you want to use, e.g. 7.0. [Leave empty to use system default]: 
Please specify the location where CUDA toolkit is installed. Refer to README.md for more details. [Default is /usr/local/cuda]: 
Please specify the Cudnn version you want to use. [Leave empty to use system default]: 
Please specify the location where cuDNN library is installed. Refer to README.md for more details. [Default is /usr/local/cuda]: 
Please specify a list of comma-separated Cuda compute capabilities you want to build with. 
You can find the compute capability of your device at: https://developer.nvidia.com/cuda-gpus. 
Please note that each additional compute capability significantly increases your build time and binary size. 
[Default is: "3.5,5.2"]: 
Please specify which C++ compiler should be used as the host C++ compiler. [Default is ]: 
Invalid C++ compiler path. cannot be found 
Please specify which C++ compiler should be used as the host C++ compiler. [Default is ]: /usr/bin/g++ 
Please specify which C compiler should be used as the host C compiler. [Default is ]: /usr/bin/gcc 
Please specify the location where ComputeCpp for SYCL 1.2 is installed. [Default is /usr/local/computecpp]: 
................................................................. 
INFO: Starting clean (this may take a while). Consider using --expunge_async if the clean takes more than several minutes. 
......... 
ERROR: package contains errors: tensorflow/stream_executor. 
ERROR: error loading package 'tensorflow/stream_executor': Encountered error while reading extension file 'cuda/build_defs.bzl': no such package '@local_config_cuda//cuda': Traceback (most recent call last): 
    File "/home/cortana/Libraries/serving/tensorflow/third_party/gpus/cuda_configure.bzl", line 813 
     _create_cuda_repository(repository_ctx) 
    File "/home/cortana/Libraries/serving/tensorflow/third_party/gpus/cuda_configure.bzl", line 727, in _create_cuda_repository 
     _get_cuda_config(repository_ctx) 
    File "/home/cortana/Libraries/serving/tensorflow/third_party/gpus/cuda_configure.bzl", line 584, in _get_cuda_config 
     _cudnn_version(repository_ctx, cudnn_install_base..., ...) 
    File "/home/cortana/Libraries/serving/tensorflow/third_party/gpus/cuda_configure.bzl", line 295, in _cudnn_version 
     _find_cuda_define(repository_ctx, cudnn_install_base..., ...) 
    File "/home/cortana/Libraries/serving/tensorflow/third_party/gpus/cuda_configure.bzl", line 270, in _find_cuda_define 
     auto_configure_fail("Cannot find cudnn.h at %s" % st...)) 
    File "/home/cortana/Libraries/serving/tensorflow/third_party/gpus/cuda_configure.bzl", line 93, in auto_configure_fail 
     fail(" 
%sAuto-Configuration Error:%s ...)) 

Auto-Configuration Error: Cannot find cudnn.h at /usr/lib/x86_64-linux-gnu/include/cudnn.h 
. 

There is no folder named /usr/lib/x86_64-linux-gnu/include. I have the libcudnn.so file in /usr/lib/x86_64-linux-gnu/ and cudnn.h in /usr/include. I don't know how the configure script generates these paths, but it cannot find cuDNN, even though I installed Caffe successfully and its CMakeLists.txt easily locates the CUDA and cuDNN installation paths. How can I fix this problem?
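Judging by the error message, configure seems to take the base path you enter and append `include/cudnn.h` to it, which is why a library directory such as /usr/lib/x86_64-linux-gnu yields a nonexistent header path. A minimal sketch of that behavior (the variable name `CUDNN_INSTALL_PATH` is an assumption for illustration, not taken from the script):

```shell
# Assumption: configure appends "/include/cudnn.h" to whatever base path is given.
CUDNN_INSTALL_PATH=/usr/lib/x86_64-linux-gnu   # the base path the build effectively used
echo "${CUDNN_INSTALL_PATH}/include/cudnn.h"   # -> /usr/lib/x86_64-linux-gnu/include/cudnn.h (does not exist here)
```

So the base path you give configure must be a prefix that actually contains an include/ subdirectory with cudnn.h in it.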


This sounds like GitHub issue https://github.com/tensorflow/tensorflow/issues/6850. Can you try again at TensorFlow HEAD and see whether the problem is fixed? If not, follow up on that GitHub issue. –


Do you have an NVIDIA GPU in your system? If so, what do you get when you run `nvidia-smi` and `nvcc -V`? –

Answer


Assuming you really do have cuDNN installed:
find the location of your CUDA installation with -
which nvcc

in my case it returns - /usr/local/cuda-6.5/bin/nvcc

so cudnn.h is in /usr/local/cuda-6.5/include (if cuDNN is installed)
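The prefix derivation above can be sketched with plain shell parameter expansion (the nvcc path is hard-coded to the example value from this answer so the snippet is self-contained; on a real system you would take it from `which nvcc`):

```shell
nvcc_path=/usr/local/cuda-6.5/bin/nvcc   # example result of `which nvcc`
cuda_prefix=${nvcc_path%/bin/nvcc}       # strip the trailing /bin/nvcc
echo "$cuda_prefix/include"              # -> /usr/local/cuda-6.5/include
```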

when configuring tensorflow, you are asked -
Please specify the location where cuDNN library is installed. Refer to README.md for more details. [Default is /usr/local/cuda]:

here you must explicitly specify the cuDNN location.
In my case it is /usr/local/cuda-6.5/include/.
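If configure still rejects the path, another workaround sometimes used for this kind of split install is to link the header and library into the CUDA toolkit tree so the default base path resolves. This is only a sketch under the assumption that the toolkit lives under a single prefix; `link_cudnn` and all default paths here are illustrative, and writing to the real prefix needs root:

```shell
# Sketch: mirror cudnn.h and libcudnn.so into a CUDA prefix so configure's
# default base path can find them. All default paths are assumptions; adjust to yours.
link_cudnn() {
    base=${1:-/usr/local/cuda}                       # CUDA toolkit prefix
    hdr=${2:-/usr/include/cudnn.h}                   # where cudnn.h actually is
    lib=${3:-/usr/lib/x86_64-linux-gnu/libcudnn.so}  # where libcudnn.so actually is
    mkdir -p "$base/include" "$base/lib64"
    ln -sf "$hdr" "$base/include/cudnn.h"
    ln -sf "$lib" "$base/lib64/libcudnn.so"
    echo "linked cuDNN into $base"
}
# e.g. run as root: link_cudnn /usr/local/cuda
```

After linking, the configure default (`/usr/local/cuda`) should find both files without any custom path.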
