Recently I had to set up an old Debian 10 server to run TensorFlow. TensorFlow requires cuDNN, but installing this library on Debian 10 is complicated and poorly documented. In this post, I explain what I did and what worked for me.

The NVIDIA CUDA® Deep Neural Network library (cuDNN) is a GPU-accelerated library of primitives for deep neural networks. cuDNN provides highly tuned implementations for standard routines such as forward and backward convolution, attention, matmul, pooling, and normalization.

This is the official Nvidia documentation for installing cuDNN on Linux. There are multiple ways to install cuDNN; the easiest and most straightforward is to use a package manager. Unfortunately, there are some cases in which this is not possible. For example, the CUDA repository for Debian 10 (buster) does not have the cuDNN package.

In these cases you can install cuDNN using Python or using a tarball file. For some reason the Python method didn’t work for me, so I had to go for the tarball. The instructions for installing via tarball in the current documentation are almost nonexistent.

Instructions

Nvidia provides a Redist Archive here where the tarball files are located. There, you will see two folders, cudnn and cudnn_samples: the former is the actual library and the latter contains samples to test that it works properly. Inside cudnn you will find the different supported architectures; most likely yours is linux-x86_64. There you will find various .tar.xz files corresponding to the different versions of cuDNN and CUDA. Be sure to choose a version that suits your system requirements.

Note on TensorFlow: In my case, I first selected the latest version, which at the time of writing is 9.1.0.70. But as it turns out, the latest TensorFlow release (2.16) officially supports cuDNN 8.9 and CUDA 12.3 (see this and this for reference). I didn’t know that, so I initially installed cuDNN 9.1.0.70, which was not compatible with TensorFlow. I then switched to cuDNN 8.9.7.29, which did work.
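Before picking a tarball, it helps to confirm which CUDA toolkit is installed so the cuDNN version matches. A small sketch of how you might parse the version out of `nvcc --version` output; the sample line below is hardcoded for illustration, on your machine run `nvcc --version` and parse its real output:

```shell
# Sample line from `nvcc --version` output (hardcoded here for illustration):
line="Cuda compilation tools, release 12.3, V12.3.107"
# Extract the "release X.Y" number with sed:
cuda_version=$(echo "$line" | sed -n 's/.*release \([0-9.]*\),.*/\1/p')
echo "$cuda_version"   # → 12.3
```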

Now download the tar file, for instance using wget:

wget https://developer.download.nvidia.com/compute/cudnn/redist/cudnn/linux-x86_64/cudnn-linux-x86_64-8.9.7.29_cuda12-archive.tar.xz
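Optionally, you can verify the download before extracting it. Nvidia publishes checksums for the redist archives; the toy example below just shows the sha256sum workflow with a stand-in file instead of the real tarball:

```shell
# Work in a throwaway directory with a stand-in file for the tarball:
tmp=$(mktemp -d) && cd "$tmp"
echo "demo data" > archive.tar.xz
# In practice, the .sha256 file would come from Nvidia, not be generated locally:
sha256sum archive.tar.xz > archive.sha256
sha256sum -c archive.sha256   # prints "archive.tar.xz: OK"
```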

Then extract it:

tar -Jxvf cudnn-linux-x86_64-8.9.7.29_cuda12-archive.tar.xz

and you will get a folder like this:

cudnn-linux-x86_64-8.9.7.29_cuda12-archive
|-- include
|-- lib
`-- LICENSE

We now need to copy the libraries in lib to /usr/local/cuda/lib64/. But first, let’s look at the libraries in lib:

|-- libcudnn_adv_infer.so -> libcudnn_adv_infer.so.8
|-- libcudnn_adv_infer.so.8 -> libcudnn_adv_infer.so.8.9.7
|-- libcudnn_adv_infer.so.8.9.7
|-- libcudnn_adv_infer_static.a
|-- libcudnn_adv_infer_static_v8.a -> libcudnn_adv_infer_static.a
|-- libcudnn_adv_train.so -> libcudnn_adv_train.so.8
|-- libcudnn_adv_train.so.8 -> libcudnn_adv_train.so.8.9.7
|-- libcudnn_adv_train.so.8.9.7
|-- libcudnn_adv_train_static.a
|-- libcudnn_adv_train_static_v8.a -> libcudnn_adv_train_static.a
|-- libcudnn_cnn_infer.so -> libcudnn_cnn_infer.so.8
|-- libcudnn_cnn_infer.so.8 -> libcudnn_cnn_infer.so.8.9.7
|-- libcudnn_cnn_infer.so.8.9.7
|-- libcudnn_cnn_infer_static.a
|-- libcudnn_cnn_infer_static_v8.a -> libcudnn_cnn_infer_static.a
|-- libcudnn_cnn_train.so -> libcudnn_cnn_train.so.8
|-- libcudnn_cnn_train.so.8 -> libcudnn_cnn_train.so.8.9.7
|-- libcudnn_cnn_train.so.8.9.7
|-- libcudnn_cnn_train_static.a
|-- libcudnn_cnn_train_static_v8.a -> libcudnn_cnn_train_static.a
|-- libcudnn_ops_infer.so -> libcudnn_ops_infer.so.8
|-- libcudnn_ops_infer.so.8 -> libcudnn_ops_infer.so.8.9.7
|-- libcudnn_ops_infer.so.8.9.7
|-- libcudnn_ops_infer_static.a
|-- libcudnn_ops_infer_static_v8.a -> libcudnn_ops_infer_static.a
|-- libcudnn_ops_train.so -> libcudnn_ops_train.so.8
|-- libcudnn_ops_train.so.8 -> libcudnn_ops_train.so.8.9.7
|-- libcudnn_ops_train.so.8.9.7
|-- libcudnn_ops_train_static.a
|-- libcudnn_ops_train_static_v8.a -> libcudnn_ops_train_static.a
|-- libcudnn.so -> libcudnn.so.8
|-- libcudnn.so.8 -> libcudnn.so.8.9.7
`-- libcudnn.so.8.9.7

Notice that many of these are symbolic links; it is important to preserve these symlinks when copying the libraries. To do this, include the -P flag in the copy command:

sudo cp -P lib/* /usr/local/cuda/lib64/
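To see why -P matters, here is a toy demo with throwaway paths: plain cp follows the symlink and writes out a full copy of the target, while cp -P copies the link itself.

```shell
# Set up a fake library plus a versioned symlink, like the cuDNN layout:
tmp=$(mktemp -d) && cd "$tmp"
mkdir src with_P without_P
echo "fake library" > src/libdemo.so.8.9.7
ln -s libdemo.so.8.9.7 src/libdemo.so.8

cp -P src/* with_P/     # libdemo.so.8 stays a symlink
cp src/* without_P/     # libdemo.so.8 becomes a regular file (full copy)
ls -l with_P without_P
```

Without -P you would end up with several full copies of each library instead of one copy plus small symlinks, and tools that resolve the SONAME links could behave unexpectedly.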

We now copy the headers in include to /usr/local/cuda/include/:

sudo cp include/* /usr/local/cuda/include/

Now set these permissions and that should be all:

sudo chmod a+r /usr/local/cuda/include/cudnn*.h
sudo chmod a+r /usr/local/cuda/lib64/libcudnn*
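As a quick sanity check, cuDNN 8 records its version in cudnn_version.h, so you can read the installed version back from the header. The stand-in header below just shows the parsing; on a real install, grep /usr/local/cuda/include/cudnn_version.h instead.

```shell
# Stand-in for /usr/local/cuda/include/cudnn_version.h (real file has more):
tmp=$(mktemp -d)
cat > "$tmp/cudnn_version.h" <<'EOF'
#define CUDNN_MAJOR 8
#define CUDNN_MINOR 9
#define CUDNN_PATCHLEVEL 7
EOF
# Pull out the three version components:
grep -E '#define CUDNN_(MAJOR|MINOR|PATCHLEVEL)' "$tmp/cudnn_version.h"
```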

Run cuDNN samples

To make sure that the installation worked, you can run some of the samples provided by Nvidia. First, download and extract the samples:

wget https://developer.download.nvidia.com/compute/cudnn/redist/cudnn_samples/linux-x86_64/cudnn_samples-linux-x86_64-8.9.7.29_cuda12-archive.tar.xz
tar -Jxvf cudnn_samples-linux-x86_64-8.9.7.29_cuda12-archive.tar.xz
cd cudnn_samples-linux-x86_64-8.9.7.29_cuda12-archive/src/cudnn_samples_v8

For example, to run the MNIST sample, do this:

cd mnistCUDNN
make
./mnistCUDNN

You should get a bunch of output, with the last line saying Test passed!.

Note: If you get ImportError: libcudnn.so.8: cannot open shared object file: No such file or directory, you need to set this environment variable (ref):

export LD_LIBRARY_PATH=/usr/local/cuda/lib64

Note: export only lasts for the current shell session. To make it persistent (for your user), add the line to your .bashrc file.
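One detail worth hedging: if LD_LIBRARY_PATH already has entries, overwriting it can break other software. A safer variant appends the cuDNN directory while keeping any existing value, and is the form you would put in ~/.bashrc:

```shell
# Prepend the CUDA lib dir; keep any existing LD_LIBRARY_PATH entries after it.
export LD_LIBRARY_PATH=/usr/local/cuda/lib64${LD_LIBRARY_PATH:+:$LD_LIBRARY_PATH}
echo "$LD_LIBRARY_PATH"
```

The ${VAR:+:$VAR} expansion adds the ":" separator only when the variable was already non-empty, so you never end up with a stray leading or trailing colon.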