1. CPU
This guide describes deployment based on the DeepFace Docker image file deepface_image.tar.
```bash
# 1. Load the image
docker load -i deepface_image.tar

# 2. Create the model directory (and upload the pre-downloaded model files into it)
mkdir -p /root/.deepface/weights/

# 3. Start the container
# Host networking: weaker network isolation, but better performance
docker run --name deepface --privileged=true --restart=always --net="host" \
  -v /root/.deepface/weights/:/root/.deepface/weights/ \
  -d deepface_image

# Typical usage
docker run --name deepface --privileged=true --restart=always -p 5000:5000 \
  -v /root/.deepface/weights/:/root/.deepface/weights/ \
  -d deepface_image

# Start a container running the latest source code
docker run --name deepface_src --privileged=true --restart=always --net="host" \
  -v /root/.deepface/weights/:/root/.deepface/weights/ \
  -v /opt/test-facesearch/deepfacesrc/:/app/deepface/ \
  -d deepface_image
```
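Once a container is up, the API can be smoke-tested from the host. A minimal sketch, assuming the service listens on port 5000 and exposes the standard DeepFace /verify endpoint (request field names vary across DeepFace versions; img1_path/img2_path is assumed here, and the image paths must be visible inside the container):

```bash
# Hypothetical smoke test against the DeepFace REST API on port 5000
curl -s -X POST http://127.0.0.1:5000/verify \
  -H "Content-Type: application/json" \
  -d '{"img1_path": "/path/to/img1.jpg", "img2_path": "/path/to/img2.jpg"}'
```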
Warning message:

```bash
# Command executed
docker run --name deepface --privileged=true --restart=always --net="host" -p 5000:5000 \
  -v /root/.deepface/weights/:/root/.deepface/weights/ \
  -d deepface_image

# Warning
WARNING: Published ports are discarded when using host network mode
```
This warning appears when using Docker's host network mode: the container shares the host's network namespace, so its ports are exposed directly on the host and no port forwarding takes place. Publishing ports with -p is therefore a no-op and triggers the warning. To resolve it, try one of the following:
- If you do not need to map container ports to the host, drop the -p option.
- If you do need port mapping, use a different Docker network mode (e.g. bridge mode).
- If you really do need host network mode, access the in-container service via the host's IP address instead of relying on port forwarding (see the check below).
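With host networking, it is easy to confirm that the service is bound directly on the host; a quick check, assuming the API's default port 5000:

```bash
# Confirm the in-container service is listening directly on the host (no -p needed)
ss -tlnp | grep 5000
```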
2. GPU
First, start the container and install tensorrt inside it:

```bash
pip install tensorrt -i https://pypi.tuna.tsinghua.edu.cn/simple
```
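A quick sanity check after the install, run inside the same container (assumes the tensorrt wheel exposes __version__, as current releases do):

```bash
# Confirm the tensorrt package is importable and print its version
python -c "import tensorrt; print(tensorrt.__version__)"
```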
Startup command after the installation:
```bash
docker run --name deepface --privileged=true --restart=always --net="host" \
  -e PATH=/usr/local/cuda-11.2/bin:$PATH \
  -e LD_LIBRARY_PATH=/usr/local/cuda-11.2/lib64:$LD_LIBRARY_PATH \
  -v /root/.deepface/weights/:/root/.deepface/weights/ \
  -v /usr/local/cuda-11.2/:/usr/local/cuda-11.2/ \
  -v /opt/xinan-facesearch-service-public/deepface/api/app.py:/app/app.py \
  -d deepface_image
```
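To confirm the container actually sees the GPU, a check can be run inside it (a sketch; assumes the container name above and that TensorFlow is installed in the image):

```bash
# tensorrt is imported first so its libraries load before TensorFlow initializes
docker exec deepface python -c "import tensorrt; import tensorflow as tf; print(tf.config.list_physical_devices('GPU'))"
```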
Testing fastmtcnn
Mount the latest code into the container:
```bash
docker run --name deepface_gpu_src --privileged=true --restart=always --net="host" \
  -e PATH=/usr/local/cuda-11.2/bin:$PATH \
  -e LD_LIBRARY_PATH=/usr/local/cuda-11.2/lib64:$LD_LIBRARY_PATH \
  -v /root/.deepface/weights/:/root/.deepface/weights/ \
  -v /usr/local/cuda-11.2/:/usr/local/cuda-11.2/ \
  -v /opt/test-facesearch/deepfacesrc/:/app/deepface/ \
  -v /opt/xinan-facesearch-service-public/deepface/api/app.py:/app/app.py \
  -d deepface_image
```
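The fastmtcnn detector can then be exercised through the API. A hypothetical request (recent DeepFace versions accept a detector_backend field in the request body; exact field names vary by version):

```bash
# Hypothetical /analyze request selecting the fastmtcnn detector backend
curl -s -X POST http://127.0.0.1:5000/analyze \
  -H "Content-Type: application/json" \
  -d '{"img_path": "/path/to/face.jpg", "detector_backend": "fastmtcnn"}'
```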
Differences from the CPU deployment:
- Two environment variables are set
  -e PATH=/usr/local/cuda-11.2/bin:$PATH -e LD_LIBRARY_PATH=/usr/local/cuda-11.2/lib64:$LD_LIBRARY_PATH
- An extra directory mount is added
  -v /usr/local/cuda-11.2/:/usr/local/cuda-11.2/
- An extra file mount is added
  -v /deepface/api/app.py:/app/app.py
The content of the file /deepface/api/app.py is as follows:
```python
import tensorrt as tr  # imported first (even if unused) so the TensorRT libraries load before TensorFlow
import tensorflow as tf
from flask import Flask
from routes import blueprint


def create_app():
    # Log which GPUs TensorFlow can see
    available = tf.config.list_physical_devices('GPU')
    print(f"available:{available}")
    app = Flask(__name__)
    app.register_blueprint(blueprint)
    return app
```
tensorrt must be imported before tensorflow is used.
3. cuDNN Installation
Official installation guide: https://docs.nvidia.com/deeplearning/cudnn/install-guide/index.html
cuDNN support matrix: https://docs.nvidia.com/deeplearning/cudnn/support-matrix/index.html
The NVIDIA CUDA® Deep Neural Network library (cuDNN) is a GPU-accelerated library of primitives for deep neural networks. cuDNN provides highly tuned implementations for standard routines such as forward and backward convolution, attention, matmul, pooling, and normalization.
Installation environment:

```bash
[root@localhost ~]# cat /etc/centos-release
CentOS Linux release 7.7.1908 (Core)
```
3.1 Prerequisites
You must first install 1. the GPU driver and 2. the CUDA Toolkit:
```bash
nvidia-smi
# Output
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 460.27.04    Driver Version: 460.27.04    CUDA Version: 11.2     |
|-------------------------------+----------------------+----------------------+
```
and 3. zlib:
```bash
yum list installed | grep zlib
# Output
zlib.x86_64        1.2.7-18.el7    @anaconda
zlib-devel.x86_64  1.2.7-18.el7    @base
```
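Beyond nvidia-smi (which reports the driver's supported CUDA version), the CUDA Toolkit installation itself can be verified directly; a sketch assuming the toolkit lives under /usr/local/cuda-11.2, as in the run commands above:

```bash
# Verify the CUDA Toolkit compiler is present (path assumed from the deployment above)
/usr/local/cuda-11.2/bin/nvcc --version
```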
3.2 Downloading cuDNN for Linux
Downloading cuDNN requires registering with the NVIDIA Developer Program: https://developer.nvidia.com/developer-program. On the download page, https://developer.nvidia.com/cudnn, pick your platform and the matching version. The file downloaded here is cudnn-11.2-linux-x64-v8.1.1.33.tgz, about 1.2 GB. Browser downloads fail easily, so you can copy the browser's download link and fetch it on the Linux server instead (about 12 MB/s on a Tencent Cloud server):
```bash
# Quote the URL: it contains "&", which the shell would otherwise treat as a
# background operator; -O gives the output file a clean name.
wget -O cudnn-11.2-linux-x64-v8.1.1.33.tgz 'https://developer.download.nvidia.cn/compute/machine-learning/cudnn/secure/8.1.1.33/11.2_20210301/cudnn-11.2-linux-x64-v8.1.1.33.tgz?G2wTHq8E--2jJ9iEfgtFbqfMGX0I1XD6BIksPkVIiU9F3ttrupv_oYvURaZX1dV71EIqEI767WbG5svvSMBElcaVrqZl15UEOUORNWbYwKZDyxidGmwHmG44XiEo6yyM1Rt7ct6NGlVXnxx0etcI9pNJ1PiaHYddY86Lc_yaBLdJwy9hqku4TW6NSNr7XfuCYXvGOPvOmraR4EOfg6Q=&t=eyJscyI6IndlYnNpdGUiLCJsc2QiOiJkZXZlbG9wZXIubnZpZGlhLmNvbS9jdWRhLTEwLjItZG93bmxvYWQtYXJjaGl2ZT90YXJnZXRfb3M9TGludXgifQ=='
```
3.3 Installation
The following steps describe how to build a cuDNN dependent program. Choose the installation method that meets your environment needs. For example, the tar file installation applies to all Linux platforms. The Debian package installation applies to Debian 11, Ubuntu 18.04, Ubuntu 20.04, and 22.04. The RPM package installation applies to RHEL7, RHEL8, and RHEL9. In the following sections:
- your CUDA directory path is referred to as /usr/local/cuda/
- your cuDNN download path is referred to as <cudnnpath>
Choose the installation method that fits your platform; the tar file works on all Linux platforms. The steps are:
- Unpack the archive

```bash
tar -xvf cudnn-linux-$arch-8.x.x.x_cudaX.Y-archive.tar.xz
```
- Copy the following files into the CUDA toolkit directory
```bash
$ sudo cp cudnn-*-archive/include/cudnn*.h /usr/local/cuda/include
$ sudo cp -P cudnn-*-archive/lib/libcudnn* /usr/local/cuda/lib64
$ sudo chmod a+r /usr/local/cuda/include/cudnn*.h /usr/local/cuda/lib64/libcudnn*
```
The installation file used here is cudnn-11.2-linux-x64-v8.1.1.33.tgz; the actual steps are:
```bash
# 1. Unpack
tar -zxvf cudnn-11.2-linux-x64-v8.1.1.33.tgz

# 2. Copy and set permissions
# The unpacked directory is named cuda
# include (18 files)
cp ./cuda/include/cudnn*.h /usr/local/cuda/include
# lib64 (8 files, 15 symlinks); -P copies symlinks as symlinks instead of following them
cp -P ./cuda/lib64/libcudnn* /usr/local/cuda/lib64
# Grant all users read permission
chmod a+r /usr/local/cuda/include/cudnn*.h /usr/local/cuda/lib64/libcudnn*
```
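After copying, the installed version can be read back from the headers; a quick check (for cuDNN 8.x the version macros live in cudnn_version.h):

```bash
# Print the installed cuDNN version macros
grep -A 2 '#define CUDNN_MAJOR' /usr/local/cuda/include/cudnn_version.h
```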
Another version's installation file is cudnn-linux-x86_64-8.6.0.163_cuda11-archive.tar.xz; the steps are:
```bash
# 1. Unpack
tar -xvf cudnn-linux-x86_64-8.6.0.163_cuda11-archive.tar.xz

# 2. Copy and set permissions
# include (18 files), lib (13 files, 20 symlinks)
cp ./cudnn-linux-x86_64-8.6.0.163_cuda11-archive/include/cudnn*.h /usr/local/cuda/include
cp -P ./cudnn-linux-x86_64-8.6.0.163_cuda11-archive/lib/libcudnn* /usr/local/cuda/lib64
chmod a+r /usr/local/cuda/include/cudnn*.h /usr/local/cuda/lib64/libcudnn*
```
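If a program still fails to find the new libraries at runtime, refreshing the dynamic linker cache may help (a hedged suggestion; it applies when /usr/local/cuda/lib64 is on the linker's search path, e.g. via ld.so.conf or LD_LIBRARY_PATH):

```bash
# Refresh the dynamic linker cache and confirm libcudnn is visible
ldconfig
ldconfig -p | grep libcudnn
```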