1. Adaptation Overview
This sample takes the open-source Generative Recommenders model, migrates its training to the NPU, and uses the NPU HSTU fused operator to optimize performance.
Upstream repository: https://github.com/facebookresearch/generative-recommenders
Clone the source and pin it to the commit of Dec 16, 2024, SHA-1 hash (commit ID): bb389f9539b054e7268528efcd35457a6ad52439
Verified hardware platform: Atlas 800T A2
| Software | Version |
|---|---|
| PyTorch | 2.1.0 |
| Python | 3.11.0 |
| FBGEMM | 0.5.0 |
| gcc | 10.2.0 |
| CANN | 8.0.0.alpha001 |
| driver | 24.1.rc2.2 |
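The version requirements above can also be checked in code. A minimal sketch (the pip distribution names, e.g. for FBGEMM, are assumptions, and the "installed" dict is hard-coded for illustration rather than queried live):

```python
# Required versions from the table above; keys are ASSUMED pip package names.
REQUIRED = {"torch": "2.1.0", "fbgemm-gpu": "0.5.0"}

def version_mismatches(installed):
    """Return (name, required, found) triples for packages that do not match."""
    return [(name, want, installed.get(name))
            for name, want in REQUIRED.items()
            if installed.get(name) != want]

print(version_mismatches({"torch": "2.1.0", "fbgemm-gpu": "0.5.0"}))  # → []
```

In a real environment you would fill `installed` from `importlib.metadata.version` for each package.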
2. Start the Container
Image download:
https://www.hiascend.com/developer/ascendhub/detail/9faeb4847b3e419f81b78a4d0ed574b5
Version notes for some of the components bundled in this image:
Reference command to start the container:
docker run \
-u root \
-it \
--name ${container_name} \
--net=host \
--shm-size="300g" \
--privileged \
-v /etc/localtime:/etc/localtime \
-e ASCEND_VISIBLE_DEVICES=0-7 \
-v /etc/ascend_install.info:/etc/ascend_install.info \
-v /home:/home \
-v /root/ascend:/root/ascend \
-v /root/.ssh:/root/.ssh \
-v /usr/local/Ascend/driver:/usr/local/Ascend/driver \
${image_name} \
/bin/bash
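The `${container_name}` and `${image_name}` placeholders in the command above must be set first. A minimal sketch with illustrative values only (the image tag below is hypothetical; use the tag you actually pulled from the link above):

```shell
# Illustrative placeholder values; the image tag is hypothetical.
container_name=gr_npu
image_name=ascendhub.example.com/ascend-pytorch:24.0
echo "container=${container_name} image=${image_name}"
```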
3. Install Dependencies
Download the latest mindxsdk-mxec-add-ons package:
https://clouddrive.huawei.com/hwshare/f3ea4909559eae5305e42c02b5c3f06c?type=email&fileId=135327&ownerId=2088277&fileSize=22379519&fileName=QXNjZW5kLW1pbmR4c2RrLW14cmVjLWFkZC1vbnMtcG9jLWxpbnV4LXg4Nl82NC50YXIuZ3o=&isFolder=false
After extracting it, enter the mindxsdk-mxec-add-ons-poc folder:
cd mindxsdk-mxec-add-ons-poc
cd torch_plugin
# Install the torch_npu wheel:
pip3 install torch_npu-2.1.0.post9-cp311-cp311-linux_x86_64.whl
4. Install the Adapted Operators
Go back into the mindxsdk-mxec-add-ons-poc folder and install the required Ascend-adapted operators:
jagged_to_padded_dense, optimized IndexSelect, dense_to_jagged, asynchronous_complete_cumsum, gather_for_rank1
cd mindxsdk-mxec-add-ons-poc/mxrec_ops
bash mxrec_opp_asynchronous_complete_cumsum.run
bash mxrec_opp_dense_to_jagged.run
bash mxrec_opp_index_select_for_rank1_backward.run
bash mxrec_opp_jagged_to_padded_dense.run
bash mxrec_opp_gather_for_rank1.run
bash mxrec_opp_hstu_dense_forward.run
bash mxrec_opp_hstu_dense_backward.run
5. Build the Library the Fused Operator Depends On
Enter the torch_library folder:
cd mindxsdk-mxec-add-ons-poc/torch_library
cd 2.1.0/hstu
bash build_ops.sh
If the build fails because of the gcc version, reset the gcc/g++ paths and rebuild:
which gcc
which g++
unset CC CXX
export CC=/usr/local/gcc10.2.0/bin/gcc
export CXX=/usr/local/gcc10.2.0/bin/g++
bash build_ops.sh
After the commands above finish, the fused operator's dependency library libhstu_dense_ops.so is generated in the build folder under the same directory. You can copy the .so to a fixed directory of your choice. Example:
cp ./build/libhstu_dense_ops.so /home/torch_ops/
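Once copied, the library can be loaded through PyTorch's standard custom-op mechanism. A minimal sketch, assuming the example destination path above (`torch.ops.load_library` is a real PyTorch API; the helper function name is ours):

```python
import os

def load_hstu_lib(path="/home/torch_ops/libhstu_dense_ops.so"):
    """Load the fused HSTU op library into PyTorch; return False if absent.

    The default path is the example copy destination from this README.
    """
    if not os.path.exists(path):  # check first so the probe needs no torch
        return False
    import torch  # lazy import: only needed once the library exists
    torch.ops.load_library(path)  # registers the custom ops under torch.ops
    return True

# Returns False on a machine where the library has not been copied yet.
print(load_hstu_lib("/nonexistent/libhstu_dense_ops.so"))  # → False
```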
6. Download the Source Code
git clone https://gitee.com/ascend/RecSDK.git
cd RecSDK
git checkout branch_v7.0.0-POC_torch
cd ./torch/examples/generative_recommenders/npu
git clone https://github.com/facebookresearch/generative-recommenders.git
This downloads the open-source GR model and Ascend's RecSDK.
Model modifications:
The code changes that migrate the Generative Recommenders model to the NPU and adapt it to the NPU fused HSTU operator are collected in NPU_GR.patch. Apply it as follows:
cd generative-recommenders
cp ../NPU_GR.patch ./
git checkout bb389f9539b054e7268528efcd35457a6ad52439
git apply NPU_GR.patch
Dataset download:
python preprocess_public_data.py
Code modification:
vim ./generative_recommenders/trainer/train.py
Comment out the line from msprobe.pytorch import seed_all.
msprobe is an accuracy-comparison tool (an internal Ascend plugin) used in precision mode to compare the loss against GPU runs.
msprobe repository:
https://gitee.com/ascend/mstt/tree/master/debug/accuracy_tools/msprobe
Config file:
Create a file named hstu-mt-3400.gin with the content below and place it in the generative_recommenders/configs/ml-1m/ directory.
train_fn.dataset_name = "ml-1m"
train_fn.max_sequence_length = 3389
train_fn.local_batch_size = 32
train_fn.main_module = "HSTU"
train_fn.dropout_rate = 0.2
train_fn.user_embedding_norm = "l2_norm"
train_fn.num_epochs = 1
train_fn.item_embedding_dim = 512
hstu_encoder.num_blocks = 3
hstu_encoder.num_heads = 2
hstu_encoder.dqk = 256
hstu_encoder.dv = 256
hstu_encoder.linear_dropout_rate = 0.2
train_fn.learning_rate = 1e-3
train_fn.weight_decay = 0
train_fn.num_warmup_steps = 0
train_fn.interaction_module_type = "DotProduct"
train_fn.top_k_method = "MIPSBruteForceTopK"
train_fn.loss_module = "SampledSoftmaxLoss"
train_fn.num_negatives = 128
train_fn.eval_interval = 50
train_fn.sampling_strategy = "local"
train_fn.temperature = 0.05
train_fn.item_l2_norm = True
train_fn.l2_norm_eps = 1e-6
train_fn.enable_tf32 = True
train_fn.precision_mode = False
create_data_loader.prefetch_factor = 128
create_data_loader.num_workers = 8
Note: hstu_encoder.dqk and hstu_encoder.dv must be multiples of 16 (these are the dimensions of the attention query, key, and value).
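This constraint can be checked programmatically before launching training. A minimal sketch (the helper is hypothetical, not part of the repo):

```python
def validate_hstu_dims(dqk: int, dv: int) -> None:
    """Raise if the attention q/k/v dims violate the fused-kernel constraint."""
    for name, val in (("dqk", dqk), ("dv", dv)):
        if val % 16 != 0:
            raise ValueError(f"hstu_encoder.{name}={val} must be a multiple of 16")

validate_hstu_dims(256, 256)  # values from hstu-mt-3400.gin: passes silently
```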
Run commands:
Edit the run.sh script.
To use the hstu-mt-3400.gin config file (when the fused HSTU operator is enabled, hstu_encoder.dqk and hstu_encoder.dv must be multiples of 16):
export USE_NPU_HSTU=1 # enable the fused operator
export ENABLE_RAB=0
export PYTORCH_NPU_ALLOC_CONF=expandable_segments:True
ASCEND_RT_VISIBLE_DEVICES=0 python3 main.py --gin_config_file=configs/ml-1m/hstu-mt-3400.gin --master_port=12345 | tee temp.log
To use the hstu-sampled-softmax-n128-large-final.gin config file (without the fused operator):
export USE_NPU_HSTU=0 # disable the fused operator
export ENABLE_RAB=0
export PYTORCH_NPU_ALLOC_CONF=expandable_segments:True
ASCEND_RT_VISIBLE_DEVICES=0 python3 main.py --gin_config_file=configs/ml-1m/hstu-sampled-softmax-n128-final.gin --master_port=12345 | tee temp.log
Then execute:
bash run.sh
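Once training finishes, per-step timing can be pulled out of temp.log. A hypothetical sketch: the real log format depends on the training script, and the pattern below only assumes lines containing something like "step 10 ... 47.6 ms".

```python
import re

# ASSUMED line shape: "step <n> ... <t> ms" (illustrative, not the real format).
STEP_RE = re.compile(r"step\s+(\d+).*?([\d.]+)\s*ms")

def step_times(lines):
    """Return (step, milliseconds) pairs for every line that matches."""
    return [(int(m.group(1)), float(m.group(2)))
            for line in lines if (m := STEP_RE.search(line))]

print(step_times(["step 10 loss=0.42 47.6 ms", "eval hit@10 ..."]))
# → [(10, 47.6)]
```

In practice you would pass `open("temp.log")` as `lines` after adapting the regex to the actual log output.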
7. Performance Results

| Dataset | Config file | seq_len | num_block | num_head | dqk, dv | End-to-end time (per step) | Fused HSTU operator used |
|---|---|---|---|---|---|---|---|
| ml-1m | hstu-mt-3400.gin | 3400 | 3 | 2 | 256 | 47.6 ms | True |
| ml-1m | hstu-mt-3400.gin | 3400 | 3 | 2 | 256 | 346 ms | False |
| ml-1m | hstu-sampled-softmax-n128-final.gin | 200 | 2 | 1 | 50 | 48.9 ms | False |
| ml-1m | hstu-sampled-softmax-n128-large-final.gin | 200 | 8 | 2 | 25 | 66 ms | False |