[DSW Gallery] HybridBackend Quickstart: Accelerating Recommendation Model Training on GPUs

Summary: This tutorial shows how to use HybridBackend to accelerate training of a sample recommendation model on a GPU. HybridBackend is an industrial-grade sparse-model training framework from Alibaba that helps users easily improve the training throughput of sparse models on GPUs.

Run it directly

Open "HybridBackend Quickstart: Accelerating Recommendation Model Training on GPUs" and click "Open in DSW" in the upper-right corner.



HybridBackend Quickstart

In this tutorial, we use HybridBackend to speed up training of a sample ranking model based on stacked DCNv2 on the Taobao ad click dataset.

Why HybridBackend

  • Training industrial recommendation models can benefit greatly from GPUs:
    • Embedding layers are becoming wider, consuming up to thousands of feature fields, which requires larger memory bandwidth;
    • Feature interaction layers are going deeper by leveraging multiple DNN submodules over different subsets of features, which requires higher computing capability;
    • GPUs provide much higher computing capability, larger memory bandwidth, and faster data movement.
  • Canonical training frameworks do not let industrial recommendation models take full advantage of GPU resources:
    • Industrial recommendation models contain up to a thousand input feature fields, which introduces fragmentary, memory-intensive operations;
    • The many constituent feature-interaction submodules introduce a substantial number of small-sized compute kernels.
  • A training framework for industrial recommendation models must be minimally invasive and compatible with existing workflows:
    • Training is only one part of a production recommendation system, and modifying the inference pipeline takes great effort;
    • AI scientists write models in a variety of ways, especially in a big team.

HybridBackend speeds up training of industrial recommendation models on GPUs with minimal effort. In this tutorial, you will learn how to use HybridBackend to make training of such models much faster.

See the HybridBackend GitHub repo and the paper for more information.

Requirements

  • Hardware
    • Modern GPU and interconnect (e.g. A10 / PCIe Gen4)
    • Fast data storage (e.g. ESSD)
  • Software
    • Ubuntu 20.04 or above
    • Python 3.8 or above
    • CUDA 11.4
    • TensorFlow 1.15
    • TFRecord Format
    • Parquet Format

Install the HybridBackend build for TensorFlow 1.15 and CUDA 11.4:

!pip3 install hybridbackend-tf115-cu114
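
Optionally, you can verify the environment before running the tutorial. The check below is a minimal sketch using standard TensorFlow 1.x APIs; it deliberately does not import HybridBackend, because once imported HybridBackend stays enabled until the notebook kernel is restarted (see the note further below).

import tensorflow as tf
# Expect a 1.15.x build and at least one visible GPU.
print('TensorFlow version:', tf.__version__)
print('GPU available:', tf.test.is_gpu_available())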

Sample ranking model

In this tutorial, a sample ranking model based on stacked DCNv2 is used. See the code in the ranking directory for more details.
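
The core of DCNv2 is the cross layer x_{l+1} = x_0 ⊙ (W·x_l + b) + x_l, which models explicit feature interactions. The snippet below is only an illustrative sketch of one such layer in TensorFlow 1.x style (the function name cross_layer is hypothetical; the actual implementation used in this tutorial lives in ranking/model.py):

# Illustrative sketch of a single DCNv2 cross layer (hypothetical helper,
# not the implementation in ranking/model.py):
#   x_{l+1} = x_0 * (x_l @ W + b) + x_l
def cross_layer(x0, xl, name):
  dim = x0.shape[-1].value
  with tf.variable_scope(name):
    w = tf.get_variable('kernel', shape=[dim, dim])
    b = tf.get_variable('bias', shape=[dim], initializer=tf.zeros_initializer())
    return x0 * (tf.matmul(xl, w) + b) + xl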

import os
os.environ['TF_CPP_MIN_LOG_LEVEL'] = '2'
import tensorflow as tf
from tensorflow.python.util import module_wrapper as deprecation
deprecation._PER_MODULE_WARNING_LIMIT = 0
tf.get_logger().propagate = False
from ranking.data import DataSpec
from ranking.model import stacked_dcn_v2
from ranking.model import wide_and_deep_features
# Global configuration
train_max_steps = 100
train_batch_size = 16000
# Feature specification of the Taobao dataset (column defaults, normalizations,
# embedding dimensions and sizes).
data_spec = DataSpec.read('ranking/taobao/data/spec.json')
def train(iterator, embedding_weight_device, dnn_device, hooks):
  batch = iterator.get_next()
  # Drop the timestamp field and split out the click label.
  batch.pop('ts')
  labels = tf.reshape(tf.to_float(batch.pop('label')), shape=[-1, 1])
  # Embedding lookups for wide and deep features, placed on embedding_weight_device.
  wide_features, deep_features = wide_and_deep_features(
    batch,
    data_spec.defaults,
    data_spec.norms,
    data_spec.logs,
    data_spec.embedding_dims,
    data_spec.embedding_sizes,
    embedding_weight_device)
  # Build the stacked DCNv2 and the training op on the DNN device.
  with tf.device(dnn_device):
    logits = stacked_dcn_v2(
      wide_features + deep_features,
      [1024, 1024, 512, 256, 1])
    loss = tf.reduce_mean(tf.keras.losses.binary_crossentropy(labels, logits))
    step = tf.train.get_or_create_global_step()
    opt = tf.train.AdagradOptimizer(learning_rate=0.001)
    train_op = opt.minimize(loss, global_step=step)
  # Log steps/sec every 10 steps and stop after train_max_steps.
  hooks.append(tf.train.StepCounterHook(10))
  hooks.append(tf.train.StopAtStepHook(train_max_steps))
  config = tf.ConfigProto(allow_soft_placement=True)
  config.gpu_options.allow_growth = True
  config.gpu_options.force_gpu_compatible = True
  with tf.train.MonitoredTrainingSession(
      '', hooks=hooks, config=config) as sess:
    while not sess.should_stop():
      sess.run(train_op)

Training without HybridBackend

Without HybridBackend, training the sample ranking model underutilizes GPUs.

# Download training data in TFRecord format
!wget http://easyrec.oss-cn-beijing.aliyuncs.com/data/taobao/day_0.tfrecord
with tf.Graph().as_default():
  ds = tf.data.TFRecordDataset('./day_0.tfrecord', compression_type='GZIP')
  ds = ds.batch(train_batch_size, drop_remainder=True)
  ds = ds.map(
    lambda batch: tf.io.parse_example(batch, data_spec.to_example_spec()))
  ds = ds.prefetch(2)
  iterator = tf.data.make_one_shot_iterator(ds)
  with tf.device('/gpu:0'):
    train(iterator, '/cpu:0', '/gpu:0', [])
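
While the cell above runs, you can check how busy the GPU actually is, for example from a separate terminal. This is an optional check that only assumes the NVIDIA driver and nvidia-smi are installed:

!nvidia-smi --query-gpu=utilization.gpu,memory.used --format=csv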

Training with HybridBackend

With just a one-line import, HybridBackend automatically applies packing and interleaving to speed up the embedding layers dramatically.

# Note: Once HybridBackend is imported, you need to restart the notebook kernel to turn it off.
import hybridbackend.tensorflow as hb
# Exact same code except HybridBackend is on.
with tf.Graph().as_default():
  ds = tf.data.TFRecordDataset('./day_0.tfrecord', compression_type='GZIP')
  ds = ds.batch(train_batch_size, drop_remainder=True)
  ds = ds.map(
    lambda batch: tf.io.parse_example(batch, data_spec.to_example_spec()))
  ds = ds.prefetch(2)
  iterator = tf.data.make_one_shot_iterator(ds)
  with tf.device('/gpu:0'):
    train(iterator, '/cpu:0', '/gpu:0', [])

Training with HybridBackend (Optimized data pipeline)

Even greater training performance gains can be achieved by using the optimized data pipeline provided by HybridBackend.

# Download training data in Parquet format
!wget http://easyrec.oss-cn-beijing.aliyuncs.com/data/taobao/day_0.parquet
# Note: Once HybridBackend is imported, you need to restart the notebook kernel to turn it off.
import hybridbackend.tensorflow as hb
with tf.Graph().as_default():
  ds = hb.data.ParquetDataset(
    './day_0.parquet',
    batch_size=train_batch_size,
    num_parallel_parser_calls=tf.data.experimental.AUTOTUNE,
    drop_remainder=True)
  ds = ds.apply(hb.data.to_sparse())
  ds = ds.prefetch(2)
  iterator = tf.data.make_one_shot_iterator(ds)
  with tf.device('/gpu:0'):
    iterator = hb.data.Iterator(iterator, 2)
    train(iterator, '/cpu:0', '/gpu:0', [hb.data.Iterator.Hook()])
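
The Taobao sample data is already provided in Parquet. If your own data is not, a minimal sketch of writing a Parquet file with the standard pandas/pyarrow API is shown below (the DataFrame and column names are hypothetical placeholders; this is not a HybridBackend utility):

import pandas as pd
# Hypothetical example data; replace with your own label and feature columns.
df = pd.DataFrame({'label': [0, 1], 'pid': [101, 102], 'price': [3.5, 7.2]})
# pandas delegates to pyarrow to write the Parquet file.
df.to_parquet('./my_data.parquet', engine='pyarrow')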

