Multi-GPU LLM Inference with the Accelerate Library

Summary: Large language models (LLMs) have revolutionized natural language processing. As these models grow in size and complexity, the computational demands of inference grow significantly as well. Using multiple GPUs becomes essential to meet this challenge.

In this article we will run inference in parallel across multiple GPUs. It covers: an introduction to the Accelerate library, a simple approach with working code examples, and performance benchmarks with multiple GPUs.

We will use multiple 3090 cards to scale llama2-7b inference across several GPUs.

Basic example

We start with a simple example that demonstrates multi-GPU "message passing" with Accelerate.

 from accelerate import Accelerator
 from accelerate.utils import gather_object

 accelerator = Accelerator()

 # each GPU creates a string
 message=[ f"Hello this is GPU {accelerator.process_index}" ] 

 # collect the messages from all GPUs
 messages=gather_object(message)

 # output the messages only on the main process with accelerator.print() 
 accelerator.print(messages)

The output looks like this:

 ['Hello this is GPU 0', 
   'Hello this is GPU 1', 
   'Hello this is GPU 2', 
   'Hello this is GPU 3', 
   'Hello this is GPU 4']
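
To reproduce this, the script has to be started with one process per GPU. A minimal way to do that (assuming the snippet above is saved as message.py, a hypothetical filename) is Accelerate's launcher:

 # launch one process per GPU; adjust --num_processes to the number of GPUs available
 accelerate launch --num_processes 5 message.py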

Multi-GPU inference

Below is a simple, non-batched inference approach. The code stays short because the Accelerate library already does most of the work for us; we can use it as-is:

 from accelerate import Accelerator
 from accelerate.utils import gather_object
 from transformers import AutoModelForCausalLM, AutoTokenizer
 from statistics import mean
 import torch, time, json

 accelerator = Accelerator()

 # 10*10 Prompts. Source: https://www.penguin.co.uk/articles/2022/04/best-first-lines-in-books
 prompts_all=[
     "The King is dead. Long live the Queen.",
     "Once there were four children whose names were Peter, Susan, Edmund, and Lucy.",
     "The story so far: in the beginning, the universe was created.",
     "It was a bright cold day in April, and the clocks were striking thirteen.",
     "It is a truth universally acknowledged, that a single man in possession of a good fortune, must be in want of a wife.",
     "The sweat wis lashing oafay Sick Boy; he wis trembling.",
     "124 was spiteful. Full of Baby's venom.",
     "As Gregor Samsa awoke one morning from uneasy dreams he found himself transformed in his bed into a gigantic insect.",
     "I write this sitting in the kitchen sink.",
     "We were somewhere around Barstow on the edge of the desert when the drugs began to take hold.",
 ] * 10

 # load a base model and tokenizer
 model_path="models/llama2-7b"
 model = AutoModelForCausalLM.from_pretrained(
     model_path,    
     device_map={"": accelerator.process_index},
     torch_dtype=torch.bfloat16,
 )
 tokenizer = AutoTokenizer.from_pretrained(model_path)   

 # sync GPUs and start the timer
 accelerator.wait_for_everyone()
 start=time.time()

 # divide the prompt list onto the available GPUs 
 with accelerator.split_between_processes(prompts_all) as prompts:
     # store output of generations in dict
     results=dict(outputs=[], num_tokens=0)

     # have each GPU do inference, prompt by prompt
     for prompt in prompts:
         prompt_tokenized=tokenizer(prompt, return_tensors="pt").to("cuda")
         output_tokenized = model.generate(**prompt_tokenized, max_new_tokens=100)[0]

         # remove prompt from output 
         output_tokenized=output_tokenized[len(prompt_tokenized["input_ids"][0]):]

         # store outputs and number of tokens in result{}
         results["outputs"].append( tokenizer.decode(output_tokenized) )
         results["num_tokens"] += len(output_tokenized)

     results=[ results ] # transform to list, otherwise gather_object() will not collect correctly

 # collect results from all the GPUs
 results_gathered=gather_object(results)

 if accelerator.is_main_process:
     timediff=time.time()-start
     num_tokens=sum([r["num_tokens"] for r in results_gathered ])

     print(f"tokens/sec: {num_tokens//timediff}, time {timediff}, total tokens {num_tokens}, total prompts {len(prompts_all)}")

Using multiple GPUs introduces some communication overhead: in this particular setup, throughput scales almost linearly up to 4 GPUs and then levels off. Of course, performance depends on many parameters such as model size and quantization, prompt length, number of generated tokens, and sampling strategy, so we only discuss the general trend here.

1 GPU: 44 tokens/sec, time: 225.5s

2 GPUs: 88 tokens/sec, time: 112.9s

3 GPUs: 128 tokens/sec, time: 77.6s

4 GPUs: 137 tokens/sec, time: 72.7s

5 GPUs: 119 tokens/sec, time: 83.8s

Batched inference on multiple GPUs

In the real world we would use batched inference to speed things up. Batching keeps each GPU busier and amortizes the per-call overhead, so inference runs considerably faster. We only need to add a prepare_prompts function that feeds the model a batch of prompts instead of a single prompt at a time:

 from accelerate import Accelerator
 from accelerate.utils import gather_object
 from transformers import AutoModelForCausalLM, AutoTokenizer
 from statistics import mean
 import torch, time, json

 accelerator = Accelerator()

 def write_pretty_json(file_path, data):
     import json
     with open(file_path, "w") as write_file:
         json.dump(data, write_file, indent=4)

 # 10*10 Prompts. Source: https://www.penguin.co.uk/articles/2022/04/best-first-lines-in-books
 prompts_all=[
     "The King is dead. Long live the Queen.",
     "Once there were four children whose names were Peter, Susan, Edmund, and Lucy.",
     "The story so far: in the beginning, the universe was created.",
     "It was a bright cold day in April, and the clocks were striking thirteen.",
     "It is a truth universally acknowledged, that a single man in possession of a good fortune, must be in want of a wife.",
     "The sweat wis lashing oafay Sick Boy; he wis trembling.",
     "124 was spiteful. Full of Baby's venom.",
     "As Gregor Samsa awoke one morning from uneasy dreams he found himself transformed in his bed into a gigantic insect.",
     "I write this sitting in the kitchen sink.",
     "We were somewhere around Barstow on the edge of the desert when the drugs began to take hold.",
 ] * 10

 # load a base model and tokenizer
 model_path="models/llama2-7b"
 model = AutoModelForCausalLM.from_pretrained(
     model_path,    
     device_map={"": accelerator.process_index},
     torch_dtype=torch.bfloat16,
 )
 tokenizer = AutoTokenizer.from_pretrained(model_path)   
 tokenizer.pad_token = tokenizer.eos_token

 # batch, left pad (for inference), and tokenize
 def prepare_prompts(prompts, tokenizer, batch_size=16):
     batches=[prompts[i:i + batch_size] for i in range(0, len(prompts), batch_size)]  
     batches_tok=[]
     tokenizer.padding_side="left"     
     for prompt_batch in batches:
         batches_tok.append(
             tokenizer(
                 prompt_batch, 
                 return_tensors="pt", 
                 padding='longest', 
                 truncation=False, 
                 pad_to_multiple_of=8,
                 add_special_tokens=False).to("cuda") 
             )
     tokenizer.padding_side="right"
     return batches_tok

 # sync GPUs and start the timer
 accelerator.wait_for_everyone()    
 start=time.time()

 # divide the prompt list onto the available GPUs 
 with accelerator.split_between_processes(prompts_all) as prompts:
     results=dict(outputs=[], num_tokens=0)

     # have each GPU do inference in batches
     prompt_batches=prepare_prompts(prompts, tokenizer, batch_size=16)

     for prompts_tokenized in prompt_batches:
         outputs_tokenized=model.generate(**prompts_tokenized, max_new_tokens=100)

         # remove prompt from gen. tokens
         outputs_tokenized=[ tok_out[len(tok_in):] 
             for tok_in, tok_out in zip(prompts_tokenized["input_ids"], outputs_tokenized) ] 

         # count and decode gen. tokens 
         num_tokens=sum([ len(t) for t in outputs_tokenized ])
         outputs=tokenizer.batch_decode(outputs_tokenized)

         # store in results{} to be gathered by accelerate
         results["outputs"].extend(outputs)
         results["num_tokens"] += num_tokens

     results=[ results ] # transform to list, otherwise gather_object() will not collect correctly

 # collect results from all the GPUs
 results_gathered=gather_object(results)

 if accelerator.is_main_process:
     timediff=time.time()-start
     num_tokens=sum([r["num_tokens"] for r in results_gathered ])

     print(f"tokens/sec: {num_tokens//timediff}, time elapsed: {timediff}, num_tokens {num_tokens}")

As you can see, batching speeds things up considerably.

1 GPU: 520 tokens/sec, time: 19.2s

2 GPUs: 900 tokens/sec, time: 11.1s

3 GPUs: 1205 tokens/sec, time: 8.2s

4 GPUs: 1655 tokens/sec, time: 6.0s

5 GPUs: 1658 tokens/sec, time: 6.0s

Summary

As of this writing, llama.cpp and ctransformers do not support multi-GPU inference. llama.cpp appears to have merged multi-GPU support back in June, but I have not seen an official release confirming it, so for now I am treating multi-GPU as unsupported there. If anyone can confirm that it works, please leave a comment.

Hugging Face's Accelerate package gives us a very convenient way to use multiple GPUs. Multi-GPU inference can significantly improve throughput, but the inter-GPU communication overhead also grows noticeably as the number of GPUs increases.

https://avoid.overfit.cn/post/8210f640cae0404a88fd1c9028c6aabb

Author: Geronimo
