Multi-GPU LLM Inference with the Accelerate Library

Introduction: Large language models (LLMs) have transformed natural language processing. As these models grow in size and complexity, the computational demands of inference grow significantly as well. To meet this challenge, using multiple GPUs becomes essential.

This article shows how to run inference in parallel across multiple GPUs. It covers an introduction to the Accelerate library, a simple approach with working code examples, and performance benchmarks using multiple GPUs.

We will use several RTX 3090s to scale llama2-7b inference across multiple GPUs.

Basic Example

We start with a simple example that demonstrates multi-GPU "message passing" using Accelerate.

 from accelerate import Accelerator
 from accelerate.utils import gather_object

 accelerator = Accelerator()

 # each GPU creates a string
 message=[ f"Hello this is GPU {accelerator.process_index}" ] 

 # collect the messages from all GPUs
 messages=gather_object(message)

 # output the messages only on the main process with accelerator.print() 
 accelerator.print(messages)

The output looks like this:

 ['Hello this is GPU 0', 
   'Hello this is GPU 1', 
   'Hello this is GPU 2', 
   'Hello this is GPU 3', 
   'Hello this is GPU 4']
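
Each GPU runs its own copy of the script as a separate process. A minimal way to launch it, assuming the snippet above is saved as hello_gpus.py (a hypothetical filename) on a machine with 5 visible GPUs:

 # start one process per GPU with the accelerate CLI
 accelerate launch --num_processes 5 hello_gpus.py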

Multi-GPU Inference

Below is a simple, non-batched inference approach. The code is short because the Accelerate library already does most of the work for us:

 from accelerate import Accelerator
 from accelerate.utils import gather_object
 from transformers import AutoModelForCausalLM, AutoTokenizer
 from statistics import mean
 import torch, time, json

 accelerator = Accelerator()

 # 10*10 Prompts. Source: https://www.penguin.co.uk/articles/2022/04/best-first-lines-in-books
 prompts_all=[
     "The King is dead. Long live the Queen.",
     "Once there were four children whose names were Peter, Susan, Edmund, and Lucy.",
     "The story so far: in the beginning, the universe was created.",
     "It was a bright cold day in April, and the clocks were striking thirteen.",
     "It is a truth universally acknowledged, that a single man in possession of a good fortune, must be in want of a wife.",
     "The sweat wis lashing oafay Sick Boy; he wis trembling.",
     "124 was spiteful. Full of Baby's venom.",
     "As Gregor Samsa awoke one morning from uneasy dreams he found himself transformed in his bed into a gigantic insect.",
     "I write this sitting in the kitchen sink.",
     "We were somewhere around Barstow on the edge of the desert when the drugs began to take hold.",
 ] * 10

 # load a base model and tokenizer
 model_path="models/llama2-7b"
 model = AutoModelForCausalLM.from_pretrained(
     model_path,    
     device_map={"": accelerator.process_index},
     torch_dtype=torch.bfloat16,
 )
 tokenizer = AutoTokenizer.from_pretrained(model_path)   

 # sync GPUs and start the timer
 accelerator.wait_for_everyone()
 start=time.time()

 # divide the prompt list onto the available GPUs 
 with accelerator.split_between_processes(prompts_all) as prompts:
     # store output of generations in dict
     results=dict(outputs=[], num_tokens=0)

     # have each GPU do inference, prompt by prompt
     for prompt in prompts:
         prompt_tokenized=tokenizer(prompt, return_tensors="pt").to("cuda")
         output_tokenized = model.generate(**prompt_tokenized, max_new_tokens=100)[0]

         # remove prompt from output 
         output_tokenized=output_tokenized[len(prompt_tokenized["input_ids"][0]):]

         # store outputs and number of tokens in result{}
         results["outputs"].append( tokenizer.decode(output_tokenized) )
         results["num_tokens"] += len(output_tokenized)

     results=[ results ] # transform to list, otherwise gather_object() will not collect correctly

 # collect results from all the GPUs
 results_gathered=gather_object(results)

 if accelerator.is_main_process:
     timediff=time.time()-start
     num_tokens=sum([r["num_tokens"] for r in results_gathered ])

     print(f"tokens/sec: {num_tokens//timediff}, time {timediff}, total tokens {num_tokens}, total prompts {len(prompts_all)}")

Using multiple GPUs introduces some communication overhead: in this particular setup, throughput grows roughly linearly up to 4 GPUs and then plateaus. Of course, the exact numbers depend on many parameters, such as model size and quantization, prompt length, number of generated tokens, and sampling strategy, so we only discuss the general trend here (a short sketch of how the work gets split follows the numbers below).

1 GPU: 44 tokens/sec, time: 225.5 s

2 GPUs: 88 tokens/sec, time: 112.9 s

3 GPUs: 128 tokens/sec, time: 77.6 s

4 GPUs: 137 tokens/sec, time: 72.7 s

5 GPUs: 119 tokens/sec, time: 83.8 s
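
The near-linear scaling comes from how the work is divided: split_between_processes hands each process an equal share of the prompt list, every GPU holds its own full copy of the model and generates independently, and the processes essentially only communicate when results are gathered at the end. Below is a minimal sketch of that division (a standalone illustration, no model required), launched the same way as the scripts above:

 from accelerate import Accelerator
 from accelerate.utils import gather_object

 accelerator = Accelerator()

 # 100 dummy work items stand in for the 100 prompts
 items = list(range(100))

 with accelerator.split_between_processes(items) as shard:
     # each process only sees its own slice of the list
     counts = [f"GPU {accelerator.process_index}: {len(shard)} items"]

 # e.g. with 4 processes: ['GPU 0: 25 items', 'GPU 1: 25 items', ...]
 accelerator.print(gather_object(counts))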

Batched Inference on Multiple GPUs

In practice we can speed things up with batched inference: each GPU then makes fewer, larger generate calls instead of processing prompts one at a time. We only need to add a prepare_prompts function that feeds the model batches of prompts rather than single ones:

 from accelerate import Accelerator
 from accelerate.utils import gather_object
 from transformers import AutoModelForCausalLM, AutoTokenizer
 from statistics import mean
 import torch, time, json

 accelerator = Accelerator()

 def write_pretty_json(file_path, data):
     import json
     with open(file_path, "w") as write_file:
         json.dump(data, write_file, indent=4)

 # 10*10 Prompts. Source: https://www.penguin.co.uk/articles/2022/04/best-first-lines-in-books
 prompts_all=[
     "The King is dead. Long live the Queen.",
     "Once there were four children whose names were Peter, Susan, Edmund, and Lucy.",
     "The story so far: in the beginning, the universe was created.",
     "It was a bright cold day in April, and the clocks were striking thirteen.",
     "It is a truth universally acknowledged, that a single man in possession of a good fortune, must be in want of a wife.",
     "The sweat wis lashing oafay Sick Boy; he wis trembling.",
     "124 was spiteful. Full of Baby's venom.",
     "As Gregor Samsa awoke one morning from uneasy dreams he found himself transformed in his bed into a gigantic insect.",
     "I write this sitting in the kitchen sink.",
     "We were somewhere around Barstow on the edge of the desert when the drugs began to take hold.",
 ] * 10

 # load a base model and tokenizer
 model_path="models/llama2-7b"
 model = AutoModelForCausalLM.from_pretrained(
     model_path,    
     device_map={"": accelerator.process_index},
     torch_dtype=torch.bfloat16,
 )
 tokenizer = AutoTokenizer.from_pretrained(model_path)   
 tokenizer.pad_token = tokenizer.eos_token

 # batch, left pad (for inference), and tokenize
 def prepare_prompts(prompts, tokenizer, batch_size=16):
     batches=[prompts[i:i + batch_size] for i in range(0, len(prompts), batch_size)]  
     batches_tok=[]
     tokenizer.padding_side="left"     
     for prompt_batch in batches:
         batches_tok.append(
             tokenizer(
                 prompt_batch, 
                 return_tensors="pt", 
                 padding='longest', 
                 truncation=False, 
                 pad_to_multiple_of=8,
                 add_special_tokens=False).to("cuda") 
             )
     tokenizer.padding_side="right"
     return batches_tok

 # sync GPUs and start the timer
 accelerator.wait_for_everyone()    
 start=time.time()

 # divide the prompt list onto the available GPUs 
 with accelerator.split_between_processes(prompts_all) as prompts:
     results=dict(outputs=[], num_tokens=0)

     # have each GPU do inference in batches
     prompt_batches=prepare_prompts(prompts, tokenizer, batch_size=16)

     for prompts_tokenized in prompt_batches:
         outputs_tokenized=model.generate(**prompts_tokenized, max_new_tokens=100)

         # remove prompt from gen. tokens
         outputs_tokenized=[ tok_out[len(tok_in):] 
             for tok_in, tok_out in zip(prompts_tokenized["input_ids"], outputs_tokenized) ] 

         # count and decode gen. tokens 
         num_tokens=sum([ len(t) for t in outputs_tokenized ])
         outputs=tokenizer.batch_decode(outputs_tokenized)

         # store in results{} to be gathered by accelerate
         results["outputs"].extend(outputs)
         results["num_tokens"] += num_tokens

     results=[ results ] # transform to list, otherwise gather_object() will not collect correctly

 # collect results from all the GPUs
 results_gathered=gather_object(results)

 if accelerator.is_main_process:
     timediff=time.time()-start
     num_tokens=sum([r["num_tokens"] for r in results_gathered ])

     print(f"tokens/sec: {num_tokens//timediff}, time elapsed: {timediff}, num_tokens {num_tokens}")

As the numbers show, batching speeds things up considerably.

1 GPU: 520 tokens/sec, time: 19.2 s

2 GPUs: 900 tokens/sec, time: 11.1 s

3 GPUs: 1205 tokens/sec, time: 8.2 s

4 GPUs: 1655 tokens/sec, time: 6.0 s

5 GPUs: 1658 tokens/sec, time: 6.0 s

Summary

As of this writing, llama.cpp and ctransformers do not yet support multi-GPU inference. llama.cpp appears to have merged multi-GPU support in June, but I have not seen it in an official release, so for now I am treating multi-GPU as unsupported there. If anyone can confirm that it works, please leave a comment.

Hugging Face's Accelerate package, on the other hand, gives us a very convenient way to use multiple GPUs. Inference across several GPUs can improve throughput significantly, but the communication overhead between GPUs grows noticeably as more GPUs are added.

https://avoid.overfit.cn/post/8210f640cae0404a88fd1c9028c6aabb

Author: Geronimo
