Deploying DeepSeek but short on IDC GPUs? Alibaba Cloud ACK Edge virtual nodes can help
Alibaba Cloud ACK Edge clusters use a cloud-edge integrated architecture: the Kubernetes control plane is hosted in the cloud, while machines in your IDC join the cluster as data-plane nodes. This brings Kubernetes-based container management to IDC machines, lets you reuse existing hardware, and improves the efficiency of application deployment and operations.
With AI large-model workloads growing rapidly, ACK Edge has already helped many customers manage GPU machines in their IDCs and deploy large-model inference services quickly with containers. The release of DeepSeek R1, however, raises the bar for GPU resources: R1 is an MoE model that needs a machine with at least 8 GPUs to deploy, and because it is natively trained in FP8, newer GPU models are required for good price-performance. Both points put pressure on IDC GPU capacity. Through ACK Edge virtual nodes, you can quickly attach cloud-based ACS Serverless GPU compute and run DeepSeek inference services on it.
This article shows how to manage IDC GPU machines with ACK Edge and deploy a DeepSeek inference service with the ACK cloud-native AI suite: inference Pods run on IDC GPUs first, and when IDC GPU resources run out, the ACK Edge virtual node provisions ACS Serverless GPU compute in the cloud to run additional DeepSeek inference Pods, meeting scale-out needs while optimizing cost.
Elastic ACS Serverless GPU solution based on ACK Edge and virtual nodes
• Connect the local IDC to the cloud VPC through a dedicated line;
• Add the local machines to the ACK Edge cluster so that IDC workloads can be managed and scheduled uniformly from the cloud;
• Configure a custom scheduling policy so that workloads are placed on local IDC resources first and fall back to the cloud virtual node when local resources are insufficient;
• Configure HPA for the workload so that scale-out is triggered automatically when the resource threshold is reached (see the sketch after this list).
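As a concrete illustration of the last point, below is a minimal sketch of what such a GPU-utilization HPA could look like, assuming the DCGM_CUSTOM_PROCESS_SM_UTIL metric is exposed through the cluster's metrics adapter as set up in [8]. The object names and metric type here are illustrative only; in the walkthrough below, Arena generates the actual HPA for you.
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: deepseek-hpa         # hypothetical name; Arena creates its own HPA in Step 3
  namespace: default
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: deepseek-predictor # hypothetical workload name
  minReplicas: 1
  maxReplicas: 3
  metrics:
    - type: External
      external:
        metric:
          name: DCGM_CUSTOM_PROCESS_SM_UTIL  # GPU SM utilization reported by DCGM
        target:
          type: AverageValue
          averageValue: "50"                 # scale out above 50% average utilization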
Solution advantages
• Extreme elasticity: large-scale, second-level scaling to absorb traffic spikes quickly;
• Fine-grained cost control: no need to purchase servers; pay-as-you-go with transparent, controllable costs;
• Rich elastic resources: supports different instance types, including CPU and GPU.
Usage example
Prepare the environment
• Choose a region as the central region and create an ACK Edge cluster [1] in it;
• Install the virtual-node component. For details, see Component management [5];
• Install KServe. See Manage the ack-kserve component [6];
• Install Arena. See Configure the Arena client [7];
• Deploy the monitoring components and configure GPU monitoring metrics. See Implement auto scaling based on GPU metrics [8];
• Create an edge node pool of the dedicated-network type [9] and add the IDC machines to it [10].
Procedure
Step 1: Prepare the DeepSeek-R1-Distill-Qwen-7B model files
1) Run the following commands to download the DeepSeek-R1-Distill-Qwen-7B model from ModelScope.
git lfs install
GIT_LFS_SKIP_SMUDGE=1 git clone https://www.modelscope.cn/deepseek-ai/DeepSeek-R1-Distill-Qwen-7B.git
cd DeepSeek-R1-Distill-Qwen-7B/
git lfs pull
2) Create a directory in OSS and upload the model to it.
For how to install and use the ossutil tool, see Install ossutil [11].
ossutil mkdir oss://<your-bucket-name>/models/DeepSeek-R1-Distill-Qwen-7B
ossutil cp -r ./DeepSeek-R1-Distill-Qwen-7B oss://<your-bucket-name>/models/DeepSeek-R1-Distill-Qwen-7B
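To confirm the upload succeeded, the target prefix can be listed:
ossutil ls oss://<your-bucket-name>/models/DeepSeek-R1-Distill-Qwen-7B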
3) Create a PV and a PVC. Configure a persistent volume named llm-model and a matching persistent volume claim in the target cluster. For details, see Statically mount an OSS volume [12]. Example YAML:
apiVersion: v1
kind: Secret
metadata:
  name: oss-secret
stringData:
  akId: <your-oss-ak>      # AccessKey ID used to access OSS
  akSecret: <your-oss-sk>  # AccessKey Secret used to access OSS
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: llm-model
  labels:
    alicloud-pvname: llm-model
spec:
  capacity:
    storage: 30Gi
  accessModes:
    - ReadOnlyMany
  persistentVolumeReclaimPolicy: Retain
  csi:
    driver: ossplugin.csi.alibabacloud.com
    volumeHandle: llm-model
    nodePublishSecretRef:
      name: oss-secret
      namespace: default
    volumeAttributes:
      bucket: <your-bucket-name>   # Bucket name
      url: <your-bucket-endpoint>  # Endpoint; the internal endpoint, such as oss-cn-hangzhou-internal.aliyuncs.com, is recommended
      otherOpts: "-o umask=022 -o max_stat_cache_size=0 -o allow_other"
      path: <your-model-path>      # /models/DeepSeek-R1-Distill-Qwen-7B/ in this example
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: llm-model
spec:
  accessModes:
    - ReadOnlyMany
  resources:
    requests:
      storage: 30Gi
  selector:
    matchLabels:
      alicloud-pvname: llm-model
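After saving the manifest (the file name below is just an example), create the resources and check that the PVC binds; a Bound status means the OSS volume is ready to be mounted:
kubectl apply -f llm-model-pvc.yaml
kubectl get pv llm-model
kubectl get pvc llm-model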
Step 2: Create a custom scheduling policy
Configure the scheduling priority so that Pods are preferentially placed in the edge node pool and fall back to the virtual node when the edge node pool runs out of resources.
• Save the following ResourcePolicy to a YAML file and create it, for example: kubectl create -f deepseek-resourcepolicy.yaml.
apiVersion: scheduling.alibabacloud.com/v1alpha1
kind: ResourcePolicy
metadata:
  name: deepseek
  namespace: default
spec:
  selector:
    app: isvc.deepseek-predictor # Must match the label of the Pods created later.
  strategy: prefer
  units:
    - resource: ecs
      nodeSelector:
        alibabacloud.com/nodepool-id: np********* # Edge node pool ID
    - resource: eci
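Once created, the policy can be confirmed with a standard get on the custom resource (the resource name and output columns may vary with the component version):
kubectl get resourcepolicy deepseek -n default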
Step 3: Deploy the model
1) Run the following command to check the nodes in the cluster.
kubectl get nodes -owide
Expected output:
NAME                            STATUS   ROLES    AGE     VERSION            INTERNAL-IP   EXTERNAL-IP   OS-IMAGE                                              KERNEL-VERSION           CONTAINER-RUNTIME
cn-hangzhou.10.4.0.25           Ready    <none>   10d     v1.30.7-aliyun.1   10.4.0.25     <none>        Alibaba Cloud Linux 3.2104 U11 (OpenAnolis Edition)   5.10.134-18.al8.x86_64   containerd://1.6.36
cn-hangzhou.10.4.0.26           Ready    <none>   10d     v1.30.7-aliyun.1   10.4.0.26     <none>        Alibaba Cloud Linux 3.2104 U11 (OpenAnolis Edition)   5.10.134-18.al8.x86_64   containerd://1.6.36
idc001                          Ready    <none>   31s     v1.30.7-aliyun.1   10.4.0.185    <none>        Alibaba Cloud Linux 3.2104 U11 (OpenAnolis Edition)   5.10.134-18.al8.x86_64   containerd://1.6.36
virtual-kubelet-cn-hangzhou-b   Ready    agent    7d21h   v1.30.7-aliyun.1   10.4.0.180    <none>        <unknown>                                              <unknown>                <unknown>
The output shows one IDC node (idc001) and one virtual node (virtual-kubelet-cn-hangzhou-b). The IDC node has a single V100 GPU.
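To double-check that the GPU is registered as an allocatable resource on the IDC node, inspect the node object (this assumes the NVIDIA device plugin advertises the standard nvidia.com/gpu resource name):
kubectl describe node idc001 | grep nvidia.com/gpu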
2) Run the following command to deploy the DeepSeek model inference service on the vLLM inference framework.
arena serve kserve \
    --name=deepseek \
    --annotation=k8s.aliyun.com/eci-use-specs=ecs.gn6e-c12g1.3xlarge \
    --annotation=k8s.aliyun.com/eci-vswitch=vsw-*********,vsw-********* \
    --image=kube-ai-registry.cn-shanghai.cr.aliyuncs.com/kube-ai/vllm:v0.6.6 \
    --gpus=1 \
    --cpu=4 \
    --memory=12Gi \
    --scale-metric=DCGM_CUSTOM_PROCESS_SM_UTIL \
    --scale-target=50 \
    --min-replicas=1 \
    --max-replicas=3 \
    --data=llm-model:/model/DeepSeek-R1-Distill-Qwen-7B \
    "vllm serve /model/DeepSeek-R1-Distill-Qwen-7B --port 8080 --trust-remote-code --served-model-name deepseek-r1 --max-model-len 32768 --gpu-memory-utilization 0.95 --enforce-eager --dtype=half"
Expected output:
WARNING: Kubernetes configuration file is group-readable. This is insecure. Location: /Users/bingchang/.kube/config
WARNING: Kubernetes configuration file is world-readable. This is insecure. Location: /Users/bingchang/.kube/config
horizontalpodautoscaler.autoscaling/deepseek-hpa created
inferenceservice.serving.kserve.io/deepseek created
INFO[0002] The Job deepseek has been submitted successfully
INFO[0002] You can run `arena serve get deepseek --type kserve -n default` to check the job status
3) Run the following command to view the details of the inference service.
arena serve get deepseek
Expected output:
Name:       deepseek
Namespace:  default
Type:       KServe
Version:    1
Desired:    1
Available:  1
Age:        1m
Address:    http://deepseek-default.example.com
Port:       :80
GPU:        1

Instances:
  NAME                                 STATUS   AGE  READY  RESTARTS  GPU  NODE
  ----                                 ------   ---  -----  --------  ---  ----
  deepseek-predictor-6b9455f8c5-wl5lc  Running  1m   1/1    0         1    idc001
The output shows that the inference Pod was scheduled to the IDC node.
4) After deployment, send a request directly to the service to verify that it is working. The request address can be found in the details of the Ingress resource that KServe creates automatically.
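For example, the host can be read from the Ingress object and the NodePort from the ingress controller's Service; the controller's namespace and Service name depend on how it was installed in your cluster:
kubectl get ingress -n default
kubectl get svc -A | grep -i ingress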
curl -H "Host: deepseek-default.example.com" \
     -H "Content-Type: application/json" \
     http://<idc-node-ip>:<ingress-svc-nodeport>/v1/chat/completions \
     -d '{"model": "deepseek-r1", "messages": [{"role": "user", "content": "Say this is a test!"}], "max_tokens": 512, "temperature": 0.7, "top_p": 0.9, "seed": 10}'
Expected output:
{"id":"chatcmpl-efc1225ad2f33cc39a8ddbc4039a41b9","object":"chat.completion","created":1739861087,"model":"deepseek-r1","choices":[{"index":0,"message":{"role":"assistant","content":"Okay, so I need to figure out how to say \"This is a test!\" in Spanish. Hmm, I'm not super fluent in Spanish, but I know some basic phrases. Let me think about how to approach this.\n\nFirst, I remember that \"test\" is \"prueba\" in Spanish. So maybe I can start with \"Esto es una prueba.\" But I'm not sure if that's the best way to say it. Maybe there's a more common expression or a different structure.\n\nWait, I think there's a phrase that's commonly used in tests. Isn't it something like \"This is a test.\" or \"This is a quiz.\"? I think the Spanish equivalent would be \"Este es un test.\" That sounds more natural. Let me check if that makes sense.\n\nI can also think about how people use phrases in tests. Maybe they use \"This is the test\" or \"This is an exam.\" So perhaps \"Este es el test.\" or \"Este es el examen.\" I'm not sure which one is more appropriate.\n\nI should also consider the grammar. \"This is a test\" is a simple statement, so the subject is \"this\" (using \"este\"), the verb is \"is\" (using \"es\"), and the object is \"a test\" (using \"un test\"). So putting it together, it would be \"Este es un test.\"\n\nWait, but sometimes people use \"This is the test\" when referring to an important one, so maybe \"Este es el test.\" But I'm not entirely sure if that's the correct structure. Let me think about other similar phrases.\n\nI also recall that in some contexts, people might say \"This is a practice test\" or \"This is a sample test.\" But since the user just said \"This is a test,\" the most straightforward translation would be \"Este es un test.\"\n\nI should also consider if there are any idiomatic expressions or common phrases that are used in this context. For example, \"This is the test\" is often used to mean a significant exam or evaluation, so \"Este es el test\" might be more appropriate in that context.\n\nBut I'm a bit confused because I'm not 100% sure about the correct structure. Maybe I should look up some examples. Oh, wait, I can't look things up right now, so I'll have to rely on my memory.\n\nI think the basic structure is subject + verb + object. So \"this\" (this is \"este","tool_calls":[]},"logprobs":null,"finish_reason":"length","stop_reason":null}],"usage":{"prompt_tokens":11,"total_tokens":523,"completion_tokens":512,"prompt_tokens_details":null},"prompt_logprobs":null}
Step 4: Simulate peak traffic to trigger cloud elasticity
Use the load-testing tool hey to send a large number of requests to the service.
hey -z 5m -c 5 \
    -m POST -host deepseek-default.example.com \
    -H "Content-Type: application/json" \
    -d '{"model": "deepseek-r1", "messages": [{"role": "user", "content": "Say this is a test!"}], "max_tokens": 512, "temperature": 0.7, "top_p": 0.9, "seed": 10}' \
    http://<idc-node-ip>:<ingress-svc-nodeport>/v1/chat/completions
With this many requests, GPU utilization climbs above the threshold and Pod scale-out is triggered. Run the following command to view the inference service details:
arena serve get deepseek
Expected output:
Name:       deepseek
Namespace:  default
Type:       KServe
Version:    1
Desired:    3
Available:  2
Age:        18m
Address:    http://deepseek-default.example.com
Port:       :80
GPU:        3

Instances:
  NAME                                 STATUS   AGE  READY  RESTARTS  GPU  NODE
  ----                                 ------   ---  -----  --------  ---  ----
  deepseek-predictor-6b9455f8c5-dtzdv  Running  1m   0/1    0         1    virtual-kubelet-cn-hangzhou-h
  deepseek-predictor-6b9455f8c5-wl5lc  Running  18m  1/1    0         1    idc001
  deepseek-predictor-6b9455f8c5-zmpg8  Running  5m   1/1    0         1    virtual-kubelet-cn-hangzhou-h
Two additional replicas have now been scaled out onto the virtual node.
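The scale-out can also be watched live by following the Pods that match the selector label used in the ResourcePolicy from Step 2:
kubectl get pods -l app=isvc.deepseek-predictor -o wide -w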
Summary
ACK Edge uses a cloud-edge integrated, cloud-native architecture that hosts the Kubernetes control plane for you and can manage IDC resources, ENS resources, cross-region ECS resources, and more. While reducing the complexity of managing distributed resources and workloads, it integrates seamlessly with existing cloud elasticity to meet the scaling needs of on-premises services. Combined with virtual nodes, ACK Edge handles burst scenarios better, controls resource costs with finer granularity, and keeps workloads running reliably.
Related documentation:
[1] Create an ACK Edge cluster
https://help.aliyun.com/zh/ack/ack-edge/user-guide/create-an-ack-edge-cluster-1
[2] Add an edge node
https://help.aliyun.com/zh/ack/ack-edge/user-guide/add-an-edge-node
[3] Virtual node management
https://help.aliyun.com/zh/ack/ack-edge/user-guide/virtual-node-management
[4] Custom elastic priority-based scheduling
[5] ACK Edge cluster component management
[6] Deploy and manage the ack-kserve component in an ACK cluster
[7] Configure the Arena client
https://help.aliyun.com/zh/ack/cloud-native-ai-suite/user-guide/install-arena#task-1917487
[8] Implement auto scaling based on GPU metrics
[9] Create and manage edge node pools
[10] Add an edge node
[11] Install ossutil
[12] Statically mount an OSS volume
https://help.aliyun.com/zh/cs/user-guide/oss-child-node-1