Alibaba Cloud's Qwen (Tongyi Qianwen) team recently released its powerful flagship model Qwen2.5-Max and open-sourced two more models: the upgraded visual understanding model Qwen2.5-VL and Qwen2.5-1M, which supports long-text processing of up to one million tokens. The releases showcase Qwen's progress at the frontier of large-model research and give developers and enterprises strong technical support.
Flagship Qwen2.5-Max: the latest exploration of MoE models
Qwen's upgraded flagship model, Qwen2.5-Max, is the team's latest exploration of Mixture-of-Experts (MoE) models. Pretrained on more than 20 trillion tokens, it delivers strong overall performance and scores highly on several mainstream model benchmarks. Developers can try the model on the Qwen Chat platform, while enterprises and institutions can call the new model's API directly through Alibaba Cloud Model Studio.
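As a sketch of what such an API call might look like, the snippet below builds a chat-completions request body for an OpenAI-compatible endpoint. The endpoint URL and the `qwen-max` model name are assumptions for illustration; consult the Model Studio documentation for the exact values.

```python
# Hypothetical sketch: constructing a chat request for Qwen2.5-Max through an
# OpenAI-compatible endpoint. URL and model name are assumptions, not confirmed
# by this article; check Alibaba Cloud Model Studio docs before use.
import json

BASE_URL = "https://dashscope.aliyuncs.com/compatible-mode/v1"  # assumed endpoint


def build_chat_request(user_message: str, model: str = "qwen-max") -> dict:
    """Build the JSON body for a POST to {BASE_URL}/chat/completions."""
    return {
        "model": model,
        "messages": [
            {"role": "system", "content": "You are a helpful assistant."},
            {"role": "user", "content": user_message},
        ],
    }


payload = build_chat_request("Explain Mixture-of-Experts in one sentence.")
print(json.dumps(payload, indent=2))
```

Sending the payload with an HTTP client and an API key would then return the model's reply in the standard chat-completions response shape.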
The Qwen team evaluated both the instruct and base versions of Qwen2.5-Max on mainstream, authoritative benchmarks covering knowledge (MMLU-Pro, which tests college-level knowledge), coding (LiveCodeBench), overall capability (LiveBench), and human preference alignment (Arena-Hard).
The instruct model is the version anyone can chat with directly. On benchmarks such as Arena-Hard, LiveBench, LiveCodeBench, and GPQA-Diamond, Qwen2.5-Max outperformed DeepSeek V3, and it also posted highly competitive results in other evaluations such as MMLU-Pro.
Qwen2.5-Max also achieved notable results on Chatbot Arena, the authoritative third-party benchmark platform that evaluates the world's best large language models and AI chatbots. In Chatbot Arena's latest blind-test leaderboard, Qwen2.5-Max ranked seventh globally in overall score, on par with other top models. It ranked first in individual categories including math and coding, and second in hard prompts, i.e., complex prompts for challenging tasks.
Qwen2.5-Max ranks impressively in Chatbot Arena's latest LLM leaderboard
Qwen2.5-Max ranks first in math and coding, and second in hard prompts for challenging tasks
Visual understanding model Qwen2.5-VL: significantly improved multimodal capabilities
Qwen also open-sourced the all-new visual understanding model Qwen2.5-VL in three sizes: 3B, 7B, and 72B. The flagship Qwen2.5-VL-72B took first place in visual understanding across 13 authoritative evaluations. Qwen2.5-VL models in various sizes and quantized versions are now open-sourced on platforms including ModelScope and Hugging Face, and developers can also try the latest model directly on Qwen Chat.
Qwen2.5-VL demonstrates strong multimodal capabilities. It precisely recognizes objects and parses complex image content, and it can understand videos longer than an hour and answer questions about them accurately. It can also convert unstructured data such as invoices and forms into structured formats like JSON, making it well suited to scenarios such as automatically generating financial reports and legal documents.
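To illustrate the structured-extraction workflow, the sketch below validates the kind of JSON record a vision-language model might return for an invoice image. The raw response string and field names are made up for illustration, not actual Qwen2.5-VL output.

```python
# Illustrative sketch (not actual Qwen2.5-VL output): validating a structured
# JSON record extracted from an invoice image before downstream use.
import json

# Hypothetical raw model response for an invoice photo.
raw_response = (
    '{"invoice_no": "INV-2025-001", "date": "2025-01-28",'
    ' "total": 1234.50, "currency": "CNY"}'
)


def parse_invoice(text: str) -> dict:
    """Parse model output into a dict and check the fields downstream code needs."""
    record = json.loads(text)
    for field in ("invoice_no", "date", "total"):
        if field not in record:
            raise ValueError(f"missing field: {field}")
    return record


invoice = parse_invoice(raw_response)
print(invoice["invoice_no"], invoice["total"])  # INV-2025-001 1234.5
```

Validating required fields before handing the record to accounting or legal pipelines is what makes model-extracted JSON safe to automate on.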
Qwen2.5-VL can even operate directly as a visual agent: by directing the use of various tools, it can carry out multi-step tasks on computers and mobile devices, such as checking the weather or booking flights.
On the technical side, compared with the previous-generation Qwen2-VL, Qwen2.5-VL strengthens the model's perception of time and spatial scale and further simplifies the network structure to improve efficiency. For the crucial vision encoder, the Qwen team trained a native dynamic-resolution ViT from scratch and adopted an innovative architecture, giving Qwen2.5-VL leaner and more efficient visual encoding and decoding.
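A rough sketch of what native dynamic resolution implies: the number of visual tokens scales with the image's actual size rather than a fixed crop. The 14-pixel patch size and 2x2 token merging below follow the earlier Qwen2-VL report; treat them as assumptions for Qwen2.5-VL.

```python
# Rough sketch: token count for a native-dynamic-resolution ViT. Patch size 14
# and 2x2 merging are taken from the Qwen2-VL report and assumed here for
# Qwen2.5-VL; the real preprocessing also rounds image sizes to multiples.
def visual_token_count(height: int, width: int, patch: int = 14, merge: int = 2) -> int:
    """Each patch is `patch` px square; `merge` x `merge` patches fuse into one token."""
    h_patches = height // patch
    w_patches = width // patch
    return (h_patches // merge) * (w_patches // merge)


print(visual_token_count(448, 448))    # 256 tokens for a small square image
print(visual_token_count(896, 1344))   # 1536 tokens for a larger image
```

The point of the design: small images cost few tokens, large or oddly shaped images keep their detail, and no resizing to a fixed square is needed.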
Qwen2.5-VL benchmark scores
Qwen2.5-1M breaks through one million tokens
In addition, Alibaba Cloud's Qwen open-sourced Qwen2.5-1M, which supports a context of one million tokens, in 7B and 14B sizes, along with an open-sourced inference framework that delivers nearly a 7x speedup when processing million-token long-text inputs.
Qwen2.5-1M is open-sourced on ModelScope, Hugging Face, and other platforms, and the accompanying inference framework is open-sourced on GitHub. Developers and enterprises can also call the Qwen2.5-Turbo model API through Alibaba Cloud Model Studio, or try the model's performance on Qwen Chat.
Qwen2.5-1M has excellent long-text processing ability. In the needle-in-a-haystack (Passkey Retrieval) task with a context length of one million tokens, Qwen2.5-1M accurately retrieves hidden information from a 1M-token document; only the 7B model made a small number of errors. On benchmarks for complex long-context understanding such as RULER and LV-Eval, Qwen2.5-14B-Instruct-1M performs strongly, giving developers an excellent open-source alternative to existing long-context models.
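The passkey-retrieval setup is simple to sketch: a short passkey is buried at a random position inside long filler text, and the model must recall it. Filler sentences and sizes below are illustrative, not the benchmark's actual configuration.

```python
# Sketch of a "needle in a haystack" (passkey retrieval) prompt: bury a short
# passkey in long filler text and ask for it back. Filler text and sizes are
# illustrative; the real benchmark uses its own templates and lengths.
import random


def build_passkey_prompt(passkey: str, n_filler: int = 10_000, seed: int = 0) -> str:
    rng = random.Random(seed)
    filler = "The grass is green. The sky is blue. The sun is bright. "
    chunks = [filler] * n_filler
    # Insert the needle at a random position inside the haystack.
    pos = rng.randrange(len(chunks))
    chunks.insert(pos, f"The pass key is {passkey}. Remember it. ")
    return "".join(chunks) + "\nWhat is the pass key?"


prompt = build_passkey_prompt("71432")
print("71432" in prompt, len(prompt) > 100_000)  # True True
```

Accuracy on this task as the haystack grows toward a million tokens is what the article's "accurately retrieves hidden information" claim measures.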
Because long-text training demands heavy compute, the Qwen team progressively extended Qwen2.5-1M's context length from 4K to 256K, then used the Dual Chunk Attention mechanism to stably extend the context to 1M without additional training. The team also introduced a sparse attention mechanism on top of the vLLM engine and optimized multiple stages of the pipeline to improve inference efficiency.
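The underlying idea of such training-free extension can be sketched in a toy form: make sure the relative positions the attention mechanism sees never exceed the range covered during training. The clamp below is NOT the actual Dual Chunk Attention scheme (which splits attention into intra-, inter-, and successive-chunk parts); it only illustrates the shared goal.

```python
# Toy illustration of training-free context extension: cap the relative
# distance fed to the position encoding at the trained range. This is a
# deliberate simplification, not the real Dual Chunk Attention algorithm.
def effective_distance(q_pos: int, k_pos: int, trained_window: int = 256_000) -> int:
    """Relative distance seen by the position encoding, capped at the trained range."""
    assert k_pos <= q_pos, "causal attention: keys precede the query"
    return min(q_pos - k_pos, trained_window - 1)


# A query at the 1M-token mark never sees a distance beyond the trained window.
print(effective_distance(1_000_000, 0))        # 255999
print(effective_distance(1_000_000, 999_000))  # 1000
```

Dual Chunk Attention achieves the same bound more carefully, preserving distinct position patterns within and across chunks so nearby tokens keep their locality.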
Performance of the Qwen2.5 model series on RULER
Alibaba Cloud’s Qwen2.5-Max Secures Top Rankings in Chatbot Arena
Alibaba Cloud’s latest proprietary large language model (LLM), Qwen2.5-Max, has achieved impressive results on Chatbot Arena, a well-recognized open platform that evaluates the world’s best LLMs and AI chatbots. Ranked #7 overall in the Arena score, Qwen2.5-Max matches other top proprietary LLMs and demonstrates exceptional capabilities, particularly in technical domains. It ranks #1 in math and coding and #2 in hard prompts, which involve complex prompts for challenging tasks, solidifying its status as a powerhouse in tackling complex tasks.
Qwen2.5-Max Ranked #7 on Chatbot Arena
Qwen2.5-Max ranks 1st in math and coding, and 2nd in hard prompts
As a cutting-edge Mixture of Experts (MoE) model, Qwen2.5-Max has been pretrained on over 20 trillion tokens and further refined with Supervised Fine-Tuning (SFT) and Reinforcement Learning from Human Feedback (RLHF) techniques. Leveraging these technological advancements, Qwen2.5-Max has demonstrated exceptional strengths in knowledge, coding, general capabilities, and human alignment, securing leading scores in major benchmarks including MMLU-Pro, LiveCodeBench, LiveBench, and Arena-Hard.
Developers and businesses worldwide can seamlessly access Qwen2.5-Max through Model Studio, Alibaba Cloud’s generative AI development platform, which offers both high performance and cost efficiency. They can also experience the model’s capabilities on the Qwen Chat platform.
Over the past year, Alibaba Cloud has continuously expanded the Qwen family, releasing a series of Qwen models across text, audio, and visual formats in various sizes to meet the increasing AI demands from developers and customers worldwide. Last month, it unveiled its latest open-source vision-language model, Qwen2.5-VL, which exhibits remarkable multimodal capabilities and can act as a visual agent to facilitate task execution on computers and mobile devices. It also released Qwen2.5-1M, an open-source model capable of processing long context inputs of up to 1 million tokens. Earlier this year, it unveiled an expanded suite of LLMs and AI development tools, upgraded infrastructure offerings, and new support programs for global developers during its Global Developer Summit in Jakarta.
Qwen2.5-VL benchmark scores
Performance of the Qwen2.5 model series on RULER