Qwen2.5 stands out among the many models available, with significantly enhanced performance in base Japanese processing. Combined with Axcxept's proprietary training process, it enables the development of Japanese LLMs with the highest level of accuracy.
Kazuya Hodatsu
CEO of Axcxept Inc.
About Axcxept
Axcxept is a company focused on artificial intelligence (AI) and cloud computing, dedicated to building innovative solutions and services, particularly in Japanese language processing and large language models (LLMs). The name "Axcxept" reflects the company's determination to embrace the intersection of AI and cloud technology and to lead technological change in a highly adaptable way. Above all, Axcxept aims to deliver safe, efficient, industry-leading solutions that help organizations across industries transform.
Today, the Japanese LLM landscape lacks a dominant "go-to" model built specifically for Japanese speakers. Axcxept set out to fill this gap by developing an LLM that handles complex Japanese language structures precisely and is particularly suited to applications that demand nuanced cultural context.
Why Alibaba Cloud
Qwen2.0's strong Japanese language processing had already attracted wide attention on social media. Qwen2.5 pushed this advantage further, improving both the precision and fluency of Japanese processing while letting users easily customize and extend the functions they need, earning international acclaim. After weighing its options, Axcxept chose Alibaba Cloud as its partner.
Architecture
The EZO series is fine-tuned from Qwen2.5. The 32B and 72B models incorporate Auto-CoT (automatic chain-of-thought prompting) and Retrieval-Augmented Generation (RAG), which together enable support for complex Japanese and continuous updating of the model's knowledge.
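As a rough illustration of the Auto-CoT prompting pattern, the sketch below is a simplified take on the published technique, not Axcxept's actual pipeline; the trigger phrase, helper function, and example demonstration are all illustrative assumptions.

```python
# Simplified Auto-CoT sketch: chain-of-thought demonstrations are
# generated automatically with a zero-shot "think step by step" trigger,
# then reused as few-shot examples for new questions. The full method
# also clusters questions to pick diverse demonstrations.

COT_TRIGGER = "ステップバイステップで考えましょう。"  # "Let's think step by step."

def build_cot_prompt(question: str, demos: list[tuple[str, str]]) -> str:
    """Assemble a prompt from auto-generated (question, reasoning) pairs."""
    parts = [f"Q: {q}\nA: {reasoning}" for q, reasoning in demos]
    parts.append(f"Q: {question}\nA: {COT_TRIGGER}")
    return "\n\n".join(parts)

# Example: one auto-generated demonstration, then a new question.
demos = [("りんごが3個、みかんが2個あります。果物は全部で何個ですか?",
          f"{COT_TRIGGER} りんご3個とみかん2個を足すと、3 + 2 = 5個です。")]
print(build_cot_prompt("卵が6個あり、2個使いました。残りは何個ですか?", demos))
```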
Because EZO x Qwen2.5 can run in a local environment with no dependence on external networks, it offers strong data security. This makes it particularly suitable for industries with strict privacy and security requirements, such as healthcare and government.
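The sketch below shows one way such a model could be served fully offline with the Hugging Face transformers library; the model ID and offline flags are illustrative assumptions, not confirmed deployment details from Axcxept.

```python
# Minimal offline-inference sketch: all weights are assumed to be in the
# local cache already, so no request ever leaves the machine.
import os

os.environ["HF_HUB_OFFLINE"] = "1"          # block Hugging Face Hub access
os.environ["TRANSFORMERS_OFFLINE"] = "1"    # force cache-only loading

from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_ID = "AXCXEPT/EZO-Qwen2.5-32B-Instruct"  # illustrative model ID

tokenizer = AutoTokenizer.from_pretrained(MODEL_ID, local_files_only=True)
model = AutoModelForCausalLM.from_pretrained(
    MODEL_ID, torch_dtype="auto", device_map="auto", local_files_only=True
)

messages = [{"role": "user", "content": "敬語で自己紹介してください。"}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```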
Key Results
- Fine-tuned from Qwen2.5: The EZO x Qwen2.5 models build on Qwen2.5 to strengthen Japanese language processing. They handle complex Japanese language structures with high accuracy and are well suited to applications that require nuanced cultural context.
- High accuracy: On Japanese MT Bench, a leading Japanese LLM evaluation benchmark, EZO x Qwen2.5 surpassed existing closed models, making it a very capable language processing tool for the Japanese market.
- Lightweight and low latency: The model is designed to be lightweight with low latency and runs at high speed on large-scale servers, making it suitable for industries that depend on fast computation, such as finance, technology, and public services.
- Support for complex applications: Beyond high-accuracy Japanese, EZO x Qwen2.5 performs strongly across coding, data extraction, mathematical reasoning, roleplay, and more, with an overall average score of 8.44 versus 8.35 for GPT-4-Turbo.
- Security and privacy: EZO x Qwen2.5 can run in a local environment with no dependence on external networks, ensuring strong data security. This makes it particularly suitable for privacy- and security-sensitive industries such as healthcare and government.
- Continuous updates and extension: EZO x Qwen2.5 supports Retrieval-Augmented Generation (RAG) and continuous updating, allowing it to adapt as requirements change and keep improving service quality; a minimal RAG sketch follows this list.
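To make the RAG idea concrete, the minimal sketch below pairs a small local embedding index with the model so answers can draw on documents added after training. The embedding model, sample documents, and prompt format are illustrative assumptions, not Axcxept's actual pipeline.

```python
# Minimal RAG sketch: embed local documents, retrieve the best match for
# a query, and prepend it as context for the locally hosted LLM.
import numpy as np
from sentence_transformers import SentenceTransformer

# Any multilingual sentence-embedding model works; this one is an example.
embedder = SentenceTransformer("paraphrase-multilingual-MiniLM-L12-v2")

documents = [
    "2025年4月より、申請書は電子提出のみ受け付けます。",
    "窓口の受付時間は平日9時から17時までです。",
]
doc_vectors = embedder.encode(documents, normalize_embeddings=True)

def retrieve(query: str, k: int = 1) -> list[str]:
    """Return the k documents most similar to the query (cosine similarity)."""
    q = embedder.encode([query], normalize_embeddings=True)[0]
    scores = doc_vectors @ q
    return [documents[i] for i in np.argsort(-scores)[:k]]

# Updating the knowledge base is just re-encoding new documents; the
# underlying model weights never change.
query = "申請書は郵送できますか?"
context = "\n".join(retrieve(query))
prompt = f"次の資料に基づいて回答してください。\n資料:\n{context}\n\n質問: {query}"
print(prompt)  # this prompt would then be sent to EZO x Qwen2.5
```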