Shanghai AI Laboratory recently open-sourced InternVL3.5 (书生·万象), a multimodal large model family that delivers across-the-board upgrades in reasoning capability, deployment efficiency, and general ability through an innovative Cascade Reinforcement Learning (Cascade RL) scheme, dynamic visual resolution routing, and a decoupled deployment architecture.
This release open-sources the full lineup of InternVL3.5, from 1B to 241B parameters. Every size sets a new performance bar for open-source models, reaching leading levels on general multimodal perception, multimodal reasoning, and text tasks, while also posting significant gains on specialized tasks such as graphical user interface (GUI) agents, embodied spatial perception, and vector-image understanding and generation.
Technical report:
https://www.modelscope.cn/papers/2508.18265
Code / model usage:
https://github.com/OpenGVLab/InternVL
Model collection:
https://www.modelscope.cn/collections/InternVL35-Full-3871e58bf21349
Online demo:
https://chat.intern-ai.org.cn/
01. Model Highlights
- Nine model sizes spanning 1B to 241B parameters, covering scenarios with different resource budgets and including both dense and mixture-of-experts (MoE) variants; this is the first open-source multimodal large model built on the GPT-OSS language model base;
- The flagship InternVL3.5-241B-A28B scores 77.7 on the multi-discipline reasoning benchmark MMMU, the highest among open-source models, and reaches 77.9 on the general multimodal perception benchmark MMStar and 90.7 on OCRBench, surpassing GPT-5 (75.7 / 80.7); on the text reasoning benchmarks AIME25 and MMLU-Pro it scores 75.6 and 81.3, comprehensively leading existing open-source multimodal models;
- Powered by the Cascade Reinforcement Learning (Cascade RL) framework, the whole series improves reasoning performance by an average of 16.0 points over the previous generation. InternVL3.5-241B-A28B reaches an overall reasoning score of 66.9, surpassing both the previous generation's 54.6 and Claude-3.7-Sonnet's 53.9, with standout results on complex tasks such as mathematical and logical reasoning;
- With the novel Visual Resolution Router (ViR) and the decoupled deployment framework (DvD), the 38B model responds far faster at 896 resolution, with per-inference latency dropping from 369 ms to 91 ms (roughly a 4x speedup); meanwhile, the lightweight InternVL3.5-Flash retains nearly 100% of the performance while cutting the visual sequence length by 50%;
- Strengthened agent-centric capabilities, including GUI agents, embodied agents, and SVG understanding and generation, surpassing mainstream open-source models on ScreenSpot GUI grounding (92.9), VSI-Bench spatial reasoning (69.5), and SGP-Bench vector-graphics understanding (70.6).
For demonstrations of the model's capabilities, see the detailed examples in the official release post.
02. Model Inference
The official repo provides example code for running InternVL3.5-8B with transformers. The 8B model can be deployed on a single A100 GPU, while the 38B model requires 2 A100 GPUs and the 241B model requires 8 A100 GPUs.
Please use transformers>=4.52.1 to ensure the model works correctly. For the 20B variant, transformers>=4.55.0 is required.
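Before running the example, you can sanity-check the installed version at runtime. This is a minimal sketch (packaging is already a dependency of transformers, so no extra install is needed):

import transformers
from packaging import version

# InternVL3.5 needs transformers >= 4.52.1 (the 20B variant needs >= 4.55.0).
installed = version.parse(transformers.__version__)
assert installed >= version.parse("4.52.1"), (
    f"transformers {transformers.__version__} is too old; "
    "upgrade with: pip install -U 'transformers>=4.52.1'"
)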
import math
import numpy as np
import torch
import torchvision.transforms as T
from decord import VideoReader, cpu
from PIL import Image
from torchvision.transforms.functional import InterpolationMode
from modelscope import AutoModel, AutoTokenizer

IMAGENET_MEAN = (0.485, 0.456, 0.406)
IMAGENET_STD = (0.229, 0.224, 0.225)

def build_transform(input_size):
    MEAN, STD = IMAGENET_MEAN, IMAGENET_STD
    transform = T.Compose([
        T.Lambda(lambda img: img.convert('RGB') if img.mode != 'RGB' else img),
        T.Resize((input_size, input_size), interpolation=InterpolationMode.BICUBIC),
        T.ToTensor(),
        T.Normalize(mean=MEAN, std=STD)
    ])
    return transform

def find_closest_aspect_ratio(aspect_ratio, target_ratios, width, height, image_size):
    best_ratio_diff = float('inf')
    best_ratio = (1, 1)
    area = width * height
    for ratio in target_ratios:
        target_aspect_ratio = ratio[0] / ratio[1]
        ratio_diff = abs(aspect_ratio - target_aspect_ratio)
        if ratio_diff < best_ratio_diff:
            best_ratio_diff = ratio_diff
            best_ratio = ratio
        elif ratio_diff == best_ratio_diff:
            if area > 0.5 * image_size * image_size * ratio[0] * ratio[1]:
                best_ratio = ratio
    return best_ratio

def dynamic_preprocess(image, min_num=1, max_num=12, image_size=448, use_thumbnail=False):
    orig_width, orig_height = image.size
    aspect_ratio = orig_width / orig_height

    # calculate the existing image aspect ratio
    target_ratios = set(
        (i, j) for n in range(min_num, max_num + 1)
        for i in range(1, n + 1) for j in range(1, n + 1)
        if i * j <= max_num and i * j >= min_num)
    target_ratios = sorted(target_ratios, key=lambda x: x[0] * x[1])

    # find the closest aspect ratio to the target
    target_aspect_ratio = find_closest_aspect_ratio(
        aspect_ratio, target_ratios, orig_width, orig_height, image_size)

    # calculate the target width and height
    target_width = image_size * target_aspect_ratio[0]
    target_height = image_size * target_aspect_ratio[1]
    blocks = target_aspect_ratio[0] * target_aspect_ratio[1]

    # resize the image
    resized_img = image.resize((target_width, target_height))
    processed_images = []
    for i in range(blocks):
        box = (
            (i % (target_width // image_size)) * image_size,
            (i // (target_width // image_size)) * image_size,
            ((i % (target_width // image_size)) + 1) * image_size,
            ((i // (target_width // image_size)) + 1) * image_size
        )
        # split the image
        split_img = resized_img.crop(box)
        processed_images.append(split_img)
    assert len(processed_images) == blocks
    if use_thumbnail and len(processed_images) != 1:
        thumbnail_img = image.resize((image_size, image_size))
        processed_images.append(thumbnail_img)
    return processed_images

def load_image(image_file, input_size=448, max_num=12):
    image = Image.open(image_file).convert('RGB')
    transform = build_transform(input_size=input_size)
    images = dynamic_preprocess(image, image_size=input_size, use_thumbnail=True, max_num=max_num)
    pixel_values = [transform(image) for image in images]
    pixel_values = torch.stack(pixel_values)
    return pixel_values

path = 'OpenGVLab/InternVL3_5-8B'
model = AutoModel.from_pretrained(
    path,
    torch_dtype=torch.bfloat16,
    load_in_8bit=False,
    low_cpu_mem_usage=True,
    use_flash_attn=True,
    trust_remote_code=True,
    device_map="auto").eval()
tokenizer = AutoTokenizer.from_pretrained(path, trust_remote_code=True, use_fast=False)

# set the max number of tiles in `max_num`
pixel_values = load_image('./examples/image1.jpg', max_num=12).to(torch.bfloat16).cuda()
generation_config = dict(max_new_tokens=1024, do_sample=True)

# pure-text conversation
question = 'Hello, who are you?'
response, history = model.chat(tokenizer, None, question, generation_config,
                               history=None, return_history=True)
print(f'User: {question}\nAssistant: {response}')

question = 'Can you tell me a story?'
response, history = model.chat(tokenizer, None, question, generation_config,
                               history=history, return_history=True)
print(f'User: {question}\nAssistant: {response}')

# single-image single-round conversation
question = '<image>\nPlease describe the image shortly.'
response = model.chat(tokenizer, pixel_values, question, generation_config)
print(f'User: {question}\nAssistant: {response}')

# single-image multi-round conversation
question = '<image>\nPlease describe the image in detail.'
response, history = model.chat(tokenizer, pixel_values, question, generation_config,
                               history=None, return_history=True)
print(f'User: {question}\nAssistant: {response}')

question = 'Please write a poem according to the image.'
response, history = model.chat(tokenizer, pixel_values, question, generation_config,
                               history=history, return_history=True)
print(f'User: {question}\nAssistant: {response}')

# multi-image multi-round conversation, combined images
pixel_values1 = load_image('./examples/image1.jpg', max_num=12).to(torch.bfloat16).cuda()
pixel_values2 = load_image('./examples/image2.jpg', max_num=12).to(torch.bfloat16).cuda()
pixel_values = torch.cat((pixel_values1, pixel_values2), dim=0)

question = '<image>\nDescribe the two images in detail.'
response, history = model.chat(tokenizer, pixel_values, question, generation_config,
                               history=None, return_history=True)
print(f'User: {question}\nAssistant: {response}')

question = 'What are the similarities and differences between these two images.'
response, history = model.chat(tokenizer, pixel_values, question, generation_config,
                               history=history, return_history=True)
print(f'User: {question}\nAssistant: {response}')

# multi-image multi-round conversation, separate images
pixel_values1 = load_image('./examples/image1.jpg', max_num=12).to(torch.bfloat16).cuda()
pixel_values2 = load_image('./examples/image2.jpg', max_num=12).to(torch.bfloat16).cuda()
pixel_values = torch.cat((pixel_values1, pixel_values2), dim=0)
num_patches_list = [pixel_values1.size(0), pixel_values2.size(0)]

question = 'Image-1: <image>\nImage-2: <image>\nDescribe the two images in detail.'
response, history = model.chat(tokenizer, pixel_values, question, generation_config,
                               num_patches_list=num_patches_list,
                               history=None, return_history=True)
print(f'User: {question}\nAssistant: {response}')

question = 'What are the similarities and differences between these two images.'
response, history = model.chat(tokenizer, pixel_values, question, generation_config,
                               num_patches_list=num_patches_list,
                               history=history, return_history=True)
print(f'User: {question}\nAssistant: {response}')

# batch inference, single image per sample
pixel_values1 = load_image('./examples/image1.jpg', max_num=12).to(torch.bfloat16).cuda()
pixel_values2 = load_image('./examples/image2.jpg', max_num=12).to(torch.bfloat16).cuda()
num_patches_list = [pixel_values1.size(0), pixel_values2.size(0)]
pixel_values = torch.cat((pixel_values1, pixel_values2), dim=0)

questions = ['<image>\nDescribe the image in detail.'] * len(num_patches_list)
responses = model.batch_chat(tokenizer, pixel_values,
                             num_patches_list=num_patches_list,
                             questions=questions,
                             generation_config=generation_config)
for question, response in zip(questions, responses):
    print(f'User: {question}\nAssistant: {response}')

# video multi-round conversation
def get_index(bound, fps, max_frame, first_idx=0, num_segments=32):
    if bound:
        start, end = bound[0], bound[1]
    else:
        start, end = -100000, 100000
    start_idx = max(first_idx, round(start * fps))
    end_idx = min(round(end * fps), max_frame)
    seg_size = float(end_idx - start_idx) / num_segments
    frame_indices = np.array([
        int(start_idx + (seg_size / 2) + np.round(seg_size * idx))
        for idx in range(num_segments)
    ])
    return frame_indices

def load_video(video_path, bound=None, input_size=448, max_num=1, num_segments=32):
    vr = VideoReader(video_path, ctx=cpu(0), num_threads=1)
    max_frame = len(vr) - 1
    fps = float(vr.get_avg_fps())
    pixel_values_list, num_patches_list = [], []
    transform = build_transform(input_size=input_size)
    frame_indices = get_index(bound, fps, max_frame, first_idx=0, num_segments=num_segments)
    for frame_index in frame_indices:
        img = Image.fromarray(vr[frame_index].asnumpy()).convert('RGB')
        img = dynamic_preprocess(img, image_size=input_size, use_thumbnail=True, max_num=max_num)
        pixel_values = [transform(tile) for tile in img]
        pixel_values = torch.stack(pixel_values)
        num_patches_list.append(pixel_values.shape[0])
        pixel_values_list.append(pixel_values)
    pixel_values = torch.cat(pixel_values_list)
    return pixel_values, num_patches_list

video_path = './examples/red-panda.mp4'
pixel_values, num_patches_list = load_video(video_path, num_segments=8, max_num=1)
pixel_values = pixel_values.to(torch.bfloat16).cuda()
video_prefix = ''.join([f'Frame{i+1}: <image>\n' for i in range(len(num_patches_list))])
question = video_prefix + 'What is the red panda doing?'
# Frame1: <image>\nFrame2: <image>\n...\nFrame8: <image>\n{question}
response, history = model.chat(tokenizer, pixel_values, question, generation_config,
                               num_patches_list=num_patches_list,
                               history=None, return_history=True)
print(f'User: {question}\nAssistant: {response}')

question = 'Describe this video in detail.'
response, history = model.chat(tokenizer, pixel_values, question, generation_config,
                               num_patches_list=num_patches_list,
                               history=history, return_history=True)
print(f'User: {question}\nAssistant: {response}')
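To make the dynamic tiling above concrete, here is a minimal sketch using the helpers defined in the example (the 1920x1080 image is synthetic, purely for illustration):

from PIL import Image

# A synthetic 1920x1080 image (aspect ratio ~1.78).
img = Image.new('RGB', (1920, 1080))
tiles = dynamic_preprocess(img, max_num=12, image_size=448, use_thumbnail=True)
# Among grids with i*j <= 12, both 2x1 and 4x2 match best (ratio 2.0);
# the tie-break in find_closest_aspect_ratio prefers the larger grid for a
# large input, so the image is split into a 4x2 grid of 448x448 tiles plus
# one global thumbnail: 9 tiles in total.
print(len(tiles))  # 9

Each tile is then normalized by build_transform and stacked into pixel_values, which is why num_patches_list is needed when several images share one forward pass.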
03. Model Fine-tuning
ms-swift, the ModelScope community's official training and deployment framework for large language models and multimodal models, already supports training the InternVL3.5 series.
ms-swift repository:
https://github.com/modelscope/ms-swift
Below, we take InternVL3.5-8B as an example to show a runnable fine-tuning demo and describe the format for custom datasets.
Before you start fine-tuning, make sure your environment is set up:
# pip install git+https://github.com/modelscope/ms-swift.git
git clone https://github.com/modelscope/ms-swift.git
cd ms-swift
pip install -e .

pip install git+https://github.com/huggingface/transformers.git
If you want to fine-tune the model on a custom dataset, prepare your data in the following format:
{"messages": [{"role": "user", "content": "<image><image>What is the difference between the two images?"}, {"role": "assistant", "content": "The first one is a kitten, and the second one is a puppy."}], "images": ["/xxx/x.jpg", "/xxx/x.png"]}
Training script:
# 33GB
CUDA_VISIBLE_DEVICES=0 \
swift sft \
    --model OpenGVLab/InternVL3_5-8B \
    --dataset 'AI-ModelScope/LaTeX_OCR:human_handwrite#20000' \
    --split_dataset_ratio 0.01 \
    --train_type lora \
    --torch_dtype bfloat16 \
    --num_train_epochs 1 \
    --per_device_train_batch_size 1 \
    --per_device_eval_batch_size 1 \
    --learning_rate 1e-4 \
    --lora_rank 8 \
    --lora_alpha 32 \
    --target_modules all-linear \
    --freeze_vit true \
    --gradient_accumulation_steps 16 \
    --eval_steps 50 \
    --save_steps 50 \
    --save_total_limit 2 \
    --logging_steps 5 \
    --max_length 4096 \
    --output_dir output \
    --warmup_ratio 0.05 \
    --dataloader_num_workers 4
GPU memory usage: roughly 33 GB, as noted at the top of the training script.
After training completes, run inference with:
CUDA_VISIBLE_DEVICES=0 \
swift infer \
    --adapters output/vx-xxx/checkpoint-xxx \
    --stream true \
    --load_data_args true \
    --max_new_tokens 2048
Push the model to ModelScope:
swift export \
    --adapters output/vx-xxx/checkpoint-xxx \
    --push_to_hub true \
    --hub_model_id '<your-model-id>' \
    --hub_token '<your-sdk-token>'