
How do I resolve this ModelScope error?

How do I resolve this ModelScope error?

2024-06-07 22:29:53,751 - modelscope - INFO - PyTorch version 2.3.0+cpu Found.
2024-06-07 22:29:53,752 - modelscope - INFO - Loading ast index from /mnt/workspace/.cache/modelscope/ast_indexer
2024-06-07 22:29:53,782 - modelscope - INFO - Loading done! Current index file version is 1.15.0, with md5 eccbdad6389abeac8790b84f257211fb and a total number of 980 components indexed
/usr/local/lib/python3.10/site-packages/beartype/_util/error/utilerrwarn.py:67: BeartypeModuleUnimportableWarning: Ignoring module "onnx" importation exception:
ImportError: cannot import name 'builder' from 'google.protobuf.internal' (/usr/local/lib/python3.10/site-packages/google/protobuf/internal/__init__.py)
warn(message, cls)
/usr/local/lib/python3.10/site-packages/beartype/_util/error/utilerrwarn.py:67: BeartypeModuleUnimportableWarning: Ignoring module "onnx" importation exception:
ImportError: cannot import name 'builder' from 'google.protobuf.internal' (/usr/local/lib/python3.10/site-packages/google/protobuf/internal/__init__.py)
warn(message, cls)
WARNING: Skipping tensorflow as it is not installed.
WARNING: Skipping tensorflow-estimator as it is not installed.
WARNING: Skipping tensorflow-io-gcs-filesystem as it is not installed.
WARNING: Running pip as the 'root' user can result in broken permissions and conflicting behaviour with the system package manager. It is recommended to use a virtual environment instead: https://pip.pypa.io/warnings/venv
Downloading model qwen/Qwen2-7B-Instruct (revision: master) from modelscope
Save model to path /mnt/workspace/.cache/modelscope/hub/qwen/Qwen2-7B-Instruct
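For reference, the download step above corresponds to ModelScope's snapshot_download call; a minimal sketch, with the model id and revision taken from the log:

from modelscope import snapshot_download

# Download qwen/Qwen2-7B-Instruct (revision "master") into the local
# ModelScope cache and return the local directory path.
model_dir = snapshot_download('qwen/Qwen2-7B-Instruct', revision='master')
print(model_dir)  # e.g. /mnt/workspace/.cache/modelscope/hub/qwen/Qwen2-7B-Instruct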

convert_config: {'do_dynamic_quantize_convert': False}

engine_config: {'engine_max_length': 8192, 'engine_max_batch': 8, 'do_profiling': False, 'num_threads': 0, 'matmul_precision': 'medium'}
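These two dicts are the conversion and engine settings the script echoes before running. Rewritten as annotated Python for readability (the values are copied from the log; the comments on what each field means are inferences, not taken from DashInfer documentation):

convert_config = {
    'do_dynamic_quantize_convert': False,   # leave weights unquantized during conversion
}

engine_config = {
    'engine_max_length': 8192,     # longest sequence the engine reserves memory for
    'engine_max_batch': 8,         # maximum number of concurrent requests
    'do_profiling': False,
    'num_threads': 0,              # 0 presumably lets the engine choose the CPU thread count
    'matmul_precision': 'medium',
}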

WARNING: Logging before InitGoogleLogging() is written to STDERR
I20240607 22:29:59.388923 4479 thread_pool.h:46] ThreadPool created with: 1
I20240607 22:29:59.389127 4479 as_engine.cpp:232] AllSpark Init with Version: 1.1.0/(GitSha1:1b9b010c-dirty)
Special tokens have been added in the vocabulary, make sure the associated word embeddings are fine-tuned or trained.
E20240607 22:29:59.579154 4479 as_engine.cpp:927] workers is empty
Loading checkpoint shards: 100%|███████████████████████████████████████████████████████████████████| 4/4 [00:24<00:00, 6.14s/it]
trans model from huggingface model: /mnt/workspace/.cache/modelscope/hub/qwen/Qwen2-7B-Instruct
Dashinfer model will save to ./dashinfer_models/

model_config: {'vocab_size': 152064, 'max_position_embeddings': 32768, 'hidden_size': 3584, 'intermediate_size': 18944, 'num_hidden_layers': 28, 'num_attention_heads': 28, 'use_sliding_window': False, 'sliding_window': 131072, 'max_window_layers': 28, 'num_key_value_heads': 4, 'hidden_act': 'silu', 'initializer_range': 0.02, 'rms_norm_eps': 1e-06, 'use_cache': True, 'rope_theta': 1000000.0, 'attention_dropout': 0.0, 'return_dict': True, 'output_hidden_states': False, 'output_attentions': False, 'torchscript': False, 'torch_dtype': torch.bfloat16, 'use_bfloat16': False, 'tf_legacy_loss': False, 'pruned_heads': {}, 'tie_word_embeddings': False, 'chunk_size_feed_forward': 0, 'is_encoder_decoder': False, 'is_decoder': False, 'cross_attention_hidden_size': None, 'add_cross_attention': False, 'tie_encoder_decoder': False, 'max_length': 20, 'min_length': 0, 'do_sample': False, 'early_stopping': False, 'num_beams': 1, 'num_beam_groups': 1, 'diversity_penalty': 0.0, 'temperature': 1.0, 'top_k': 50, 'top_p': 1.0, 'typical_p': 1.0, 'repetition_penalty': 1.0, 'length_penalty': 1.0, 'no_repeat_ngram_size': 0, 'encoder_no_repeat_ngram_size': 0, 'bad_words_ids': None, 'num_return_sequences': 1, 'output_scores': False, 'return_dict_in_generate': False, 'forced_bos_token_id': None, 'forced_eos_token_id': None, 'remove_invalid_values': False, 'exponential_decay_length_penalty': None, 'suppress_tokens': None, 'begin_suppress_tokens': None, 'architectures': ['Qwen2ForCausalLM'], 'finetuning_task': None, 'id2label': {0: 'LABEL_0', 1: 'LABEL_1'}, 'label2id': {'LABEL_0': 0, 'LABEL_1': 1}, 'tokenizer_class': None, 'prefix': None, 'bos_token_id': 151643, 'pad_token_id': None, 'eos_token_id': 151645, 'sep_token_id': None, 'decoder_start_token_id': None, 'task_specific_params': None, 'problem_type': None, '_name_or_path': '/mnt/workspace/.cache/modelscope/hub/qwen/Qwen2-7B-Instruct', '_commit_hash': None, '_attn_implementation_internal': 'sdpa', 'transformers_version': '4.41.2', 'model_type': 'qwen2', 'use_dynamic_ntk': False, 'use_logn_attn': False, 'rotary_emb_base': 1000000.0, 'size_per_head': 128}
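The values in this dump also make the memory footprint easy to estimate. A rough sketch, assuming standard Qwen2 weight shapes (q/k/v/o attention projections, a gated MLP, untied input and output embeddings) and ignoring biases and norms:

# Numbers taken from the model_config printed above.
vocab, hidden, inter, layers = 152064, 3584, 18944, 28
heads, kv_heads, head_dim = 28, 4, 128

attn = hidden * heads * head_dim * 2 + hidden * kv_heads * head_dim * 2  # q/o plus k/v projections
mlp = hidden * inter * 3                                                 # gate/up/down projections
emb = vocab * hidden * 2                                                 # input embedding plus untied lm_head

params = (attn + mlp) * layers + emb
print(f"~{params / 1e9:.1f}B parameters")                                # roughly 7.6B
print(f"float32 ~{params * 4 / 2**30:.0f} GiB, bfloat16 ~{params * 2 / 2**30:.0f} GiB")

So the float32 .ditensors output alone is on the order of 28 GiB, on top of the roughly 14 GiB bfloat16 checkpoint that was just loaded.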

save dimodel to ./dashinfer_models/Qwen2-7B-Instruct_cpu_single_float32.dimodel
save ditensors to ./dashinfer_models/Qwen2-7B-Instruct_cpu_single_float32.ditensors
已杀死 (Killed)
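That last line is the key symptom: a bare "Killed" with no Python traceback usually means the Linux out-of-memory killer terminated the process, not that the script raised an error, and the kernel log (dmesg) typically records an "Out of memory: Killed process ..." entry to confirm it. Given the estimate above, converting a 7B model to float32 on a CPU instance with limited RAM fits that picture. A minimal sketch for watching memory headroom while the conversion runs (it uses the third-party psutil package, which is an assumption about the environment):

import time
import psutil

# Print total and available system memory once per second. Run this in a
# second terminal while the conversion script is running to see how close
# the machine gets to exhausting RAM before the process is killed.
print(f"total RAM: {psutil.virtual_memory().total / 2**30:.1f} GiB")
while True:
    print(f"available: {psutil.virtual_memory().available / 2**30:.1f} GiB")
    time.sleep(1)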

夹心789 2024-06-09 16:46:01