Notebook in GPU mode: the first run of the sample code (the "介绍清华大学" / "introduce Tsinghua University" prompt) returns successfully. I then wrote a Python script to generate articles in batch, and it fails as soon as it runs, reporting that GPU memory is insufficient. The traceback is below.
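For context, here is a minimal sketch of the kind of batch script the traceback implies. The function names (`main`, `generate_and_save_articles`, `generate_article`, `pipe`) come from the traceback; the pipeline setup, input format, and file handling are assumptions based on the ChatGLM2 sample code, not the actual glm.py:

```python
# Hypothetical reconstruction of glm.py, inferred only from the traceback
# below; the pipeline construction and input format are assumptions.
from modelscope.models import Model
from modelscope.pipelines import pipeline
from modelscope.utils.constant import Tasks

# device_map='auto' is assumed: the accelerate hooks in the traceback
# indicate the weights are being moved onto the GPU on demand.
model = Model.from_pretrained('ZhipuAI/chatglm2-6b', device_map='auto')
pipe = pipeline(task=Tasks.chat, model=model)


def generate_article(model, keyword):
    # One full GPU generation pass (model.chat) per keyword.
    inputs = {'text': f'介绍{keyword}', 'history': []}
    return pipe(inputs)


def generate_and_save_articles(model, input_file, output_dir):
    with open(input_file, encoding='utf-8') as f:
        for line in f:
            article = generate_article(model, line.strip())
            # ... write `article` into output_dir (omitted)


def main():
    generate_and_save_articles(model, 'keywords.txt', 'articles/')


if __name__ == '__main__':
    main()
```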
```
Traceback (most recent call last):
  File "glm.py", line 63, in <module>
    main()
  File "glm.py", line 60, in main
    generate_and_save_articles(model, input_file, output_dir)
  File "glm.py", line 23, in generate_and_save_articles
    article = generate_article(model, keyword)
  File "glm.py", line 9, in generate_article
    result = pipe(inputs)
  File "/opt/conda/lib/python3.8/site-packages/modelscope/pipelines/base.py", line 219, in __call__
    output = self._process_single(input, *args, **kwargs)
  File "/opt/conda/lib/python3.8/site-packages/modelscope/pipelines/base.py", line 254, in _process_single
    out = self.forward(out, **forward_params)
  File "/opt/conda/lib/python3.8/site-packages/modelscope/pipelines/nlp/text_generation_pipeline.py", line 274, in forward
    return self.model.chat(inputs, self.tokenizer)
  File "/opt/conda/lib/python3.8/site-packages/modelscope/models/nlp/chatglm2/text_generation.py", line 1432, in chat
    response, history = self._chat(
  File "/opt/conda/lib/python3.8/site-packages/torch/utils/_contextlib.py", line 115, in decorate_context
    return func(*args, **kwargs)
  File "/opt/conda/lib/python3.8/site-packages/modelscope/models/nlp/chatglm2/text_generation.py", line 1204, in _chat
    outputs = self.generate(**inputs, **gen_kwargs)
  File "/opt/conda/lib/python3.8/site-packages/torch/utils/_contextlib.py", line 115, in decorate_context
    return func(*args, **kwargs)
  File "/opt/conda/lib/python3.8/site-packages/transformers/generation/utils.py", line 1572, in generate
    return self.sample(
  File "/opt/conda/lib/python3.8/site-packages/transformers/generation/utils.py", line 2619, in sample
    outputs = self(
  File "/opt/conda/lib/python3.8/site-packages/modelscope/models/base/base_torch_model.py", line 36, in __call__
    return self.postprocess(self.forward(*args, **kwargs))
  File "/opt/conda/lib/python3.8/site-packages/accelerate/hooks.py", line 165, in new_forward
    output = old_forward(*args, **kwargs)
  File "/opt/conda/lib/python3.8/site-packages/modelscope/models/nlp/chatglm2/text_generation.py", line 1094, in forward
    lm_logits = self.transformer.output_layer(hidden_states)
  File "/opt/conda/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "/opt/conda/lib/python3.8/site-packages/accelerate/hooks.py", line 160, in new_forward
    args, kwargs = module._hf_hook.pre_forward(module, *args, **kwargs)
  File "/opt/conda/lib/python3.8/site-packages/accelerate/hooks.py", line 286, in pre_forward
    set_module_tensor_to_device(
  File "/opt/conda/lib/python3.8/site-packages/accelerate/utils/modeling.py", line 298, in set_module_tensor_to_device
    new_value = value.to(device)
torch.cuda.OutOfMemoryError: CUDA out of memory. Tried to allocate 508.00 MiB (GPU 0; 15.90 GiB total capacity; 2.04 GiB already allocated; 494.81 MiB free; 2.05 GiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation. See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF
```
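The error message itself points at one mitigation: setting `max_split_size_mb` via `PYTORCH_CUDA_ALLOC_CONF` to reduce allocator fragmentation. A minimal sketch of that plus a common batch-loop workaround follows; the 128 MiB split size is an untuned assumption, and calling `torch.cuda.empty_cache()` between generations is a workaround, not a guaranteed fix:

```python
# Sketch of two common CUDA-OOM mitigations for a batch generation loop.
import os

# Must be set before the first CUDA allocation in the process, so it is
# set here before torch is imported. 128 is an assumed value, not tuned.
os.environ['PYTORCH_CUDA_ALLOC_CONF'] = 'max_split_size_mb:128'

import torch


def generate_all(pipe, keywords):
    results = []
    for kw in keywords:
        # Pass an empty history on every call so the context (and KV cache)
        # does not grow with each keyword.
        results.append(pipe({'text': f'介绍{kw}', 'history': []}))
        # Return cached blocks to the driver between generations.
        torch.cuda.empty_cache()
    return results
```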