[Solved] YOLOv5 error: RuntimeError: CUDA out of memory. Tried to allocate 14.00 MiB

Summary: [Solved] YOLOv5 error: RuntimeError: CUDA out of memory. Tried to allocate 14.00 MiB

Problem

RuntimeError: CUDA out of memory. Tried to allocate 14.00 MiB (GPU 0; 4.00 GiB total capacity; 2.34 GiB already allocated; 13.70 MiB free; 2.41 GiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation.  See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF

Solution

The GPU has run out of memory. Open train.py and locate the parse_opt function.

Lower the batch_size defined in that function; how low you need to go depends on your GPU and has to be found by trial (for example, halve it until training starts without the error). A sketch of the change is shown below.
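Because YOLOv5 defines the batch size through argparse, this is a one-line edit. The sketch below assumes a recent version of YOLOv5's train.py, where parse_opt registers a --batch-size argument (the upstream default is 16); only the lines relevant to memory use are shown and the rest of the function is omitted.

    import argparse

    def parse_opt(known=False):
        parser = argparse.ArgumentParser()
        # ... other arguments (--weights, --cfg, --data, --epochs, ...) omitted ...
        # Upstream default is 16; lowering it (e.g. to 8 or 4) directly reduces GPU memory use.
        parser.add_argument('--batch-size', type=int, default=8,
                            help='total batch size for all GPUs')
        # If a smaller batch is still not enough, reducing the training resolution also helps.
        parser.add_argument('--imgsz', '--img', '--img-size', type=int, default=640,
                            help='train, val image size (pixels)')
        return parser.parse_known_args()[0] if known else parser.parse_args()

Alternatively, you can leave train.py untouched and pass the value on the command line, e.g. python train.py --batch-size 8, which overrides the default in the same way. The error message also suggests setting max_split_size_mb via PYTORCH_CUDA_ALLOC_CONF to reduce fragmentation, but lowering the batch size is usually the simpler fix.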
