GPU Parallelization of the Multiple-Flow-Direction Algorithm

Abstract:

As one of the important tasks in digital terrain analysis, the calculation of flow accumulations from gridded digital elevation models (DEMs) usually involves two steps in a real application: (1) using an iterative DEM preprocessing algorithm to remove the depressions and flat areas commonly contained in real DEMs, and (2) using a recursive flow-direction algorithm to calculate the flow accumulation for every cell in the DEM. Because both algorithms are computationally intensive, quick calculation of the flow accumulations from a DEM (especially for a large area) presents a practical challenge to personal computer (PC) users. In recent years, rapid increases in the hardware capacity of the graphics processing units (GPUs) provided in modern PCs have made it possible to meet this challenge in a PC environment. Parallel computing on GPUs using the compute unified device architecture (CUDA) programming model has been explored to speed up the execution of single-flow-direction (SFD) algorithms. However, the parallel implementation on a GPU of a multiple-flow-direction (MFD) algorithm, which generally performs better than SFD algorithms, has not been reported. Moreover, GPU-based parallelization of the DEM preprocessing step in the flow-accumulation calculation has not been addressed. This paper proposes a parallel approach to calculating flow accumulations (including both the iterative DEM preprocessing and a recursive MFD algorithm) on a CUDA-compatible GPU. For the parallelization of the MFD algorithm (MFD-md), two parallelization strategies on the GPU are explored. The first strategy, which has been used in existing parallel SFD algorithms on GPUs, suffers from computational redundancy. We therefore designed a second parallelization strategy based on graph theory. The application results show that the proposed parallel approach to calculating flow accumulations on a GPU performs much faster than both sequential algorithms and parallel GPU-based algorithms built on the existing parallelization strategy.
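The abstract gives no implementation details, so the following is only a minimal CUDA sketch of the first parallelization strategy it describes: one thread per cell, with the kernel relaunched until every cell's accumulation is resolved. All identifiers (accumulatePass, fraction, neighborIdx), the 8-neighbor mirroring convention, and the double-buffered done flags are assumptions made for this sketch, not details from the paper.

```cuda
#include <utility>
#include <cuda_runtime.h>

// One thread per DEM cell. A cell is "resolved" once the accumulations of all
// of its upslope neighbors are known; unresolved cells simply retry on the
// next launch, which is exactly where the redundancy of this strategy comes
// from. Double-buffered done flags keep each pass free of read/write races.
//
// fraction[u*8+k]    : MFD share of cell u's flow routed to its k-th neighbor
// neighborIdx[c*8+k] : grid index of c's k-th neighbor, or -1 off the grid
// (assumed convention: neighbor k of c sees c back as its neighbor 7-k)
__global__ void accumulatePass(const float *fraction, const int *neighborIdx,
                               float *acc, const int *doneIn, int *doneOut,
                               int *changed, int nCells)
{
    int c = blockIdx.x * blockDim.x + threadIdx.x;
    if (c >= nCells) return;
    if (doneIn[c]) { doneOut[c] = 1; return; }      // already resolved

    float sum = 1.0f;                               // the cell's own unit area
    for (int k = 0; k < 8; ++k) {
        int u = neighborIdx[c * 8 + k];
        if (u < 0) continue;                        // off the grid
        float f = fraction[u * 8 + (7 - k)];        // share u sends toward c
        if (f <= 0.0f) continue;                    // u does not drain into c
        if (!doneIn[u]) { doneOut[c] = 0; return; } // upslope cell unresolved
        sum += f * acc[u];
    }
    acc[c] = sum;
    doneOut[c] = 1;
    atomicExch(changed, 1);                         // progress made this pass
}

// Host-side driver: relaunch until a pass resolves no new cell. Assumes the
// device buffers are allocated, dDoneA is zero-initialized, and depressions
// have already been removed, so the flow network is acyclic and the loop ends.
void accumulate(const float *dFraction, const int *dNeighborIdx, float *dAcc,
                int *dDoneA, int *dDoneB, int *dChanged, int nCells)
{
    int threads = 256, blocks = (nCells + threads - 1) / threads;
    int hChanged = 1;
    while (hChanged) {
        cudaMemset(dChanged, 0, sizeof(int));
        accumulatePass<<<blocks, threads>>>(dFraction, dNeighborIdx, dAcc,
                                            dDoneA, dDoneB, dChanged, nCells);
        cudaMemcpy(&hChanged, dChanged, sizeof(int), cudaMemcpyDeviceToHost);
        std::swap(dDoneA, dDoneB);                  // doneOut feeds next pass
    }
}
```

Each relaunch re-examines every still-unresolved cell, which is exactly the computational redundancy the abstract attributes to this strategy. The graph-theory-based strategy is not spelled out here; one plausible reading is to treat the drainage network as a directed acyclic graph and process cells level by level in topological order, so that each cell is visited exactly once.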
