Image Processing: Bilateral Filtering for Gray and Color Images


Basic Introduction:

An ordinary spatial-domain low-pass filter blurs the whole image uniformly after filtering in pixel space: edges become less distinct and edge detail is lost. A bilateral filter, by contrast, can remove noise while preserving edges well. It manages this because, unlike an ordinary Gaussian/convolution low-pass filter, which only considers how each pixel's position affects the center pixel, it also considers how similar each pixel in the kernel is to the center pixel. From the positional influence and the pixel-value similarity it builds two separate weight tables (WeightTable), and both weights are taken into account when computing the center pixel, which yields a bilateral low-pass filter. It is said that Adobe Photoshop's Gaussian skin-smoothing feature is implemented with a bilateral low-pass filtering algorithm.
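
For reference, the bilateral filter can be written as the following formula (not in the original post; here $\sigma_s$ and $\sigma_r$ correspond to the distance sigma ds and the range sigma rs used in the code below):

$$
BF[I]_p = \frac{1}{W_p} \sum_{q \in S} G_{\sigma_s}(\lVert p - q \rVert)\, G_{\sigma_r}(\lvert I_p - I_q \rvert)\, I_q,
\qquad
W_p = \sum_{q \in S} G_{\sigma_s}(\lVert p - q \rVert)\, G_{\sigma_r}(\lvert I_p - I_q \rvert)
$$

where $G_\sigma(x) = \exp\!\big(-x^2 / (2\sigma^2)\big)$, $S$ is the kernel window around pixel $p$, and $I_q$ is the value of neighbor pixel $q$. The two Gaussians correspond to the two weight tables built in the code.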




Program Effect:


Here is the Lena test image after the bilateral filter is applied:



Key Code Explanation:

The code that builds the distance (spatial) Gaussian weight table (Weight Table) is as follows:

private void buildDistanceWeightTable() {
	int size = 2 * radius + 1;
	cWeightTable = new double[size][size];
	for(int semirow = -radius; semirow <= radius; semirow++) {
		for(int semicol = - radius; semicol <= radius; semicol++) {
			// calculate Euclidean distance between center point and close pixels
			double delta = Math.sqrt(semirow * semirow + semicol * semicol)/ds;
			double deltaDelta = delta * delta;
			cWeightTable[semirow+radius][semicol+radius] = Math.exp(deltaDelta * factor);
		}
	}
}
The code that builds the RGB pixel-similarity (range) Gaussian weight table is as follows:

private void buildSimilarityWeightTable() {
	sWeightTable = new double[256]; // since the color scope is 0 ~ 255
	for(int i=0; i<256; i++) {
		double delta = Math.sqrt(i * i ) / rs;
		double deltaDelta = delta * delta;
		sWeightTable[i] = Math.exp(deltaDelta * factor);
	}
}
The code that accumulates the combined weight sums and the pixel × weight sums is as follows:

for(int semirow = -radius; semirow <= radius; semirow++) {
	for(int semicol = - radius; semicol <= radius; semicol++) {
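		// NOTE: the boundary checks below fall back to row 0 / column 0 when a
		// neighbor lies outside the image, instead of clamping to the nearest
		// edge pixel, so border pixels borrow values from the first row/column.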
		if((row + semirow) >= 0 && (row + semirow) < height) {
			rowOffset = row + semirow;
		} else {
			rowOffset = 0;
		}
		
		if((semicol + col) >= 0 && (semicol + col) < width) {
			colOffset = col + semicol;
		} else {
			colOffset = 0;
		}
		index2 = rowOffset * width + colOffset;
		ta2 = (inPixels[index2] >> 24) & 0xff;
        tr2 = (inPixels[index2] >> 16) & 0xff;
        tg2 = (inPixels[index2] >> 8) & 0xff;
        tb2 = inPixels[index2] & 0xff;
        
        csRedWeight = cWeightTable[semirow+radius][semicol+radius]  * sWeightTable[(Math.abs(tr2 - tr))];
        csGreenWeight = cWeightTable[semirow+radius][semicol+radius]  * sWeightTable[(Math.abs(tg2 - tg))];
        csBlueWeight = cWeightTable[semirow+radius][semicol+radius]  * sWeightTable[(Math.abs(tb2 - tb))];
        
        csSumRedWeight += csRedWeight;
        csSumGreenWeight += csGreenWeight;
        csSumBlueWeight += csBlueWeight;
        redSum += (csRedWeight * (double)tr2);
        greenSum += (csGreenWeight * (double)tg2);
        blueSum += (csBlueWeight * (double)tb2);
	}
}
The code that normalizes the sums and produces the output pixel's RGB values is as follows:

tr = (int)Math.floor(redSum / csSumRedWeight);
tg = (int)Math.floor(greenSum / csSumGreenWeight);
tb = (int)Math.floor(blueSum / csSumBlueWeight);
outPixels[index] = (ta << 24) | (clamp(tr) << 16) | (clamp(tg) << 8) | clamp(tb);
For background on convolution filtering, see:

http://blog.csdn.net/jia20003/article/details/7038938

For the Gaussian blur algorithm, see:
http://blog.csdn.net/jia20003/article/details/7234741
Finally, I want to say that a blog post without source code is not a good blog post. This Java implementation of the bilateral filter is somewhat slow; you can optimize it yourself. The complete source code of the bilateral filter is as follows:

package com.gloomyfish.blurring.study;
/**
 *  A simple and important case of bilateral filtering is shift-invariant Gaussian filtering
 *  refer to - http://graphics.ucsd.edu/~iman/Denoising/
 *  refer to - http://homepages.inf.ed.ac.uk/rbf/CVonline/LOCAL_COPIES/MANDUCHI1/Bilateral_Filtering.html
 *  thanks to cyber
 */
import java.awt.image.BufferedImage;

public class BilateralFilter extends AbstractBufferedImageOp {
	private final static double factor = -0.5d;
	private double ds; // distance sigma
	private double rs; // range sigma
	private int radius; // half length of the Gaussian kernel (kernel size = 2 * radius + 1)
	private double[][] cWeightTable;
	private double[] sWeightTable;
	private int width;
	private int height;
	
	public BilateralFilter() {
		this.ds = 1.0f;
		this.rs = 1.0f;
	}
	
	private void buildDistanceWeightTable() {
		int size = 2 * radius + 1;
		cWeightTable = new double[size][size];
		for(int semirow = -radius; semirow <= radius; semirow++) {
			for(int semicol = - radius; semicol <= radius; semicol++) {
				// calculate Euclidean distance between center point and close pixels
				double delta = Math.sqrt(semirow * semirow + semicol * semicol)/ds;
				double deltaDelta = delta * delta;
				cWeightTable[semirow+radius][semicol+radius] = Math.exp(deltaDelta * factor);
			}
		}
	}
	
	/**
	 * Build the range (similarity) weight table for absolute
	 * pixel value differences in the range 0 ~ 255.
	 */
	private void buildSimilarityWeightTable() {
		sWeightTable = new double[256]; // since the color scope is 0 ~ 255
		for(int i=0; i<256; i++) {
			double delta = Math.sqrt(i * i ) / rs;
			double deltaDelta = delta * delta;
			sWeightTable[i] = Math.exp(deltaDelta * factor);
		}
	}
	
	public void setDistanceSigma(double ds) {
		this.ds = ds;
	}
	
	public void setRangeSigma(double rs) {
		this.rs = rs;
	}

	@Override
	public BufferedImage filter(BufferedImage src, BufferedImage dest) {
		width = src.getWidth();
        height = src.getHeight();
        //int sigmaMax = (int)Math.max(ds, rs);
        //radius = (int)Math.ceil(2 * sigmaMax);
        radius = (int)Math.max(ds, rs);
        buildDistanceWeightTable();
        buildSimilarityWeightTable();
        if ( dest == null )
        	dest = createCompatibleDestImage( src, null );

        int[] inPixels = new int[width*height];
        int[] outPixels = new int[width*height];
        getRGB( src, 0, 0, width, height, inPixels );
        int index = 0;
		double redSum = 0, greenSum = 0, blueSum = 0;
		double csRedWeight = 0, csGreenWeight = 0, csBlueWeight = 0;
		double csSumRedWeight = 0, csSumGreenWeight = 0, csSumBlueWeight = 0;
        for(int row=0; row<height; row++) {
        	int ta = 0, tr = 0, tg = 0, tb = 0;
        	for(int col=0; col<width; col++) {
        		index = row * width + col;
        		ta = (inPixels[index] >> 24) & 0xff;
                tr = (inPixels[index] >> 16) & 0xff;
                tg = (inPixels[index] >> 8) & 0xff;
                tb = inPixels[index] & 0xff;
                int rowOffset = 0, colOffset = 0;
                int index2 = 0;
                int ta2 = 0, tr2 = 0, tg2 = 0, tb2 = 0;
        		for(int semirow = -radius; semirow <= radius; semirow++) {
        			for(int semicol = - radius; semicol <= radius; semicol++) {
        				if((row + semirow) >= 0 && (row + semirow) < height) {
        					rowOffset = row + semirow;
        				} else {
        					rowOffset = 0;
        				}
        				
        				if((semicol + col) >= 0 && (semicol + col) < width) {
        					colOffset = col + semicol;
        				} else {
        					colOffset = 0;
        				}
        				index2 = rowOffset * width + colOffset;
        				ta2 = (inPixels[index2] >> 24) & 0xff;
        		        tr2 = (inPixels[index2] >> 16) & 0xff;
        		        tg2 = (inPixels[index2] >> 8) & 0xff;
        		        tb2 = inPixels[index2] & 0xff;
        		        
        		        csRedWeight = cWeightTable[semirow+radius][semicol+radius]  * sWeightTable[(Math.abs(tr2 - tr))];
        		        csGreenWeight = cWeightTable[semirow+radius][semicol+radius]  * sWeightTable[(Math.abs(tg2 - tg))];
        		        csBlueWeight = cWeightTable[semirow+radius][semicol+radius]  * sWeightTable[(Math.abs(tb2 - tb))];
        		        
        		        csSumRedWeight += csRedWeight;
        		        csSumGreenWeight += csGreenWeight;
        		        csSumBlueWeight += csBlueWeight;
        		        redSum += (csRedWeight * (double)tr2);
        		        greenSum += (csGreenWeight * (double)tg2);
        		        blueSum += (csBlueWeight * (double)tb2);
        			}
        		}
        		
				tr = (int)Math.floor(redSum / csSumRedWeight);
				tg = (int)Math.floor(greenSum / csSumGreenWeight);
				tb = (int)Math.floor(blueSum / csSumBlueWeight);
				outPixels[index] = (ta << 24) | (clamp(tr) << 16) | (clamp(tg) << 8) | clamp(tb);
                
                // clean value for next time...
                redSum = greenSum = blueSum = 0;
                csRedWeight = csGreenWeight = csBlueWeight = 0;
                csSumRedWeight = csSumGreenWeight = csSumBlueWeight = 0;
                
        	}
        }
        setRGB( dest, 0, 0, width, height, outPixels );
        return dest;
	}
	
	public static int clamp(int p) {
		return p < 0 ? 0 : ((p > 255) ? 255 : p);
	}

	public static void main(String[] args) {
		BilateralFilter bf = new BilateralFilter();
		bf.buildSimilarityWeightTable();
	}
}
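
For completeness, here is a minimal usage sketch (not part of the original post). It assumes the AbstractBufferedImageOp base class from the project provides getRGB, setRGB, and createCompatibleDestImage as used above, and that a test file lena.jpg exists in the working directory; the sigma values and file names are placeholders you can adjust. Note that filter() sets radius = max(ds, rs), so large sigma values produce a large kernel and a slow run.

package com.gloomyfish.blurring.study;

import java.awt.image.BufferedImage;
import java.io.File;
import javax.imageio.ImageIO;

public class BilateralFilterDemo {
	public static void main(String[] args) throws Exception {
		// load a test image - the file name is only an example
		BufferedImage src = ImageIO.read(new File("lena.jpg"));

		BilateralFilter filter = new BilateralFilter();
		filter.setDistanceSigma(3); // spatial sigma: larger -> wider smoothing
		filter.setRangeSigma(3);    // range sigma: larger -> less edge preservation
		// radius becomes max(ds, rs) = 3, i.e. a 7 x 7 kernel

		BufferedImage dest = filter.filter(src, null);
		ImageIO.write(dest, "png", new File("lena-bilateral.png"));
	}
}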

If you found this helpful, please give it a thumbs-up!

If you repost this article, please credit this blog.



