Compute the Mean Value of the Caltech-256 Train and Test Sets in MATLAB

The script below reads the train/test image lists of the Caltech-256 dataset and computes the per-channel (R, G, B) mean for each split by averaging the per-image channel means.


clc;
imPath = '/home/wangxiao/Downloads/Link to caltech_256_dataset/image_/ori_total_im_/';

train_txtFile = '/home/wangxiao/Downloads/caltech256_whole_data_/train_caltech_label.txt';
test_txtFile  = '/home/wangxiao/Downloads/caltech256_whole_data_/test_caltech_label.txt';

% importdata returns a struct here: .textdata holds the image file names,
% .data holds the numeric labels.
train_list = importdata(train_txtFile);
test_list  = importdata(test_txtFile);

train_R = 0; train_G = 0; train_B = 0;
test_R  = 0; test_G  = 0; test_B  = 0;

% Note: size(train_list, 1) would be 1, since importdata returns a struct;
% the number of images is the number of rows in .textdata.
for i = 1:size(train_list.textdata, 1)
    train_im_name = train_list.textdata{i, 1};
    train_image = double(imread([imPath, train_im_name]));
    if size(train_image, 3) == 1   % some Caltech-256 images are grayscale
        train_image = repmat(train_image, [1, 1, 3]);
    end

    % accumulate the per-image channel means
    train_R = train_R + mean(mean(train_image(:, :, 1)));
    train_G = train_G + mean(mean(train_image(:, :, 2)));
    train_B = train_B + mean(mean(train_image(:, :, 3)));
end

for i = 1:size(test_list.textdata, 1)
    test_im_name = test_list.textdata{i, 1};
    test_image = double(imread([imPath, test_im_name]));
    % imshow(test_image);
    if size(test_image, 3) == 1    % some Caltech-256 images are grayscale
        test_image = repmat(test_image, [1, 1, 3]);
    end

    test_R = test_R + mean(mean(test_image(:, :, 1)));
    test_G = test_G + mean(mean(test_image(:, :, 2)));
    test_B = test_B + mean(mean(test_image(:, :, 3)));
end

% divide the accumulated sums by the number of images in each split
num_train = size(train_list.textdata, 1);
num_test  = size(test_list.textdata, 1);

mean_train_R = train_R / num_train;
mean_train_G = train_G / num_train;
mean_train_B = train_B / num_train;

mean_test_R = test_R / num_test;
mean_test_G = test_G / num_test;
mean_test_B = test_B / num_test;
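
As a usage sketch, the per-channel means computed above are typically subtracted from each image before training a network. The file name below is a hypothetical placeholder, not one from the dataset lists:

```matlab
% Mean-subtraction preprocessing with the training-set means computed above.
% 'some_image.jpg' is a hypothetical example file under imPath.
im = double(imread([imPath, 'some_image.jpg']));
im(:, :, 1) = im(:, :, 1) - mean_train_R;
im(:, :, 2) = im(:, :, 2) - mean_train_G;
im(:, :, 3) = im(:, :, 3) - mean_train_B;
```

Only the training-set means are used here; applying the test-set means at test time would leak split-specific statistics into preprocessing.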