The CIFAR-10 dataset is one of the most widely used datasets in machine learning, mainly for image-classification tasks. It contains 60,000 32x32 color images in 10 classes, with 6,000 images per class; 50,000 images are used for training and 10,000 for testing. The images are small natural photographs at a fixed resolution, which makes the dataset a good entry point for beginners.
A typical machine-learning workflow on CIFAR-10 looks like this:
- Data preprocessing: scale the pixel values, apply data augmentation, and so on, to improve the model's ability to generalize.
- Model construction: choose a neural network suited to image data, typically a convolutional neural network (CNN).
- Training: feed the preprocessed data into the model and iteratively adjust its parameters to minimize the loss function.
- Evaluation: measure the model's performance on the test set with metrics such as accuracy and recall.
- Optimization: based on the evaluation results, tune the network architecture, learning rate, and other settings to improve performance.
- Application: the trained model can then be applied to real problems such as image classification.
In short, CIFAR-10 is a beginner-friendly image dataset for training and evaluating machine-learning models. With careful preprocessing, a suitable model, and sufficient training and tuning, good classification accuracy is attainable; a minimal end-to-end sketch follows below.
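As a concrete illustration of the steps above, here is a minimal sketch, assuming a TensorFlow release that bundles the Keras API; the architecture and hyperparameters are illustrative, not tuned:

import tensorflow as tf

# Load CIFAR-10: 50,000 training and 10,000 test images of shape 32x32x3
(x_train, y_train), (x_test, y_test) = tf.keras.datasets.cifar10.load_data()

# Preprocessing: scale pixel values to [0, 1]
x_train, x_test = x_train / 255.0, x_test / 255.0

# A small CNN: two conv/pool stages followed by a dense classifier
model = tf.keras.Sequential([
    tf.keras.layers.Conv2D(32, 3, activation='relu', input_shape=(32, 32, 3)),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Conv2D(64, 3, activation='relu'),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(64, activation='relu'),
    tf.keras.layers.Dense(10, activation='softmax'),
])

# Train, then evaluate on the held-out test set
model.compile(optimizer='adam',
              loss='sparse_categorical_crossentropy',
              metrics=['accuracy'])
model.fit(x_train, y_train, epochs=5, validation_split=0.1)
model.evaluate(x_test, y_test)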
Reinforcement learning
The states are the previous history of stock prices, the current budget, and the current number of shares held.
The actions are buy, sell, or hold (i.e. do nothing).
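To make the state representation concrete, here is a quick sketch of what one state vector looks like (the numbers are made up for illustration):

import numpy as np

hist = 3                             # how many past prices to include
recent_prices = [20.1, 20.5, 20.7]   # hypothetical last `hist` opening prices
budget = 1000.0                      # current cash on hand
num_stocks = 2                       # shares currently held

# The state packs prices, budget, and share count into one row vector
state = np.asmatrix(np.hstack((recent_prices, budget, num_stocks)))
print(state.shape)  # (1, 5), i.e. hist + 2 features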
The stock market data comes from the yahoo-finance library (pip install yahoo-finance).
%matplotlib inline
from yahoo_finance import Share
from matplotlib import pyplot as plt
import numpy as np
import random
import tensorflow as tf
Define an abstract class called DecisionPolicy:
class DecisionPolicy:
    def select_action(self, current_state, step):
        pass

    def update_q(self, state, action, reward, next_state):
        pass
Here's one way we could implement the decision policy, called a random decision policy:
class RandomDecisionPolicy(DecisionPolicy):
    def __init__(self, actions):
        self.actions = actions

    def select_action(self, current_state, step):
        # Ignore the state entirely and pick an action uniformly at random
        action = random.choice(self.actions)
        return action
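A quick sanity check of the baseline (the printed action will vary from run to run, since the choice is random):

policy = RandomDecisionPolicy(['Buy', 'Sell', 'Hold'])
print(policy.select_action(current_state=None, step=0))  # e.g. 'Sell'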
That's a good baseline. Now let's try a smarter approach: a neural network that learns the Q-function:
class QLearningDecisionPolicy(DecisionPolicy):
    def __init__(self, actions, input_dim):
        self.epsilon = 0.95  # probability of exploiting (vs. exploring) once warmed up
        self.gamma = 0.3     # discount factor for future rewards
        self.actions = actions
        output_dim = len(actions)
        h1_dim = 20

        # Inputs: the state vector and the target Q-values
        self.x = tf.placeholder(tf.float32, [None, input_dim])
        self.y = tf.placeholder(tf.float32, [output_dim])

        # A two-layer fully connected network mapping a state to Q-values
        W1 = tf.Variable(tf.random_normal([input_dim, h1_dim]))
        b1 = tf.Variable(tf.constant(0.1, shape=[h1_dim]))
        h1 = tf.nn.relu(tf.matmul(self.x, W1) + b1)
        W2 = tf.Variable(tf.random_normal([h1_dim, output_dim]))
        b2 = tf.Variable(tf.constant(0.1, shape=[output_dim]))
        self.q = tf.nn.relu(tf.matmul(h1, W2) + b2)

        # Squared error between target and predicted Q-values
        loss = tf.reduce_sum(tf.square(self.y - self.q))
        self.train_op = tf.train.AdamOptimizer(0.001).minimize(loss)
        self.sess = tf.Session()
        self.sess.run(tf.global_variables_initializer())

    def select_action(self, current_state, step):
        # Epsilon-greedy: the exploit probability ramps up from 0 to
        # epsilon (0.95) over the first 950 steps
        threshold = min(self.epsilon, step / 1000.)
        if random.random() < threshold:
            # Exploit best option with probability epsilon
            action_q_vals = self.sess.run(self.q, feed_dict={self.x: current_state})
            action_idx = np.argmax(action_q_vals)  # TODO: replace w/ TensorFlow's argmax
            action = self.actions[action_idx]
        else:
            # Explore random option with probability 1 - epsilon
            action = random.choice(self.actions)
        return action

    def update_q(self, state, action, reward, next_state):
        # Current Q-value predictions for this state and the next state
        action_q_vals = self.sess.run(self.q, feed_dict={self.x: state})
        next_action_q_vals = self.sess.run(self.q, feed_dict={self.x: next_state})
        next_action_idx = np.argmax(next_action_q_vals)
        current_action_idx = self.actions.index(action)
        # One-step Q-learning target for the action that was actually taken
        action_q_vals[0, current_action_idx] = reward + self.gamma * next_action_q_vals[0, next_action_idx]
        action_q_vals = np.squeeze(np.asarray(action_q_vals))
        self.sess.run(self.train_op, feed_dict={self.x: state, self.y: action_q_vals})
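For reference, update_q implements the standard one-step Q-learning update: the predicted value of the action taken is nudged toward the target

Q(state, action) = reward + gamma * max over a' of Q(next_state, a')

where gamma = 0.3 discounts the estimated value of the best next action.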
Define a function to run a simulation of buying and selling stocks from a market:
def run_simulation(policy, initial_budget, initial_num_stocks, prices, hist, debug=False):
    budget = initial_budget
    num_stocks = initial_num_stocks
    share_value = 0
    transitions = list()
    for i in range(len(prices) - hist - 1):
        if i % 1000 == 0:
            print('progress {:.2f}%'.format(float(100 * i) / (len(prices) - hist - 1)))
        # State: the last `hist` prices plus the current budget and share count
        current_state = np.asmatrix(np.hstack((prices[i:i+hist], budget, num_stocks)))
        current_portfolio = budget + num_stocks * share_value
        action = policy.select_action(current_state, i)
        share_value = float(prices[i + hist])
        if action == 'Buy' and budget >= share_value:
            budget -= share_value
            num_stocks += 1
        elif action == 'Sell' and num_stocks > 0:
            budget += share_value
            num_stocks -= 1
        else:
            action = 'Hold'
        # Reward: the change in total portfolio value after acting
        new_portfolio = budget + num_stocks * share_value
        reward = new_portfolio - current_portfolio
        next_state = np.asmatrix(np.hstack((prices[i+1:i+hist+1], budget, num_stocks)))
        transitions.append((current_state, action, reward, next_state))
        policy.update_q(current_state, action, reward, next_state)
    portfolio = budget + num_stocks * share_value
    if debug:
        print('${}\t{} shares'.format(budget, num_stocks))
    return portfolio
We want to run the simulation multiple times and compare the final portfolio values:
def run_simulations(policy, budget, num_stocks, prices, hist):
    num_tries = 5
    final_portfolios = list()
    for i in range(num_tries):
        print('Running simulation {}...'.format(i + 1))
        final_portfolio = run_simulation(policy, budget, num_stocks, prices, hist)
        final_portfolios.append(final_portfolio)
        print('Final portfolio: ${}'.format(final_portfolio))
    plt.title('Final Portfolio Value')
    plt.xlabel('Simulation #')
    plt.ylabel('Net worth')
    plt.plot(final_portfolios)
    plt.show()
Call the following function to fetch stock market data with the Yahoo Finance library. Prices are cached to disk, so repeated runs won't hit the network:
def get_prices(share_symbol, start_date, end_date, cache_filename='stock_prices.npy'):
    try:
        # Reuse cached prices if a previous run saved them
        stock_prices = np.load(cache_filename)
    except IOError:
        share = Share(share_symbol)
        stock_hist = share.get_historical(start_date, end_date)
        stock_prices = np.array([stock_price['Open'] for stock_price in stock_hist])
        np.save(cache_filename, stock_prices)
    return stock_prices.astype(float)
Who wants to deal with stock market data without looking at pretty plots? No one. So we need this function:
def plot_prices(prices):
    plt.title('Opening stock prices')
    plt.xlabel('day')
    plt.ylabel('price ($)')
    plt.plot(prices)
    plt.savefig('prices.png')
    plt.show()
Train a reinforcement learning policy:
if __name__ == '__main__':
    prices = get_prices('MSFT', '1992-07-22', '2016-07-22')
    plot_prices(prices)
    actions = ['Buy', 'Sell', 'Hold']
    hist = 3
    # policy = RandomDecisionPolicy(actions)
    policy = QLearningDecisionPolicy(actions, hist + 2)
    budget = 100000.0
    num_stocks = 0
    run_simulations(policy, budget, num_stocks, prices, hist)
Running simulation 1...
progress 0.00%
progress 16.55%
progress 33.10%
progress 49.64%
progress 66.19%
progress 82.74%
progress 99.29%
Final portfolio: $211337.9906860001
Running simulation 2...
progress 0.00%
progress 16.55%
progress 33.10%
progress 49.64%
progress 66.19%
progress 82.74%
progress 99.29%
Final portfolio: $217982.47320000027
Running simulation 3...
progress 0.00%
progress 16.55%
progress 33.10%
progress 49.64%
progress 66.19%
progress 82.74%
progress 99.29%
Final portfolio: $218393.6024270001
Running simulation 4...
progress 0.00%
progress 16.55%
progress 33.10%
progress 49.64%
progress 66.19%
progress 82.74%
progress 99.29%
Final portfolio: $217917.51057200006
Running simulation 5...
progress 0.00%
progress 16.55%
progress 33.10%
progress 49.64%
progress 66.19%
progress 82.74%
progress 99.29%
Final portfolio: $219461.85500000033