Machine Learning: NumPy Exercises


1. Extract all elements between 5 and 10 (inclusive) from the array a = np.arange(15).

import numpy as np
a = np.arange(15)
print(a[(a >= 5) & (a <= 10)])
[ 5  6  7  8  9 10]
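
A minimal alternative sketch: np.where returns the matching positions instead of the values, which is handy when the indices themselves are needed.

idx = np.where((a >= 5) & (a <= 10))[0]
print(idx)     # [ 5  6  7  8  9 10] -- here the indices happen to equal the values
print(a[idx])  # same elements as the boolean mask above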

2. Reshape the array np.arange(20) into a 4-row, 5-column two-dimensional array, then swap the first and second rows, and swap the first and second columns.

b = np.arange(20).reshape(4, 5)
print(b)
b = b[[1, 0, 2, 3], :]        # swap the first two rows
print(b)
print(b[:, [1, 0, 2, 3, 4]])  # swap the first two columns (all 5 columns kept)
[[ 0  1  2  3  4]
 [ 5  6  7  8  9]
 [10 11 12 13 14]
 [15 16 17 18 19]]
[[ 5  6  7  8  9]
 [ 0  1  2  3  4]
 [10 11 12 13 14]
 [15 16 17 18 19]]
[[ 6  5  7  8  9]
 [ 1  0  2  3  4]
 [11 10 12 13 14]
 [16 15 17 18 19]]
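
A common idiom worth knowing (a sketch, not the only way): paired fancy indexing swaps rows or columns in place, because the right-hand side is copied before assignment.

b2 = np.arange(20).reshape(4, 5)
b2[[0, 1]] = b2[[1, 0]]        # swap the first two rows in place
b2[:, [0, 1]] = b2[:, [1, 0]]  # swap the first two columns in place
print(b2)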

3. Find all the odd numbers in the array np.random.randint(1,10,size=(5,5)) and replace every odd number with 0.

c = np.random.randint(1, 10, size=(5, 5))
print(c)
c[c % 2 != 0] = 0
print(c)
[[8 3 4 3 5]
 [4 7 1 7 9]
 [6 5 2 1 2]
 [3 7 8 4 2]
 [8 4 3 6 4]]
[[8 0 4 0 0]
 [4 0 0 0 0]
 [6 0 2 0 2]
 [0 0 8 4 2]
 [8 4 0 6 4]]
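
If the original array should be preserved, a non-destructive sketch with np.where builds a new array instead of modifying c:

c2 = np.random.randint(1, 10, size=(5, 5))
print(np.where(c2 % 2 == 1, 0, c2))  # odd entries become 0, even entries are kept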

4. Uniformly generate 20 random integers between 1 and 50 and store them in an array d, replace every value greater than or equal to 30 with 0, then find the positions of the 5 largest values in d.

d = np.random.randint(1, 50, size=20)
print(d)
d[d >= 30] = 0
print(d)
idx = d.argsort()[::-1][:5]  # indices of the 5 largest values
print(idx)
[29 40  4 43 27 23  1 44 15 28 22 14 38 17 34 16 35 11  1 32]
[29  0  4  0 27 23  1  0 15 28 22 14  0 17  0 16  0 11  1  0]
[ 0  9  4  5 10]
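
For large arrays, a sketch with np.argpartition finds the top-5 positions in linear time without fully sorting (the five indices come back unordered, so a small final sort orders them by value):

e = np.random.randint(1, 50, size=20)
top5 = np.argpartition(e, -5)[-5:]      # positions of the 5 largest, in no particular order
top5 = top5[np.argsort(e[top5])[::-1]]  # order them largest-first
print(top5)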

5. Load the array iris_2d as follows:

url = "https://archive.ics.uci.edu/ml/machine-learning-databases/iris/iris.data"
iris_2d = np.genfromtxt(url, delimiter=',', dtype='float')
print(iris_2d)
[[5.1 3.5 1.4 0.2 nan]
 [4.9 3.  1.4 0.2 nan]
 [4.7 3.2 1.3 0.2 nan]
 [4.6 3.1 1.5 0.2 nan]
 [5.  3.6 1.4 0.2 nan]
 [5.4 3.9 1.7 0.4 nan]
 [4.6 3.4 1.4 0.3 nan]
 [5.  3.4 1.5 0.2 nan]
 [4.4 2.9 1.4 0.2 nan]
 [4.9 3.1 1.5 0.1 nan]
 [5.4 3.7 1.5 0.2 nan]
 [4.8 3.4 1.6 0.2 nan]
 [4.8 3.  1.4 0.1 nan]
 [4.3 3.  1.1 0.1 nan]
 [5.8 4.  1.2 0.2 nan]
 [5.7 4.4 1.5 0.4 nan]
 [5.4 3.9 1.3 0.4 nan]
 [5.1 3.5 1.4 0.3 nan]
 [5.7 3.8 1.7 0.3 nan]
 [5.1 3.8 1.5 0.3 nan]
 [5.4 3.4 1.7 0.2 nan]
 [5.1 3.7 1.5 0.4 nan]
 [4.6 3.6 1.  0.2 nan]
 [5.1 3.3 1.7 0.5 nan]
 [4.8 3.4 1.9 0.2 nan]
 [5.  3.  1.6 0.2 nan]
 [5.  3.4 1.6 0.4 nan]
 [5.2 3.5 1.5 0.2 nan]
 [5.2 3.4 1.4 0.2 nan]
 [4.7 3.2 1.6 0.2 nan]
 [4.8 3.1 1.6 0.2 nan]
 [5.4 3.4 1.5 0.4 nan]
 [5.2 4.1 1.5 0.1 nan]
 [5.5 4.2 1.4 0.2 nan]
 [4.9 3.1 1.5 0.1 nan]
 [5.  3.2 1.2 0.2 nan]
 [5.5 3.5 1.3 0.2 nan]
 [4.9 3.1 1.5 0.1 nan]
 [4.4 3.  1.3 0.2 nan]
 [5.1 3.4 1.5 0.2 nan]
 [5.  3.5 1.3 0.3 nan]
 [4.5 2.3 1.3 0.3 nan]
 [4.4 3.2 1.3 0.2 nan]
 [5.  3.5 1.6 0.6 nan]
 [5.1 3.8 1.9 0.4 nan]
 [4.8 3.  1.4 0.3 nan]
 [5.1 3.8 1.6 0.2 nan]
 [4.6 3.2 1.4 0.2 nan]
 [5.3 3.7 1.5 0.2 nan]
 [5.  3.3 1.4 0.2 nan]
 [7.  3.2 4.7 1.4 nan]
 [6.4 3.2 4.5 1.5 nan]
 [6.9 3.1 4.9 1.5 nan]
 [5.5 2.3 4.  1.3 nan]
 [6.5 2.8 4.6 1.5 nan]
 [5.7 2.8 4.5 1.3 nan]
 [6.3 3.3 4.7 1.6 nan]
 [4.9 2.4 3.3 1.  nan]
 [6.6 2.9 4.6 1.3 nan]
 [5.2 2.7 3.9 1.4 nan]
 [5.  2.  3.5 1.  nan]
 [5.9 3.  4.2 1.5 nan]
 [6.  2.2 4.  1.  nan]
 [6.1 2.9 4.7 1.4 nan]
 [5.6 2.9 3.6 1.3 nan]
 [6.7 3.1 4.4 1.4 nan]
 [5.6 3.  4.5 1.5 nan]
 [5.8 2.7 4.1 1.  nan]
 [6.2 2.2 4.5 1.5 nan]
 [5.6 2.5 3.9 1.1 nan]
 [5.9 3.2 4.8 1.8 nan]
 [6.1 2.8 4.  1.3 nan]
 [6.3 2.5 4.9 1.5 nan]
 [6.1 2.8 4.7 1.2 nan]
 [6.4 2.9 4.3 1.3 nan]
 [6.6 3.  4.4 1.4 nan]
 [6.8 2.8 4.8 1.4 nan]
 [6.7 3.  5.  1.7 nan]
 [6.  2.9 4.5 1.5 nan]
 [5.7 2.6 3.5 1.  nan]
 [5.5 2.4 3.8 1.1 nan]
 [5.5 2.4 3.7 1.  nan]
 [5.8 2.7 3.9 1.2 nan]
 [6.  2.7 5.1 1.6 nan]
 [5.4 3.  4.5 1.5 nan]
 [6.  3.4 4.5 1.6 nan]
 [6.7 3.1 4.7 1.5 nan]
 [6.3 2.3 4.4 1.3 nan]
 [5.6 3.  4.1 1.3 nan]
 [5.5 2.5 4.  1.3 nan]
 [5.5 2.6 4.4 1.2 nan]
 [6.1 3.  4.6 1.4 nan]
 [5.8 2.6 4.  1.2 nan]
 [5.  2.3 3.3 1.  nan]
 [5.6 2.7 4.2 1.3 nan]
 [5.7 3.  4.2 1.2 nan]
 [5.7 2.9 4.2 1.3 nan]
 [6.2 2.9 4.3 1.3 nan]
 [5.1 2.5 3.  1.1 nan]
 [5.7 2.8 4.1 1.3 nan]
 [6.3 3.3 6.  2.5 nan]
 [5.8 2.7 5.1 1.9 nan]
 [7.1 3.  5.9 2.1 nan]
 [6.3 2.9 5.6 1.8 nan]
 [6.5 3.  5.8 2.2 nan]
 [7.6 3.  6.6 2.1 nan]
 [4.9 2.5 4.5 1.7 nan]
 [7.3 2.9 6.3 1.8 nan]
 [6.7 2.5 5.8 1.8 nan]
 [7.2 3.6 6.1 2.5 nan]
 [6.5 3.2 5.1 2.  nan]
 [6.4 2.7 5.3 1.9 nan]
 [6.8 3.  5.5 2.1 nan]
 [5.7 2.5 5.  2.  nan]
 [5.8 2.8 5.1 2.4 nan]
 [6.4 3.2 5.3 2.3 nan]
 [6.5 3.  5.5 1.8 nan]
 [7.7 3.8 6.7 2.2 nan]
 [7.7 2.6 6.9 2.3 nan]
 [6.  2.2 5.  1.5 nan]
 [6.9 3.2 5.7 2.3 nan]
 [5.6 2.8 4.9 2.  nan]
 [7.7 2.8 6.7 2.  nan]
 [6.3 2.7 4.9 1.8 nan]
 [6.7 3.3 5.7 2.1 nan]
 [7.2 3.2 6.  1.8 nan]
 [6.2 2.8 4.8 1.8 nan]
 [6.1 3.  4.9 1.8 nan]
 [6.4 2.8 5.6 2.1 nan]
 [7.2 3.  5.8 1.6 nan]
 [7.4 2.8 6.1 1.9 nan]
 [7.9 3.8 6.4 2.  nan]
 [6.4 2.8 5.6 2.2 nan]
 [6.3 2.8 5.1 1.5 nan]
 [6.1 2.6 5.6 1.4 nan]
 [7.7 3.  6.1 2.3 nan]
 [6.3 3.4 5.6 2.4 nan]
 [6.4 3.1 5.5 1.8 nan]
 [6.  3.  4.8 1.8 nan]
 [6.9 3.1 5.4 2.1 nan]
 [6.7 3.1 5.6 2.4 nan]
 [6.9 3.1 5.1 2.3 nan]
 [5.8 2.7 5.1 1.9 nan]
 [6.8 3.2 5.9 2.3 nan]
 [6.7 3.3 5.7 2.5 nan]
 [6.7 3.  5.2 2.3 nan]
 [6.3 2.5 5.  1.9 nan]
 [6.5 3.  5.2 2.  nan]
 [6.2 3.4 5.4 2.3 nan]
 [5.9 3.  5.1 1.8 nan]]
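
The fifth column holds the species name, which dtype='float' cannot parse, so it loads as nan. If only the numeric measurements are needed, a sketch with usecols skips that column entirely:

iris_num = np.genfromtxt(url, delimiter=',', dtype='float', usecols=[0, 1, 2, 3])
print(iris_num.shape)  # (150, 4), no all-nan species column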

(1) Insert np.nan values at 20 random positions in the iris_2d dataset.

iris_2d[np.random.randint(0, 150, size=20), np.random.randint(0, 4, size=20)] = np.nan
iris_2d
array([[5.1, 3.5, 1.4, 0.2, nan],
       [nan, 3. , 1.4, 0.2, nan],
       [4.7, 3.2, 1.3, 0.2, nan],
       [4.6, 3.1, 1.5, 0.2, nan],
       [nan, 3.6, 1.4, 0.2, nan],
       [5.4, nan, 1.7, 0.4, nan],
       [4.6, 3.4, nan, nan, nan],
       [5. , 3.4, 1.5, 0.2, nan],
       [4.4, 2.9, 1.4, 0.2, nan],
       [4.9, 3.1, nan, 0.1, nan],
       [5.4, 3.7, 1.5, 0.2, nan],
       [4.8, nan, 1.6, 0.2, nan],
       [4.8, 3. , 1.4, 0.1, nan],
       [nan, 3. , nan, 0.1, nan],
       [5.8, 4. , 1.2, 0.2, nan],
       [5.7, 4.4, 1.5, nan, nan],
       [nan, 3.9, 1.3, 0.4, nan],
       [5.1, 3.5, 1.4, 0.3, nan],
       [5.7, 3.8, 1.7, 0.3, nan],
       [nan, 3.8, 1.5, 0.3, nan],
       [5.4, 3.4, 1.7, 0.2, nan],
       [5.1, 3.7, 1.5, 0.4, nan],
       [4.6, 3.6, nan, 0.2, nan],
       [5.1, 3.3, 1.7, 0.5, nan],
       [4.8, 3.4, 1.9, 0.2, nan],
       [5. , 3. , 1.6, 0.2, nan],
       [5. , 3.4, 1.6, 0.4, nan],
       [5.2, 3.5, 1.5, 0.2, nan],
       [5.2, 3.4, 1.4, 0.2, nan],
       [4.7, 3.2, 1.6, 0.2, nan],
       [4.8, nan, 1.6, 0.2, nan],
       [5.4, 3.4, 1.5, 0.4, nan],
       [5.2, 4.1, 1.5, 0.1, nan],
       [5.5, 4.2, 1.4, nan, nan],
       [nan, 3.1, 1.5, 0.1, nan],
       [5. , 3.2, 1.2, nan, nan],
       [5.5, 3.5, 1.3, 0.2, nan],
       [4.9, 3.1, 1.5, 0.1, nan],
       [4.4, nan, 1.3, 0.2, nan],
       [5.1, 3.4, 1.5, 0.2, nan],
       [5. , 3.5, 1.3, 0.3, nan],
       [4.5, nan, 1.3, nan, nan],
       [4.4, nan, 1.3, nan, nan],
       [nan, 3.5, 1.6, 0.6, nan],
       [5.1, 3.8, 1.9, 0.4, nan],
       [4.8, 3. , 1.4, 0.3, nan],
       [5.1, 3.8, nan, 0.2, nan],
       [4.6, 3.2, 1.4, 0.2, nan],
       [5.3, 3.7, nan, 0.2, nan],
       [5. , 3.3, nan, nan, nan],
       [7. , 3.2, 4.7, nan, nan],
       [6.4, 3.2, 4.5, 1.5, nan],
       [6.9, 3.1, 4.9, 1.5, nan],
       [nan, 2.3, 4. , 1.3, nan],
       [6.5, 2.8, 4.6, nan, nan],
       [5.7, 2.8, 4.5, 1.3, nan],
       [6.3, 3.3, 4.7, 1.6, nan],
       [4.9, 2.4, nan, 1. , nan],
       [6.6, 2.9, 4.6, 1.3, nan],
       [5.2, 2.7, 3.9, nan, nan],
       [5. , 2. , 3.5, 1. , nan],
       [5.9, 3. , 4.2, 1.5, nan],
       [6. , 2.2, 4. , 1. , nan],
       [6.1, 2.9, 4.7, 1.4, nan],
       [5.6, 2.9, 3.6, 1.3, nan],
       [6.7, 3.1, 4.4, 1.4, nan],
       [5.6, 3. , 4.5, 1.5, nan],
       [5.8, 2.7, nan, 1. , nan],
       [6.2, 2.2, 4.5, 1.5, nan],
       [5.6, 2.5, 3.9, 1.1, nan],
       [5.9, 3.2, 4.8, 1.8, nan],
       [6.1, 2.8, 4. , 1.3, nan],
       [6.3, nan, 4.9, 1.5, nan],
       [6.1, 2.8, 4.7, 1.2, nan],
       [6.4, 2.9, 4.3, nan, nan],
       [6.6, 3. , 4.4, 1.4, nan],
       [6.8, 2.8, 4.8, 1.4, nan],
       [6.7, 3. , 5. , 1.7, nan],
       [6. , 2.9, 4.5, nan, nan],
       [nan, 2.6, 3.5, 1. , nan],
       [5.5, 2.4, 3.8, 1.1, nan],
       [5.5, nan, nan, 1. , nan],
       [5.8, 2.7, 3.9, 1.2, nan],
       [6. , 2.7, nan, nan, nan],
       [5.4, 3. , 4.5, 1.5, nan],
       [6. , 3.4, 4.5, 1.6, nan],
       [6.7, 3.1, 4.7, nan, nan],
       [6.3, 2.3, 4.4, 1.3, nan],
       [5.6, 3. , 4.1, 1.3, nan],
       [5.5, 2.5, 4. , 1.3, nan],
       [5.5, nan, 4.4, 1.2, nan],
       [6.1, 3. , 4.6, 1.4, nan],
       [5.8, 2.6, 4. , 1.2, nan],
       [5. , 2.3, 3.3, nan, nan],
       [nan, 2.7, nan, 1.3, nan],
       [5.7, 3. , nan, 1.2, nan],
       [5.7, 2.9, nan, 1.3, nan],
       [6.2, 2.9, 4.3, 1.3, nan],
       [5.1, 2.5, 3. , 1.1, nan],
       [nan, 2.8, 4.1, 1.3, nan],
       [6.3, 3.3, nan, 2.5, nan],
       [5.8, 2.7, 5.1, 1.9, nan],
       [7.1, 3. , nan, 2.1, nan],
       [6.3, 2.9, 5.6, 1.8, nan],
       [6.5, 3. , 5.8, 2.2, nan],
       [7.6, 3. , 6.6, 2.1, nan],
       [4.9, 2.5, 4.5, 1.7, nan],
       [7.3, 2.9, 6.3, nan, nan],
       [6.7, 2.5, 5.8, 1.8, nan],
       [7.2, 3.6, 6.1, 2.5, nan],
       [6.5, 3.2, 5.1, 2. , nan],
       [6.4, 2.7, 5.3, nan, nan],
       [6.8, nan, 5.5, 2.1, nan],
       [5.7, 2.5, 5. , 2. , nan],
       [5.8, 2.8, 5.1, 2.4, nan],
       [6.4, 3.2, 5.3, nan, nan],
       [6.5, 3. , 5.5, 1.8, nan],
       [7.7, 3.8, 6.7, 2.2, nan],
       [7.7, 2.6, 6.9, 2.3, nan],
       [6. , 2.2, 5. , 1.5, nan],
       [6.9, 3.2, 5.7, 2.3, nan],
       [5.6, nan, 4.9, 2. , nan],
       [nan, 2.8, 6.7, 2. , nan],
       [6.3, 2.7, 4.9, nan, nan],
       [6.7, 3.3, 5.7, 2.1, nan],
       [7.2, 3.2, 6. , 1.8, nan],
       [6.2, nan, 4.8, 1.8, nan],
       [6.1, 3. , 4.9, 1.8, nan],
       [6.4, 2.8, 5.6, nan, nan],
       [nan, 3. , 5.8, 1.6, nan],
       [7.4, 2.8, 6.1, 1.9, nan],
       [7.9, nan, nan, 2. , nan],
       [6.4, 2.8, 5.6, 2.2, nan],
       [nan, 2.8, 5.1, 1.5, nan],
       [6.1, 2.6, 5.6, 1.4, nan],
       [nan, 3. , 6.1, 2.3, nan],
       [6.3, nan, nan, 2.4, nan],
       [6.4, 3.1, 5.5, 1.8, nan],
       [6. , 3. , 4.8, nan, nan],
       [6.9, 3.1, 5.4, 2.1, nan],
       [nan, 3.1, 5.6, 2.4, nan],
       [6.9, 3.1, 5.1, 2.3, nan],
       [5.8, nan, 5.1, nan, nan],
       [6.8, 3.2, nan, 2.3, nan],
       [6.7, 3.3, 5.7, 2.5, nan],
       [nan, 3. , 5.2, 2.3, nan],
       [6.3, nan, 5. , 1.9, nan],
       [6.5, 3. , 5.2, 2. , nan],
       [6.2, 3.4, 5.4, 2.3, nan],
       [nan, 3. , 5.1, 1.8, nan]])
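
Because the positions are random, every run produces a different pattern, colliding indices can leave fewer than 20 new nans, and re-running the cell accumulates more of them. A reproducible sketch using NumPy's Generator API:

rng = np.random.default_rng(42)  # seeded, so the same 20 positions every run
rows = rng.integers(0, iris_2d.shape[0], size=20)
cols = rng.integers(0, 4, size=20)
iris_2d[rows, cols] = np.nan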


(2) Find the number and the positions of the missing values in sepallength (column 1) of iris_2d.

print("Number of missing values: \n", np.isnan(iris_2d[:, 0]).sum()) 
print("Position of missing values: \n", np.where(np.isnan(iris_2d[:, 0])))
Number of missing values: 
 18
Position of missing values: 
 (array([  1,   4,  13,  16,  19,  34,  43,  53,  79,  94,  99, 122, 129,
       133, 135, 140, 145, 149]),)
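
The same check extends to all columns at once; a short sketch counting the missing values per column:

print(np.isnan(iris_2d).sum(axis=0))  # one count per column; the species column is all nan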

(3) Filter the rows of iris_2d where petallength (column 3) > 1.5 and sepallength (column 1) < 5.0.

condition = (iris_2d[:, 2] > 1.5) & (iris_2d[:, 0] < 5.0)
r1 = iris_2d[condition]
print(r1)
[[4.8 nan 1.6 0.2 nan]
 [4.8 3.4 1.9 0.2 nan]
 [4.7 3.2 1.6 0.2 nan]
 [4.8 nan 1.6 0.2 nan]
 [4.9 2.5 4.5 1.7 nan]]

(4) Select the rows of iris_2d that contain no nan values.

r = iris_2d[np.sum(np.isnan(iris_2d), axis=1) == 0][:5]
print(r)  # empty: the species column (column 5) is nan in every row
[]
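
The result is empty because the species column (column 5) is nan in every single row, so no row can pass the check. Restricting the test to the four numeric columns, as in this sketch, returns actual rows:

mask = ~np.isnan(iris_2d[:, :4]).any(axis=1)  # True where all numeric columns are present
print(iris_2d[mask][:5])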

(5) Replace nan with 0 in the numpy array.

iris_2d[np.random.randint(150, size=20), np.random.randint(4, size=20)] = np.nan
iris_2d[np.isnan(iris_2d)] = 0
print(iris_2d)
[[5.1 3.5 1.4 0.2 0. ]
 [0.  3.  1.4 0.2 0. ]
 [4.7 3.2 1.3 0.2 0. ]
 [4.6 3.1 1.5 0.2 0. ]
 [0.  3.6 1.4 0.2 0. ]
 [5.4 0.  1.7 0.4 0. ]
 [4.6 3.4 0.  0.  0. ]
 [5.  3.4 1.5 0.  0. ]
 [4.4 2.9 1.4 0.2 0. ]
 [4.9 3.1 0.  0.1 0. ]
 [5.4 3.7 1.5 0.2 0. ]
 [4.8 0.  1.6 0.2 0. ]
 [4.8 3.  1.4 0.1 0. ]
 [0.  3.  0.  0.1 0. ]
 [5.8 4.  1.2 0.2 0. ]
 [5.7 4.4 1.5 0.  0. ]
 [0.  3.9 1.3 0.4 0. ]
 [5.1 3.5 1.4 0.3 0. ]
 [5.7 3.8 1.7 0.3 0. ]
 [0.  3.8 1.5 0.3 0. ]
 [5.4 3.4 1.7 0.2 0. ]
 [5.1 3.7 1.5 0.4 0. ]
 [4.6 3.6 0.  0.2 0. ]
 [5.1 3.3 1.7 0.  0. ]
 [4.8 3.4 1.9 0.2 0. ]
 [5.  3.  1.6 0.2 0. ]
 [5.  3.4 1.6 0.4 0. ]
 [5.2 3.5 1.5 0.2 0. ]
 [5.2 3.4 1.4 0.2 0. ]
 [4.7 0.  1.6 0.2 0. ]
 [4.8 0.  1.6 0.2 0. ]
 [5.4 3.4 0.  0.4 0. ]
 [5.2 4.1 1.5 0.1 0. ]
 [5.5 4.2 1.4 0.  0. ]
 [0.  3.1 1.5 0.1 0. ]
 [5.  3.2 1.2 0.  0. ]
 [5.5 3.5 1.3 0.2 0. ]
 [4.9 3.1 1.5 0.1 0. ]
 [4.4 0.  1.3 0.  0. ]
 [5.1 3.4 1.5 0.2 0. ]
 [5.  3.5 1.3 0.3 0. ]
 [4.5 0.  1.3 0.  0. ]
 [4.4 0.  1.3 0.  0. ]
 [0.  3.5 1.6 0.6 0. ]
 [5.1 3.8 1.9 0.4 0. ]
 [4.8 3.  1.4 0.3 0. ]
 [5.1 3.8 0.  0.2 0. ]
 [0.  3.2 1.4 0.2 0. ]
 [5.3 3.7 0.  0.2 0. ]
 [5.  3.3 0.  0.  0. ]
 [7.  3.2 4.7 0.  0. ]
 [6.4 3.2 4.5 1.5 0. ]
 [6.9 0.  4.9 1.5 0. ]
 [0.  2.3 4.  0.  0. ]
 [6.5 2.8 0.  0.  0. ]
 [5.7 2.8 4.5 1.3 0. ]
 [6.3 3.3 4.7 0.  0. ]
 [4.9 2.4 0.  1.  0. ]
 [6.6 2.9 4.6 1.3 0. ]
 [0.  2.7 3.9 0.  0. ]
 [5.  2.  3.5 1.  0. ]
 [0.  3.  0.  1.5 0. ]
 [6.  2.2 4.  1.  0. ]
 [6.1 2.9 4.7 1.4 0. ]
 [5.6 2.9 3.6 1.3 0. ]
 [6.7 3.1 0.  1.4 0. ]
 [5.6 3.  4.5 1.5 0. ]
 [5.8 2.7 0.  1.  0. ]
 [6.2 0.  4.5 1.5 0. ]
 [5.6 2.5 0.  1.1 0. ]
 [5.9 3.2 4.8 1.8 0. ]
 [6.1 2.8 4.  1.3 0. ]
 [6.3 0.  4.9 1.5 0. ]
 [6.1 2.8 4.7 1.2 0. ]
 [6.4 2.9 4.3 0.  0. ]
 [6.6 3.  4.4 1.4 0. ]
 [6.8 2.8 4.8 0.  0. ]
 [6.7 3.  0.  1.7 0. ]
 [6.  2.9 4.5 0.  0. ]
 [0.  2.6 3.5 1.  0. ]
 [5.5 2.4 3.8 1.1 0. ]
 [5.5 0.  0.  1.  0. ]
 [5.8 2.7 3.9 1.2 0. ]
 [6.  2.7 0.  0.  0. ]
 [0.  3.  4.5 1.5 0. ]
 [6.  0.  4.5 1.6 0. ]
 [6.7 0.  4.7 0.  0. ]
 [6.3 0.  4.4 1.3 0. ]
 [5.6 0.  4.1 1.3 0. ]
 [5.5 2.5 4.  1.3 0. ]
 [5.5 0.  4.4 1.2 0. ]
 [6.1 3.  4.6 1.4 0. ]
 [5.8 2.6 4.  1.2 0. ]
 [5.  2.3 3.3 0.  0. ]
 [0.  2.7 0.  1.3 0. ]
 [5.7 3.  0.  1.2 0. ]
 [5.7 2.9 0.  1.3 0. ]
 [6.2 2.9 4.3 1.3 0. ]
 [5.1 2.5 3.  1.1 0. ]
 [0.  2.8 4.1 1.3 0. ]
 [6.3 3.3 0.  2.5 0. ]
 [5.8 2.7 5.1 1.9 0. ]
 [7.1 3.  0.  2.1 0. ]
 [6.3 0.  5.6 1.8 0. ]
 [6.5 3.  5.8 2.2 0. ]
 [7.6 3.  6.6 2.1 0. ]
 [0.  2.5 4.5 1.7 0. ]
 [7.3 2.9 6.3 0.  0. ]
 [6.7 2.5 5.8 1.8 0. ]
 [7.2 3.6 6.1 2.5 0. ]
 [6.5 3.2 5.1 2.  0. ]
 [6.4 2.7 5.3 0.  0. ]
 [6.8 0.  5.5 2.1 0. ]
 [5.7 2.5 5.  2.  0. ]
 [5.8 2.8 5.1 2.4 0. ]
 [6.4 3.2 5.3 0.  0. ]
 [6.5 3.  5.5 1.8 0. ]
 [7.7 3.8 6.7 2.2 0. ]
 [7.7 2.6 6.9 2.3 0. ]
 [6.  2.2 5.  1.5 0. ]
 [6.9 3.2 5.7 2.3 0. ]
 [5.6 0.  4.9 2.  0. ]
 [0.  2.8 6.7 2.  0. ]
 [6.3 2.7 4.9 0.  0. ]
 [6.7 3.3 5.7 2.1 0. ]
 [7.2 3.2 6.  1.8 0. ]
 [6.2 0.  4.8 1.8 0. ]
 [6.1 3.  4.9 1.8 0. ]
 [6.4 2.8 5.6 0.  0. ]
 [0.  3.  5.8 1.6 0. ]
 [7.4 2.8 6.1 1.9 0. ]
 [7.9 0.  0.  2.  0. ]
 [0.  2.8 5.6 2.2 0. ]
 [0.  0.  5.1 1.5 0. ]
 [6.1 2.6 5.6 0.  0. ]
 [0.  3.  6.1 2.3 0. ]
 [6.3 0.  0.  2.4 0. ]
 [6.4 3.1 5.5 1.8 0. ]
 [6.  3.  4.8 0.  0. ]
 [0.  3.1 5.4 2.1 0. ]
 [0.  3.1 5.6 2.4 0. ]
 [6.9 0.  5.1 0.  0. ]
 [5.8 0.  5.1 0.  0. ]
 [6.8 3.2 0.  2.3 0. ]
 [6.7 3.3 5.7 2.5 0. ]
 [0.  3.  5.2 2.3 0. ]
 [6.3 0.  5.  1.9 0. ]
 [6.5 3.  5.2 2.  0. ]
 [6.2 3.4 5.4 2.3 0. ]
 [0.  3.  5.1 1.8 0. ]]
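
np.nan_to_num performs the same replacement without manual masking; a one-line sketch (the nan= keyword assumes NumPy 1.17 or newer):

iris_2d = np.nan_to_num(iris_2d, nan=0.0)  # returns a new array with nan replaced by 0.0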

(6) Turn the petal lengths (column 3) of iris_2d into a text array: 'small' if petal length < 3, 'medium' if between 3 and 5, and 'large' if >= 5.

names = ('sepallength', 'sepalwidth', 'petallength', 'petalwidth', 'species')  # column names, for reference

petal_length_bin = np.digitize(iris_2d[:, 2].astype('float'), [0, 3, 5, 10])
label_map = {1: 'small', 2: 'medium', 3: 'large', 4: np.nan}
petal_length_cat = [label_map[x] for x in petal_length_bin]
print(petal_length_cat)
['small', 'small', 'small', 'small', 'small', 'small', 'small', 'small', 'small', 'small', 'small', 'small', 'small', 'small', 'small', 'small', 'small', 'small', 'small', 'small', 'small', 'small', 'small', 'small', 'small', 'small', 'small', 'small', 'small', 'small', 'small', 'small', 'small', 'small', 'small', 'small', 'small', 'small', 'small', 'small', 'small', 'small', 'small', 'small', 'small', 'small', 'small', 'small', 'small', 'small', 'medium', 'medium', 'medium', 'medium', 'small', 'medium', 'medium', 'small', 'medium', 'medium', 'medium', 'small', 'medium', 'medium', 'medium', 'small', 'medium', 'small', 'medium', 'small', 'medium', 'medium', 'medium', 'medium', 'medium', 'medium', 'medium', 'small', 'medium', 'medium', 'medium', 'small', 'medium', 'small', 'medium', 'medium', 'medium', 'medium', 'medium', 'medium', 'medium', 'medium', 'medium', 'medium', 'small', 'small', 'small', 'medium', 'medium', 'medium', 'small', 'large', 'small', 'large', 'large', 'large', 'medium', 'large', 'large', 'large', 'large', 'large', 'large', 'large', 'large', 'large', 'large', 'large', 'large', 'large', 'large', 'medium', 'large', 'medium', 'large', 'large', 'medium', 'medium', 'large', 'large', 'large', 'small', 'large', 'large', 'large', 'large', 'small', 'large', 'medium', 'large', 'large', 'large', 'large', 'small', 'large', 'large', 'large', 'large', 'large', 'large']
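
np.digitize maps each value to a bin index (1, 2, or 3 for the bins [0,3), [3,5), [5,10)), and the dictionary turns indices into labels. An equivalent sketch with np.select skips the index step:

pl = iris_2d[:, 2].astype(float)
labels = np.select([pl < 3, pl < 5], ['small', 'medium'], default='large')
print(labels[:10])  # note: nan values fail both conditions and fall through to the default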

(7) Create a new column volume in iris_2d, where volume = (pi * petallength * sepallength^2) / 3.

sepallength = iris_2d[:, 0].astype('float')
petallength = iris_2d[:, 2].astype('float')
volume = (np.pi * petallength * (sepallength**2))/3
volume = volume[:, np.newaxis]
out = np.hstack([iris_2d, volume])
r1 = out[:10]
print(r1)
[[ 5.1         3.5         1.4         0.2         0.         38.13265163]
 [ 0.          3.          1.4         0.2         0.          0.        ]
 [ 4.7         3.2         1.3         0.2         0.         30.07237208]
 [ 4.6         3.1         1.5         0.2         0.         33.23805027]
 [ 0.          3.6         1.4         0.2         0.          0.        ]
 [ 5.4         0.          1.7         0.4         0.         51.91167701]
 [ 4.6         3.4         0.          0.          0.          0.        ]
 [ 5.          3.4         1.5         0.          0.         39.26990817]
 [ 4.4         2.9         1.4         0.2         0.         28.38324243]
 [ 4.9         3.1         0.          0.1         0.          0.        ]]

(8) Find the position of the first value greater than 1.0 in the 4th column (petal width) of the iris dataset.

r1 = np.argwhere(iris_2d[:, 3].astype(float) > 1.0)[0]
print(r1)
[51]
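
np.argmax on the boolean mask is an equivalent sketch: it returns the first True position directly as a scalar (assuming at least one match exists, otherwise it silently returns 0):

first = np.argmax(iris_2d[:, 3].astype(float) > 1.0)
print(first)  # same index as argwhere(...)[0]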

6. Use a numpy array to compute the perimeter of the polygon formed by the 5 vertices (1,9), (5,12), (8,20), (4,10) and (2,8).

import numpy as np

pts = np.array([(1, 9), (5, 12), (8, 20), (4, 10), (2, 8)])
# pair each vertex with the next one, wrapping around so the polygon is closed
diffs = pts - np.roll(pts, -1, axis=0)
# the perimeter is the sum of the Euclidean lengths of the edges
perimeter = np.linalg.norm(diffs, axis=1).sum()
print(perimeter)
28.556974046705823
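
A quick sanity check on the result: print the individual edge lengths and confirm they sum to the perimeter.

edges = np.linalg.norm(pts - np.roll(pts, -1, axis=0), axis=1)
print(edges)        # [ 5.      8.544  10.7703  2.8284  1.4142] (rounded)
print(edges.sum())  # 28.556974046705823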

