Multilabel classification
Multi-Label vs. Multi-Class:
    A movie rated as exactly one of General, Protected, Parental Guidance, or Restricted belongs to a single category; that is multi-class classification.
    A movie can carry several genres at once, e.g. comedy, drama, and romance; that is multi-label classification.
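The difference shows up directly in how the targets are encoded: a multi-class target is one class index per sample, while a multi-label target is a binary indicator matrix. A minimal sketch (the genre names are illustrative):

```python
import numpy as np

# Multi-class: each movie gets exactly one rating category (a class index)
ratings = np.array([0, 2, 1, 3])

# Multi-label: each movie can carry several genres at once, encoded as a
# binary indicator matrix (rows = movies, columns = genres)
genres = ["comedy", "drama", "romance"]
Y = np.array([[1, 0, 1],   # comedy + romance
              [0, 1, 0],   # drama only
              [1, 1, 1]])  # all three genres

print(Y.sum(axis=1).tolist())  # labels per movie: [2, 1, 3]
```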
Example: simulating a multi-label document classification problem. The dataset is generated randomly as follows:
    1. Pick the number of labels: n ~ Poisson(n_labels)
    2. n times, choose a class c: c ~ Multinomial(theta)
    3. Pick the document length: k ~ Poisson(length)
    4. k times, choose a word: w ~ Multinomial(theta_c)
    Poisson distribution: describes the probability of a given number of random events occurring in a fixed interval of time.
    Multinomial distribution: a generalization of the binomial distribution. The classic binomial example is a coin flip: if the probability of heads is p, the probability of getting k heads in n flips follows a binomial distribution. Extending the binomial formula to more than two outcomes gives the multinomial distribution.
In the process above, rejection sampling is used to make sure that n (the number of labels) is greater than 2 and that the document length is never zero; likewise, classes that have already been chosen are rejected. Documents labeled with both classes are shown surrounded by two colored circles.
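The four sampling steps can be sketched directly with NumPy. This is a simplified illustration (without the rejection step), and the vocabulary size, class count, and rate parameters are made-up values:

```python
import numpy as np

rng = np.random.RandomState(0)
n_classes, n_words = 3, 20
theta = np.ones(n_classes) / n_classes                # class probabilities
theta_c = rng.dirichlet(np.ones(n_words), n_classes)  # per-class word distributions

n = rng.poisson(lam=2)                            # 1. number of labels: n ~ Poisson(n_labels)
classes = rng.choice(n_classes, size=n, p=theta)  # 2. n classes: c ~ Multinomial(theta)
k = rng.poisson(lam=50)                           # 3. document length: k ~ Poisson(length)

# 4. k times, draw a word w ~ Multinomial(theta_c) from one of the chosen classes
word_counts = np.zeros(n_words, dtype=int)
for _ in range(k):
    c = rng.choice(classes) if n > 0 else rng.choice(n_classes)
    word_counts += rng.multinomial(1, theta_c[c])
```

Each multinomial draw adds exactly one word, so `word_counts` sums to the document length `k`.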
For visualization, the data is projected onto the first two principal components found by PCA (Principal Component Analysis) and CCA (Canonical Correlation Analysis) before classification. Using sklearn.multiclass.OneVsRestClassifier as the metaclassifier, two SVCs with linear kernels learn a discriminative model for each class.
    PCA performs unsupervised dimensionality reduction, while CCA performs supervised dimensionality reduction.

(1) Import the libraries

    numpy : numeric arrays
    matplotlib.pyplot : plotting
    sklearn.datasets import make_multilabel_classification : generate a random multi-label classification problem
    sklearn.multiclass import OneVsRestClassifier : import the one-vs-rest strategy
    sklearn.svm import SVC : import Support Vector Classification
    sklearn.decomposition import PCA : import Principal Component Analysis
    sklearn.cross_decomposition import CCA : import Canonical Correlation Analysis
import numpy as np
import matplotlib.pyplot as plt

from sklearn.datasets import make_multilabel_classification
from sklearn.multiclass import OneVsRestClassifier
from sklearn.svm import SVC
from sklearn.decomposition import PCA
from sklearn.cross_decomposition import CCA

(2) Define the hyperplane-plotting function

    np.linspace() : returns evenly spaced numbers over a specified interval
def plot_hyperplane(clf, min_x, max_x, linestyle, label):
    # get the separating hyperplane
    w = clf.coef_[0]
    a = -w[0] / w[1]
    xx = np.linspace(min_x - 5, max_x + 5)  # make sure the line is long enough
    yy = a * xx - (clf.intercept_[0]) / w[1]
    plt.plot(xx, yy, linestyle, label=label)
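Why `a = -w[0] / w[1]` and `yy = a * xx - intercept / w[1]`: a linear SVC's boundary is the set of points with w.x + b = 0, and solving for the second coordinate gives x1 = -(w0 / w1) * x0 - b / w1. A quick check on a toy fit (the four points are made up):

```python
import numpy as np
from sklearn.svm import SVC

# Two small linearly separable groups
X = np.array([[0.0, 0.0], [0.0, 1.0], [3.0, 3.0], [3.0, 4.0]])
y = np.array([0, 0, 1, 1])

clf = SVC(kernel='linear').fit(X, y)
w, b = clf.coef_[0], clf.intercept_[0]

# Every point on the plotted line satisfies w . x + b = 0
xx = np.linspace(-1, 5)
yy = (-w[0] / w[1]) * xx - b / w[1]
residual = w[0] * xx + w[1] * yy + b
print(np.abs(residual).max())  # ~0, up to floating-point error
```

Note that the formula divides by `w[1]`, so it assumes the boundary is not vertical.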

(3) Define the subplot-plotting function

    PCA(n_components=None, copy=True, whiten=False, svd_solver='auto', tol=0.0, iterated_power='auto', random_state=None)
    n_components : number of components to keep; this example keeps 2
    CCA(n_components=2, scale=True, max_iter=500, tol=1e-06, copy=True)
    n_components : number of components to keep; this example keeps 2
    scale : whether to scale the data
    OneVsRestClassifier(estimator, n_jobs=None) : one-vs-rest (OvR) multiclass/multilabel strategy
    estimator : the base estimator; this example uses SVC
def plot_subfigure(X, Y, subplot, title, transform):
    if transform == "pca":
        X = PCA(n_components=2).fit_transform(X)
    elif transform == "cca":
        X = CCA(n_components=2).fit(X, Y).transform(X)
    else:
        raise ValueError

    min_x = np.min(X[:, 0])
    max_x = np.max(X[:, 0])

    min_y = np.min(X[:, 1])
    max_y = np.max(X[:, 1])

    classif = OneVsRestClassifier(SVC(kernel='linear'))
    classif.fit(X, Y)

    plt.subplot(2, 2, subplot)
    plt.title(title)

    zero_class = np.where(Y[:, 0])
    one_class = np.where(Y[:, 1])
    plt.scatter(X[:, 0], X[:, 1], s=40, c='gray', edgecolors=(0, 0, 0))
    plt.scatter(X[zero_class, 0], X[zero_class, 1], s=160, edgecolors='b',
                facecolors='none', linewidths=2, label='Class 1')
    plt.scatter(X[one_class, 0], X[one_class, 1], s=80, edgecolors='orange',
                facecolors='none', linewidths=2, label='Class 2')

    plot_hyperplane(classif.estimators_[0], min_x, max_x, 'k--',
                    'Boundary\nfor class 1')
    plot_hyperplane(classif.estimators_[1], min_x, max_x, 'k-.',
                    'Boundary\nfor class 2')
    plt.xticks(())
    plt.yticks(())

    plt.xlim(min_x - .5 * max_x, max_x + .5 * max_x)
    plt.ylim(min_y - .5 * max_y, max_y + .5 * max_y)
    if subplot == 2:
        plt.xlabel('First principal component')
        plt.ylabel('Second principal component')
        plt.legend(loc="upper left")


plt.figure(figsize=(8, 6))
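`plot_hyperplane` receives `classif.estimators_[0]` and `classif.estimators_[1]` because `OneVsRestClassifier` fits one binary SVC per column of `Y` and stores them in `estimators_`. A quick check:

```python
from sklearn.datasets import make_multilabel_classification
from sklearn.multiclass import OneVsRestClassifier
from sklearn.svm import SVC

X, Y = make_multilabel_classification(n_classes=2, n_labels=1, random_state=1)

classif = OneVsRestClassifier(SVC(kernel='linear')).fit(X, Y)
print(len(classif.estimators_))      # one fitted SVC per class: 2
print(classif.predict(X[:3]).shape)  # predictions keep the (n_samples, n_classes) shape
```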

(4) Call the functions and show the figure

X, Y = make_multilabel_classification(n_classes=2, n_labels=1,
                                      allow_unlabeled=True,
                                      random_state=1)

plot_subfigure(X, Y, 1, "With unlabeled samples + CCA", "cca")
plot_subfigure(X, Y, 2, "With unlabeled samples + PCA", "pca")

X, Y = make_multilabel_classification(n_classes=2, n_labels=1,
                                      allow_unlabeled=False,
                                      random_state=1)

plot_subfigure(X, Y, 3, "Without unlabeled samples + CCA", "cca")
plot_subfigure(X, Y, 4, "Without unlabeled samples + PCA", "pca")

plt.subplots_adjust(.04, .02, .97, .94, .09, .2)
plt.show()
    In the figure, "unlabeled samples" does not mean that the labels are unknown (as in semi-supervised learning); it means the samples simply have no label at all.
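With `allow_unlabeled=True`, some rows of `Y` may be all zeros, meaning that sample belongs to no class at all. This can be verified directly:

```python
from sklearn.datasets import make_multilabel_classification

X, Y = make_multilabel_classification(n_classes=2, n_labels=1,
                                      allow_unlabeled=True, random_state=1)
print((Y.sum(axis=1) == 0).sum())  # number of samples with no label at all

X2, Y2 = make_multilabel_classification(n_classes=2, n_labels=1,
                                        allow_unlabeled=False, random_state=1)
print((Y2.sum(axis=1) == 0).sum())  # 0: every sample has at least one label
```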

(5) Complete code

print(__doc__)

import numpy as np
import matplotlib.pyplot as plt

from sklearn.datasets import make_multilabel_classification
from sklearn.multiclass import OneVsRestClassifier
from sklearn.svm import SVC
from sklearn.decomposition import PCA
from sklearn.cross_decomposition import CCA


def plot_hyperplane(clf, min_x, max_x, linestyle, label):
    # get the separating hyperplane
    w = clf.coef_[0]
    a = -w[0] / w[1]
    xx = np.linspace(min_x - 5, max_x + 5)  # make sure the line is long enough
    yy = a * xx - (clf.intercept_[0]) / w[1]
    plt.plot(xx, yy, linestyle, label=label)


def plot_subfigure(X, Y, subplot, title, transform):
    if transform == "pca":
        X = PCA(n_components=2).fit_transform(X)
    elif transform == "cca":
        X = CCA(n_components=2).fit(X, Y).transform(X)
    else:
        raise ValueError

    min_x = np.min(X[:, 0])
    max_x = np.max(X[:, 0])

    min_y = np.min(X[:, 1])
    max_y = np.max(X[:, 1])

    classif = OneVsRestClassifier(SVC(kernel='linear'))
    classif.fit(X, Y)

    plt.subplot(2, 2, subplot)
    plt.title(title)

    zero_class = np.where(Y[:, 0])
    one_class = np.where(Y[:, 1])
    plt.scatter(X[:, 0], X[:, 1], s=40, c='gray', edgecolors=(0, 0, 0))
    plt.scatter(X[zero_class, 0], X[zero_class, 1], s=160, edgecolors='b',
                facecolors='none', linewidths=2, label='Class 1')
    plt.scatter(X[one_class, 0], X[one_class, 1], s=80, edgecolors='orange',
                facecolors='none', linewidths=2, label='Class 2')

    plot_hyperplane(classif.estimators_[0], min_x, max_x, 'k--',
                    'Boundary\nfor class 1')
    plot_hyperplane(classif.estimators_[1], min_x, max_x, 'k-.',
                    'Boundary\nfor class 2')
    plt.xticks(())
    plt.yticks(())

    plt.xlim(min_x - .5 * max_x, max_x + .5 * max_x)
    plt.ylim(min_y - .5 * max_y, max_y + .5 * max_y)
    if subplot == 2:
        plt.xlabel('First principal component')
        plt.ylabel('Second principal component')
        plt.legend(loc="upper left")


plt.figure(figsize=(8, 6))

X, Y = make_multilabel_classification(n_classes=2, n_labels=1,
                                      allow_unlabeled=True,
                                      random_state=1)

plot_subfigure(X, Y, 1, "With unlabeled samples + CCA", "cca")
plot_subfigure(X, Y, 2, "With unlabeled samples + PCA", "pca")

X, Y = make_multilabel_classification(n_classes=2, n_labels=1,
                                      allow_unlabeled=False,
                                      random_state=1)

plot_subfigure(X, Y, 3, "Without unlabeled samples + CCA", "cca")
plot_subfigure(X, Y, 4, "Without unlabeled samples + PCA", "pca")

plt.subplots_adjust(.04, .02, .97, .94, .09, .2)
plt.show()