Machine Learning: Logistic Regression (Part 2)
1.1.4 Logistic Regression
Logistic regression, despite having "regression" in its name, is in fact a linear model for classification. The binomial model is defined by the conditional probabilities:

$$P(Y=1\mid x)=\frac{\exp(w\cdot x+b)}{1+\exp(w\cdot x+b)} \tag{1}$$

$$P(Y=0\mid x)=\frac{1}{1+\exp(w\cdot x+b)} \tag{2}$$

A characteristic of the logistic regression model: the odds of an event are the ratio of the probability that the event occurs to the probability that it does not occur. If the event occurs with probability p, it fails to occur with probability 1-p, and the log-odds are:

$$\mathrm{logit}(p)=\log\frac{p}{1-p}$$

Substituting (1) and (2) and simplifying:

$$\log\frac{P(Y=1\mid x)}{1-P(Y=1\mid x)}=\log\frac{\exp(w\cdot x+b)/(1+\exp(w\cdot x+b))}{1/(1+\exp(w\cdot x+b))}=w\cdot x+b$$

Conclusion: the log-odds of the output Y=1 are given by a linear function of the input x. For example, when $w\cdot x+b=0$ the two classes are equally likely and $P(Y=1\mid x)=0.5$.
Parameter estimation
The parameters of logistic regression are estimated with the likelihood function. Writing $\pi(x)=P(Y=1\mid x)$, for training data $(x_i, y_i),\ i=1,\dots,N$:

$$L(w)=\prod_{i=1}^{N}[\pi(x_i)]^{y_i}[1-\pi(x_i)]^{1-y_i}$$

Taking logarithms gives the log-likelihood, whose negative serves as the loss function; an L1 or L2 regularization term can also be added before updating the parameters with gradient descent. The full derivation is omitted here; the material above follows Li Hang's 《统计学习方法》 (Statistical Learning Methods), which treats it very well.
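To make the update concrete, here is a minimal NumPy sketch (an illustration added here, not from the book) of gradient descent on the mean negative log-likelihood; the learning rate, iteration count, and bias handling are illustrative assumptions.

import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fit_logreg(X, y, lr=0.1, n_iter=1000):
    # Absorb the bias b into w by appending a constant column to X
    Xb = np.hstack([X, np.ones((X.shape[0], 1))])
    w = np.zeros(Xb.shape[1])
    for _ in range(n_iter):
        p = sigmoid(Xb @ w)             # pi(x_i) = P(Y=1 | x_i)
        grad = Xb.T @ (p - y) / len(y)  # gradient of the mean negative log-likelihood
        w -= lr * grad
    return w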
The conditional probability function here is the sigmoid function:

$$\sigma(z)=\frac{1}{1+e^{-z}}$$

Its output lies in the open interval (0, 1), mapping the possible outcome of a single Bernoulli trial to a probability.
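The curve can be reproduced with a few lines of matplotlib (a quick illustration added here, standing in for the original figure):

import numpy as np
import matplotlib.pyplot as plt

z = np.linspace(-10, 10, 200)
plt.plot(z, 1.0 / (1.0 + np.exp(-z)))  # sigma(z) = 1 / (1 + e^(-z))
plt.xlabel('z')
plt.ylabel('sigma(z)')
plt.title('Sigmoid function')
plt.show()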
Exploring the L1 penalty and sparsity in logistic regression
import numpy as np
import matplotlib.pyplot as plt
from sklearn.linear_model import LogisticRegression
from sklearn import datasets
from sklearn.preprocessing import StandardScaler

# Load the digits data
digits = datasets.load_digits()
X, y = digits.data, digits.target

# Standardize the features
X = StandardScaler().fit_transform(X)

# Make the task binary: digits 0-4 become class 0, digits 5-9 class 1
y = (y > 4).astype(int)

# Sweep over the inverse regularization strength C
for i, C in enumerate((100, 1, 0.01)):
    # tol: tolerance for the stopping criterion;
    # liblinear is used because it supports the L1 penalty
    clf_l1_LR = LogisticRegression(penalty='l1', C=C, tol=0.01, solver='liblinear')
    clf_l1_LR.fit(X, y)
    # Flatten the coefficient matrix into a vector
    coef_l1_LR = clf_l1_LR.coef_.ravel()
    # Percentage of coefficients driven exactly to zero
    sparsity_l1_LR = np.mean(coef_l1_LR == 0) * 100
    print("C=%.2f" % C)
    # Report how sparse the L1 solution is
    print("Sparsity with L1 penalty: %.2f" % sparsity_l1_LR)
    print("score with L1 penalty: %.2f" % clf_l1_LR.score(X, y))
    l1_plot = plt.subplot(3, 1, i + 1)
    if i == 0:
        l1_plot.set_title("L1 penalty")
    l1_plot.imshow(np.abs(coef_l1_LR.reshape(8, 8)), interpolation='nearest',
                   cmap='binary', vmax=1, vmin=0)
    plt.text(-8, 3, "C = %.2f" % C)
    l1_plot.set_xticks(())
    l1_plot.set_yticks(())
plt.show()
As the figure above shows, the smaller the penalty coefficient C (C is the inverse of the regularization strength, so smaller C means stronger regularization), the greater the sparsity of the coefficients.
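For contrast, rerunning the same loop with penalty='l2' (an extension added here, reusing X and y from the snippet above) shows that the L2 penalty shrinks coefficients toward zero but rarely makes them exactly zero:

for C in (100, 1, 0.01):
    clf_l2_LR = LogisticRegression(penalty='l2', C=C, tol=0.01, solver='liblinear')
    clf_l2_LR.fit(X, y)
    # Sparsity stays near 0%: L2 shrinks but does not zero out coefficients
    sparsity_l2_LR = np.mean(clf_l2_LR.coef_.ravel() == 0) * 100
    print("C=%.2f  Sparsity with L2 penalty: %.2f" % (C, sparsity_l2_LR))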
Multiclass classification with Logistic Regression
import numpy as np
import matplotlib.pyplot as plt
from sklearn import linear_model, datasets
# import some data to play with
iris = datasets.load_iris()
X = iris.data[:, :2] # we only take the first two features.
Y = iris.target
h = .02 # step size in the mesh
logreg = linear_model.LogisticRegression(C=1e5)
# Create an instance of the Logistic Regression classifier and fit the data.
logreg.fit(X, Y)
# Plot the decision boundary. For that, we will assign a color to each
# point in the mesh [x_min, x_max]x[y_min, y_max].
x_min, x_max = X[:, 0].min() - .5, X[:, 0].max() + .5
y_min, y_max = X[:, 1].min() - .5, X[:, 1].max() + .5
xx, yy = np.meshgrid(np.arange(x_min, x_max, h), np.arange(y_min, y_max, h))
Z = logreg.predict(np.c_[xx.ravel(), yy.ravel()])
# Put the result into a color plot
Z = Z.reshape(xx.shape)
plt.figure(1, figsize=(4, 3))
plt.pcolormesh(xx, yy, Z, cmap=plt.cm.Paired)
# Plot also the training points
plt.scatter(X[:, 0], X[:, 1], c=Y, edgecolors='k', cmap=plt.cm.Paired)
plt.xlabel('Sepal length')
plt.ylabel('Sepal width')
plt.xlim(xx.min(), xx.max())
plt.ylim(yy.min(), yy.max())
plt.xticks(())
plt.yticks(())
plt.show()
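As a usage note (an addition, not part of the original example), the fitted model can also report per-class probabilities for new measurements:

# One hypothetical sample: sepal length 5.0 cm, sepal width 3.5 cm
sample = np.array([[5.0, 3.5]])
print(logreg.predict(sample))        # predicted iris class label
print(logreg.predict_proba(sample))  # one probability per class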