Copyright notice: https://blog.csdn.net/qq_26386707/article/details/79404122
Machine Learning (3): Least-squares Classification
Chenjing Ding
2018/02/28
notation | meaning
---|---
M | the number of mixture components
x_n | the n-th input vector
N | the number of training input vectors
K | the number of classes
w | a column vector of the weight matrix
W | weight matrix
X | input matrix
To be clear: all vectors in this passage are column vectors, so their transposes are row vectors; capital letters denote matrices, while lowercase letters denote vectors.
1. General Classification Problem
1.1 One-sample input case
Let's consider K linear discriminant models, one per class (the bias term is absorbed into w_k by appending a constant 1 to each input x):

y_k(x) = w_k^T x,  k = 1, ..., K

Stacking the K outputs, we obtain

y(x) = W^T x

which is a K-dimensional column vector; the k-th column of the weight matrix W is w_k.
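As a minimal NumPy sketch of the one-sample case (the dimensions D = 3 and K = 2 and the random weights are made up for illustration):

```python
import numpy as np

D, K = 3, 2  # assumed sizes: 3 features, 2 classes

rng = np.random.default_rng(0)
W = rng.standard_normal((D + 1, K))  # weight matrix; column k is w_k (bias absorbed)
x = np.concatenate(([1.0], rng.standard_normal(D)))  # prepend 1 to absorb the bias

y = W.T @ x          # y(x) = W^T x, a K-dimensional column vector
pred = np.argmax(y)  # assign x to the class with the largest discriminant value
print(y.shape, pred)
```

The input x is classified into the class whose discriminant y_k(x) is largest.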
1.2 Input as a matrix
For the entire data set, stack the inputs into a matrix X whose n-th row is x_n^T, so X has N rows. Likewise, let T be the N x K target matrix whose n-th row is the target vector t_n^T (a 1-of-K, i.e. one-hot, coding). The model outputs for all samples are then collected in

Y(X) = X W,

an N x K matrix whose n-th row is y(x_n)^T.
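A short sketch of assembling X and T for a toy data set (the sample values, N = 4, D = 2, and K = 3 are all invented for illustration):

```python
import numpy as np

# Toy data: 4 samples with 2 features each, and class labels in {0, 1, 2}.
features = np.array([[0.5, 1.2], [1.1, -0.3], [-0.7, 0.8], [0.2, 0.9]])
labels = np.array([0, 2, 1, 0])

N, D = features.shape
K = 3

X = np.hstack([np.ones((N, 1)), features])  # n-th row is x_n^T with a leading 1 (bias)
T = np.eye(K)[labels]                       # target matrix: n-th row is one-hot t_n^T
print(X.shape, T.shape)
```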
2. Closed-form solution
Try to find the closed-form solution of W directly, by minimizing the sum-of-squares error:

E(W) = (1/2) Tr{ (X W - T)^T (X W - T) }

Setting the derivative with respect to W to zero gives X^T X W = X^T T, provided that the inverse of X^T X exists. Thus the closed-form solution for W is:

W = (X^T X)^{-1} X^T T = X† T

where X† = (X^T X)^{-1} X^T is the pseudo-inverse of X.
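The closed-form solution can be sketched end to end with NumPy's pseudo-inverse; the two Gaussian clusters below are invented toy data, not from the original post:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy data: N = 100 samples, 2 features, K = 2 well-separated classes.
N, K = 100, 2
X0 = rng.normal([-1.0, -1.0], 0.5, size=(N // 2, 2))
X1 = rng.normal([+1.0, +1.0], 0.5, size=(N // 2, 2))
X = np.hstack([np.ones((N, 1)), np.vstack([X0, X1])])  # absorb bias with leading 1s
T = np.zeros((N, K))
T[:N // 2, 0] = 1.0  # one-hot targets: first half is class 0,
T[N // 2:, 1] = 1.0  # second half is class 1

# Closed-form solution: W = (X^T X)^{-1} X^T T = X† T
W = np.linalg.pinv(X) @ T

pred = np.argmax(X @ W, axis=1)   # predict via the largest output in each row of X W
true = np.argmax(T, axis=1)
accuracy = (pred == true).mean()
print(accuracy)
```

Using `np.linalg.pinv` rather than explicitly inverting X^T X is the standard numerically safer choice, and it also covers the case where X^T X is singular.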
3. Problems
- Least-squares is very sensitive to outliers!
- Least-squares corresponds to maximum likelihood under the assumption of a Gaussian conditional distribution. However, our binary target vectors clearly have a non-Gaussian distribution (a 0-1 Bernoulli distribution when K is 2)!
These problems will be discussed later.
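The outlier sensitivity above can be demonstrated numerically: even outliers that lie on the *correct* side of the decision boundary drag the least-squares fit towards them. The toy data below is invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(2)

# Two well-separated 2-D classes (assumed toy data).
A = rng.normal([-1.0, 0.0], 0.4, size=(30, 2))
B = rng.normal([+1.0, 0.0], 0.4, size=(30, 2))

def fit(points_a, points_b):
    """Least-squares weight matrix for a 2-class problem (bias absorbed into X)."""
    pts = np.vstack([points_a, points_b])
    X = np.hstack([np.ones((len(pts), 1)), pts])
    T = np.zeros((len(pts), 2))
    T[:len(points_a), 0] = 1.0
    T[len(points_a):, 1] = 1.0
    return np.linalg.pinv(X) @ T

W_clean = fit(A, B)

# Add extreme points that are correctly classified, yet far from the rest of class B.
B_out = np.vstack([B, rng.normal([+8.0, +8.0], 0.4, size=(10, 2))])
W_out = fit(A, B_out)

# The weights move noticeably even though the added points were never misclassified.
shift = np.linalg.norm(W_clean - W_out)
print(shift)
```

Because the squared error penalizes points that are "too correct" (outputs far beyond the target value), such outliers distort the decision boundary; a logistic model does not suffer from this.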