1、Getting feature importance weights
1) Inspecting the feature weights a model learned from its training set is important for understanding which features the model favors, tuning model parameters, and analyzing the data.
2) Example:
import xgboost as xgb

def load_model(X_train, X_test, y_train, y_test):
    params = {
        'booster': 'gbtree',
        'objective': 'multi:softmax',
        'num_class': 3,          # multi:softmax requires num_class; set it to the number of classes in y_train
        'gamma': 0.1,
        'max_depth': 8,
        'lambda': 2,
        'subsample': 0.7,
        'colsample_bytree': 0.7,
        'min_child_weight': 3,
        'verbosity': 0,          # 'silent' was removed in recent XGBoost; 'verbosity' is the current equivalent
        'eta': 0.1,
        'seed': 123,
        'nthread': 4,
    }
    xgb_train = xgb.DMatrix(X_train, label=y_train)
    xgb_num_rounds = 100
    model = xgb.train(params, xgb_train, xgb_num_rounds)
    # Importance scores; get_score supports several importance_type values
    # 'weight' (the default): how many times a feature is used to split
    # importance_weight = model.get_score(importance_type='weight')
    # 'gain': average information gain of the splits that use the feature
    # importance_weight = model.get_score(importance_type='gain')
    importance_weight = model.get_score()
    print(importance_weight)
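`get_score()` returns a plain dict mapping feature names to scores, so ranking features is just a sort. A minimal sketch, using a hypothetical output dict (real keys depend on the feature names in your DMatrix):

```python
# Hypothetical get_score() output, for illustration only
importance_weight = {'f0': 12.0, 'f2': 30.0, 'f1': 5.0}

# Rank features by descending importance
ranked = sorted(importance_weight.items(), key=lambda kv: kv[1], reverse=True)
print(ranked)  # → [('f2', 30.0), ('f0', 12.0), ('f1', 5.0)]
```

Note that features never used in any split are absent from the dict, so a missing key means zero importance, not an error.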
2、Randomly splitting train and test sets in sklearn
1) Parameters of train_test_split
data: the feature set (without labels); accepts a list, numpy array, scipy sparse matrix, or pandas DataFrame
label: the label set; accepts a list, numpy array, scipy sparse matrix, or pandas DataFrame
train_size: proportion of the data used for training; a float between 0.0 and 1.0 (an int is treated as an absolute number of samples)
test_size: proportion of the data used for testing; a float between 0.0 and 1.0 (an int is treated as an absolute number of samples)
random_state: random seed; fixing it makes the split reproducible
2) Example:
# import the function
from sklearn.model_selection import train_test_split
# split into training and test sets
X_train, X_test, y_train, y_test = train_test_split(data, label, test_size=0.2, random_state=123)
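Under the hood, this kind of split amounts to shuffling the sample indices with a seeded generator and slicing them. A minimal sketch of that idea using only the standard library (simple_train_test_split is a hypothetical helper, not sklearn's implementation):

```python
import random

def simple_train_test_split(data, label, test_size=0.2, random_state=None):
    """Shuffle indices with a seeded RNG, then slice into test and train."""
    n = len(data)
    idx = list(range(n))
    random.Random(random_state).shuffle(idx)  # seeded => reproducible split
    n_test = int(round(n * test_size))
    test_idx, train_idx = idx[:n_test], idx[n_test:]
    X_train = [data[i] for i in train_idx]
    X_test = [data[i] for i in test_idx]
    y_train = [label[i] for i in train_idx]
    y_test = [label[i] for i in test_idx]
    return X_train, X_test, y_train, y_test

data = [[i] for i in range(10)]
label = list(range(10))
X_train, X_test, y_train, y_test = simple_train_test_split(
    data, label, test_size=0.2, random_state=123)
print(len(X_train), len(X_test))  # → 8 2
```

Passing the same random_state always yields the same partition, which is why fixing it matters for reproducible experiments.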