Xgboost: predicting after cross-validation with xgboost [question]

Created on 2014-11-01  ·  3 comments  ·  Source: dmlc/xgboost

This is my first trial with xgboost (very fast!), but I'm a little bit confused.
In fact, I trained a model using xgb.cv as follows:

```r
xgbmodel <- xgb.cv(params = param, data = trainingdata, nrounds = 100, nfold = 5, showsd = TRUE, metrics = "logloss")
```

Now I want to predict on my test set, but xgbmodel seems to be a logical value (TRUE in this case).
How can I predict after cv? Should I use xgb.train then?
HR



All 3 comments

Yes, xgb.cv does not return the model, but the cv history of the process, since in cv we are training n models to evaluate the result.

A normal use case of cv is to select parameters: usually you use cv to find good parameters, then use xgb.train to train the model on the entire dataset.
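The workflow described above can be sketched as follows. This is a minimal example with made-up toy data; the parameter values are illustrative, and the `evaluation_log` field assumes a reasonably recent version of the xgboost R package (older versions exposed the cv history differently):

```r
library(xgboost)

# Toy binary-classification data (illustrative only)
set.seed(42)
x <- matrix(rnorm(500 * 10), nrow = 500)
y <- as.numeric(rowSums(x[, 1:3]) > 0)
dtrain <- xgb.DMatrix(data = x, label = y)

param <- list(objective = "binary:logistic", max_depth = 3, eta = 0.1)

# Step 1: cross-validate to evaluate the parameters and pick nrounds
cvres <- xgb.cv(params = param, data = dtrain, nrounds = 100, nfold = 5,
                showsd = TRUE, metrics = "logloss", verbose = FALSE)
best_nrounds <- which.min(cvres$evaluation_log$test_logloss_mean)

# Step 2: retrain on the entire dataset with the chosen number of rounds
model <- xgb.train(params = param, data = dtrain, nrounds = best_nrounds)

# Step 3: predict with the trained model (here on the training matrix,
# in practice on an xgb.DMatrix built from the test set)
preds <- predict(model, dtrain)
```

The key point is that xgb.cv is used only for evaluation; the object you call predict() on comes from xgb.train.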


OK, it is clearer now.


Hi,

There is a parameter prediction=TRUE in xgb.cv, which returns the predictions of the cv folds. But it is not clear from the documentation for which nround the predictions are returned.
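For reference, a small sketch of prediction=TRUE, assuming a recent version of the xgboost R package. As I understand it, the returned predictions are out-of-fold values from the final boosting round (or from the best iteration when early stopping is used):

```r
library(xgboost)

set.seed(1)
x <- matrix(rnorm(200 * 5), nrow = 200)
y <- as.numeric(x[, 1] + rnorm(200) > 0)
dtrain <- xgb.DMatrix(data = x, label = y)

# prediction = TRUE adds a $pred field: one out-of-fold prediction per
# training row, produced by the model of the fold that held that row out.
cvres <- xgb.cv(params = list(objective = "binary:logistic", max_depth = 2),
                data = dtrain, nrounds = 50, nfold = 5,
                prediction = TRUE, verbose = FALSE)

head(cvres$pred)  # out-of-fold probabilities, aligned with rows of dtrain
```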
