We have received several requests about supporting monotone constraints on certain features with respect to the output.

That is, when the other features are fixed, force the prediction to be monotonically increasing with respect to the specified feature. I am opening this issue to gauge general interest in this feature; if there is enough interest, we can add it.

We will need help from community volunteers to test the feature and to provide documentation and a tutorial on using it. Please reply to this issue if you are interested.
An experimental version is available in https://github.com/dmlc/xgboost/pull/1516 and at https://github.com/tqchen/xgboost.

Turn on the following option (this is possible via the Python API, and may be possible via the R API as well):
monotone_constraints = "(0,1,1,0)"
Two notes:

monotone_constraints
is a list whose length is the number of features: 1 indicates a monotone increase, -1 means a decrease, and 0 means no constraint. If the list is shorter than the number of features, it is padded with 0s. Currently this is only supported by the exact greedy algorithm on multicore; it is not yet available in the distributed version.
@tqchen Just today at work I was asked to build some GBMs with monotone constraints, to test against the performance of some other models. These would involve a tweedie deviance loss, so I would need to use the current custom loss function.

It seems like a good chance for me to help out with this while getting some of my own work done at the same time.
Based on the conversation here, GBM (the R package) only enforces monotonicity locally.

Could you clarify how XGBoost enforces the monotone constraints?

It would be great if XGBoost could enforce the constraint globally.

I'm not sure what you mean by local versus global constraints; could you elaborate?
Sorry, I pasted the wrong link. This is the correct one: (link)
Each tree may obey the monotone constraint only on a particular subset of the range of the feature of interest, so when many trees are ensembled together, an overall violation of monotonicity across the full range of that feature can occur.
OK, my understanding is that this is enforced globally. Please give it a try.
I ran some simple tests of the monotonicity constraints in the context of univariate regression. The code and some very simple documentation are here:

Some initial observations:
I found that a bug occurs when the constraint is -1 and pushed a fix. Please check whether the latest version works correctly, and also confirm that it works when there are multiple constraints.
@tqchen I pulled your fix for the decreasing bug, and it appears to be working now.

Let's also check whether there is a speed regression compared to the original version on some standard datasets, and then we can merge it.
@tqchen I tested a two-variable model, with an increasing constraint on one variable and a decreasing constraint on the other.
params_constrained = params.copy()
params_constrained['updater'] = "grow_monotone_colmaker,prune"
params_constrained['monotone_constraints'] = "(1,-1)"
The results look good:

I think I can find some time this afternoon to run timing tests.
I updated #1516 to allow automatic detection of the monotone option. The user now only needs to pass monotone_constraints = "(0,1,1,0)". Please check whether it works.
Once the speed tests are done, let's merge it, add a tutorial, and move on to the next stage.
@madrury @XiaoxiaoWang87
I also added tests for the multivariate case:
no constraint: 964.9 microseconds per iteration
with constraint: 861.7 microseconds per iteration
(Please comment if there is a better way to run the speed tests.)
Check failed: (wleft) <= (wright)
This check failure causes the code to crash.

I ran some timing experiments in a Jupyter notebook.
First test: some simple simulated data. There are two features, one increasing and one decreasing, but each has a small sine wave superimposed, so neither feature is truly monotone.
import numpy as np
from math import pi

# N samples, K features (values assumed; not given in the original post)
N, K = 1000, 2
X = np.random.random(size=(N, K))
y = (5*X[:, 0] + np.sin(5*2*pi*X[:, 0])
     - 5*X[:, 1] - np.cos(5*2*pi*X[:, 1])
     + np.random.normal(loc=0.0, scale=0.01, size=N))
The timing results for xgboost with and without monotone constraints are below. I turned off early stopping and increased the number of boosting iterations for these runs.

First, without the monotone constraints:
%%timeit -n 100
model_no_constraints = xgb.train(params, dtrain,
num_boost_round = 2500,
verbose_eval = False)
100 loops, best of 3: 246 ms per loop
And here with the monotonicity constraints:
%%timeit -n 100
model_with_constraints = xgb.train(params_constrained, dtrain,
num_boost_round = 2500,
verbose_eval = False)
100 loops, best of 3: 196 ms per loop
Second test: the California housing data from sklearn. Without constraints:
%%timeit -n 10
model_no_constraints = xgb.train(params, dtrain,
num_boost_round = 2500,
verbose_eval = False)
10 loops, best of 3: 5.9 s per loop
Here are the constraints I used:
print(params_constrained['monotone_constraints'])
(1,1,1,0,0,1,0,0)
And timing the model with constraints:
%%timeit -n 10
model_with_constraints = xgb.train(params_constrained, dtrain,
                                   num_boost_round = 2500,
                                   verbose_eval = False)
10 loops, best of 3: 6.08 s per loop
@XiaoxiaoWang87 I pushed another PR to relax the check on wleft and wright; please confirm that it works.
@madrury Could you compare against a previous version of XGBoost without the constraint feature?
@tqchen Of course. Is there a commit you would recommend comparing against? Should I use the commit from just before the monotone constraints were added?

Yes, the one just before is sufficient.
@tqchen After rebuilding the updated version, I am getting errors that did not occur before. Hopefully the reason jumps out at you as something simple.

When I try to run the same code as before, I get an exception. The full traceback is:
XGBoostError Traceback (most recent call last)
<ipython-input-14-63a9f6e16c9a> in <module>()
8 model_with_constraints = xgb.train(params, dtrain,
9 num_boost_round = 1000, evals = evallist,
---> 10 early_stopping_rounds = 10)
/Users/matthewdrury/anaconda/lib/python2.7/site-packages/xgboost-0.6-py2.7.egg/xgboost/training.pyc in train(params, dtrain, num_boost_round, evals, obj, feval, maximize, early_stopping_rounds, evals_result, verbose_eval, learning_rates, xgb_model, callbacks)
201 evals=evals,
202 obj=obj, feval=feval,
--> 203 xgb_model=xgb_model, callbacks=callbacks)
204
205
/Users/matthewdrury/anaconda/lib/python2.7/site-packages/xgboost-0.6-py2.7.egg/xgboost/training.pyc in _train_internal(params, dtrain, num_boost_round, evals, obj, feval, xgb_model, callbacks)
72 # Skip the first update if it is a recovery step.
73 if version % 2 == 0:
---> 74 bst.update(dtrain, i, obj)
75 bst.save_rabit_checkpoint()
76 version += 1
/Users/matthewdrury/anaconda/lib/python2.7/site-packages/xgboost-0.6-py2.7.egg/xgboost/core.pyc in update(self, dtrain, iteration, fobj)
804
805 if fobj is None:
--> 806 _check_call(_LIB.XGBoosterUpdateOneIter(self.handle, iteration, dtrain.handle))
807 else:
808 pred = self.predict(dtrain)
/Users/matthewdrury/anaconda/lib/python2.7/site-packages/xgboost-0.6-py2.7.egg/xgboost/core.pyc in _check_call(ret)
125 """
126 if ret != 0:
--> 127 raise XGBoostError(_LIB.XGBGetLastError())
128
129
XGBoostError: [14:08:41] src/tree/tree_updater.cc:18: Unknown tree updater grow_monotone_colmaker
When I switch everything over to the keyword argument as implemented, I get an error:
TypeError Traceback (most recent call last)
<ipython-input-15-ef7671f72925> in <module>()
8 monotone_constraints="(1)",
9 num_boost_round = 1000, evals = evallist,
---> 10 early_stopping_rounds = 10)
TypeError: train() got an unexpected keyword argument 'monotone_constraints'
I removed the updater argument and kept the monotone constraints argument in the parameters. Now the monotone constraints updater is automatically activated when monotone constraints are specified.
@tqchen My colleague @amontz and I figured it out right after I posted that message; I had been passing monotone_constraints as a kwarg to .train.

It works with those adjustments. Thanks.
@madrury Can you check the speed?
Also, @madrury and @XiaoxiaoWang87: we cannot bring ipy notebook files directly into the main repo. However, you can push the images to https://github.com/dmlc/web-data/tree/master/xgboost and add a markdown version to the main repo.

We also need to change the string conversion in the frontend interface, so that a tuple of ints is accepted and converted into the string tuple format that the backend accepts.
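A sketch of the kind of frontend conversion being discussed, assuming the backend keeps accepting the string tuple format (the helper name here is hypothetical):

```python
def constraints_to_string(constraints):
    """Convert an iterable of ints such as (1, -1, 0) into the
    "(1,-1,0)" string format that the backend accepts."""
    return '(' + ','.join(str(int(c)) for c in constraints) + ')'

print(constraints_to_string((0, 1, 1, 0)))  # prints (0,1,1,0)
```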
@hetong007 for the R change, and @slundberg for Julia.
@tqchen Julia is currently pinned to the 0.4 version of XGBoost, so it needs to be bumped the next time I get a block of time. If no one else gets to it by then, I will update the bindings and can add this change at that point.
Here is a comparison between models without monotone constraints from before and after the implementation.

Commit 8cac37 (before implementing the monotone constraints):

Simulated data: 100 loops, best of 3: 232 ms per loop
California data: 10 loops, best of 3: 5.89 s per loop

Commit b1c224 (after implementing the monotone constraints):

Simulated data: 100 loops, best of 3: 231 ms per loop
California data: 10 loops, best of 3: 5.61 s per loop
The speedup on the California data after the implementation seemed suspicious to me, so I ran it a couple more times; it is consistent.

Please try writing a tutorial. I will look at the existing documentation and put something together over the next few days.
That is great. The PR is now officially merged into master. Looking forward to the tutorial.
Thanks @madrury. I am looking forward to it as well. Let me know if there is anything I can do; I would definitely like to study this topic more.
I will dig in more tomorrow. I would just like to know why we communicate with the C++ side via a string rather than an array.
I'm testing from R, randomly generating two-variable data and trying out the predictions. However, I found that the predictions differ depending on monotone_constraints. Please point it out if I have made a mistake.

Code to reproduce (tested on the latest github version, not the one from drat):
set.seed(1024)
x1 = rnorm(1000, 10)
x2 = rnorm(1000, 10)
y = -1*x1 + rnorm(1000, 0.001) + 3*sin(x2)
train = cbind(x1, x2)
bst = xgboost(data = train, label = y, max_depth = 2,
eta = 0.1, nthread = 2, nrounds = 10,
monotone_constraints = '(1,-1)')
pred = predict(bst, train)
ind = order(train[,1])
pred.ord = pred[ind]
plot(train[,1], y, main = 'with constraint')
pred.ord = pred[order(train[,1])]
lines(pred.ord)
bst = xgboost(data = train, label = y, max_depth = 2,
eta = 0.1, nthread = 2, nrounds = 10)
pred = predict(bst, train)
ind = order(train[,1])
pred.ord = pred[ind]
plot(train[,1], y, main = 'without constraint')
pred.ord = pred[order(train[,1])]
lines(pred.ord)
The constraint is applied with respect to each coordinate. So the constraint only applies when you move along the monotone axis while keeping the other axes fixed.
@hetong007 To make my plots, I scan the feature of interest with seq and fix the other features at their colMeans. The Python code I used for the plots above should be fairly easy to convert into equivalent R code.
def plot_one_feature_effect(model, X, y, idx=1):
    x_scan = np.linspace(0, 1, 100)
    X_scan = np.empty((100, X.shape[1]))
    X_scan[:, idx] = x_scan

    # Fix every other feature at its mean value.
    left_feature_means = np.tile(X[:, :idx].mean(axis=0), (100, 1))
    right_feature_means = np.tile(X[:, (idx+1):].mean(axis=0), (100, 1))
    X_scan[:, :idx] = left_feature_means
    X_scan[:, (idx+1):] = right_feature_means

    X_plot = xgb.DMatrix(X_scan)
    y_plot = model.predict(X_plot, ntree_limit=model.best_ntree_limit)
    plt.plot(x_scan, y_plot, color='black')
    plt.plot(X[:, idx], y, 'o', alpha=0.25)
Here is how I make partial dependence plots (for an arbitrary model):

Code:
def plot_partial_dependency(bst, X, y, f_id):
    X_temp = X.copy()
    x_scan = np.linspace(np.percentile(X_temp[:, f_id], 0.1),
                         np.percentile(X_temp[:, f_id], 99.5), 50)
    y_partial = []
    for point in x_scan:
        # Set the feature of interest to a fixed value for every row
        # and average the predictions over the dataset.
        X_temp[:, f_id] = point
        dpartial = xgb.DMatrix(X_temp)
        y_partial.append(np.average(bst.predict(dpartial)))
    y_partial = np.array(y_partial)

    # Plot partial dependence
    fig, ax = plt.subplots()
    fig.set_size_inches(5, 5)
    plt.subplots_adjust(left=0.17, right=0.94, bottom=0.15, top=0.9)
    ax.plot(x_scan, y_partial, '-', color='black', linewidth=1)
    ax.plot(X[:, f_id], y, 'o', color='blue', alpha=0.02)
    ax.set_xlim(min(x_scan), max(x_scan))
    ax.set_xlabel('Feature X', fontsize=10)
    ax.set_ylabel('Partial Dependence', fontsize=12)
Thanks for pointing that out! I realized I made a silly mistake in my plot. Here is another test on univariate data, which looks fine.
set.seed(1024)
x = rnorm(1000, 10)
y = -1*x + rnorm(1000, 0.001) + 3*sin(x)
train = matrix(x, ncol = 1)
bst = xgboost(data = train, label = y, max_depth = 2,
eta = 0.1, nthread = 2, nrounds = 100,
monotone_constraints = '(-1)')
pred = predict(bst, train)
ind = order(train[,1])
pred.ord = pred[ind]
plot(train[,1], y, main = 'with constraint', pch=20)
lines(train[ind,1], pred.ord, col=2, lwd = 5)
bst = xgboost(data = train, label = y, max_depth = 2,
eta = 0.1, nthread = 2, nrounds = 100)
pred = predict(bst, train)
ind = order(train[,1])
pred.ord = pred[ind]
plot(train[,1], y, main = 'without constraint', pch=20)
lines(train[ind,1], pred.ord, col=2, lwd = 5)
@hetong007 So the goal for the R interface is to let the user pass an R vector instead of a string:
monotone_constraints=c(1,-1)
Please let me know when you open the PR for the tutorial.

@hetong007 You are also welcome to make an r-bloggers version of it.
@tqchen Sorry, I was traveling for a week.

I have submitted a pull request for a tutorial on the monotone constraints. Please let me know what you think; criticism and critique are welcome.
If it works, great, but one small question: if I update using the usual git clone --recursive https://github.com/dmlc/xgboost, will this work?
I got confused when looking at the new tutorial, since I don't see anything new about the code itself. Am I missing something? Thanks to everyone!
Yes, the new feature will be merged before the tutorial is merged.
Hello,

I don't know whether it is just what I am seeing in your code, but I am not sure global monotonicity is implemented correctly; it corresponds more to local monotonicity.

Here is a simple example that breaks monotonicity:
df <- data.frame(y = c(2, rep(6, 100), 1, rep(11, 100)),
                 x1 = c(rep(1, 101), rep(2, 101)),
                 x2 = c(1, rep(2, 100), 1, rep(2, 100)))
library(xgboost)
set.seed(0)
XGB <- xgboost(data = data.matrix(df[, -1]), label = df[, 1],
               objective = "reg:linear",
               bag.fraction = 1, nround = 100, monotone_constraints = c(1, 0),
               eta = 0.1)
sans_corr <- data.frame(x1 = c(1, 2, 1, 2), x2 = c(1, 1, 2, 2))
sans_corr$prediction <- predict(XGB, data.matrix(sans_corr))
I hope I have understood your code correctly and that my example is not mistaken.
Currently this feature is missing from the sklearn API. Could you or someone else help add it? Thanks!

Is it possible to apply a general monotonicity to a variable, without specifying whether it should be increasing or decreasing?
@davidADSP You can run a Spearman correlation check between the predictor of interest and the target to see whether increasing or decreasing is appropriate.
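A sketch of that suggestion, assuming scipy is available: pick each feature's constraint sign from the sign of its Spearman rank correlation with the target (the helper is hypothetical, and on noisy data the result should be reviewed rather than applied blindly).

```python
import numpy as np
from scipy.stats import spearmanr

def suggest_constraints(X, y):
    """Suggest a monotone-constraint sign per feature from the sign of
    its Spearman rank correlation with the target."""
    signs = []
    for j in range(X.shape[1]):
        rho, _ = spearmanr(X[:, j], y)
        signs.append(1 if rho > 0 else -1)
    return '(' + ','.join(str(s) for s in signs) + ')'

rng = np.random.RandomState(0)
X = rng.random_sample((300, 2))
y = 2 * X[:, 0] - 3 * X[:, 1]
print(suggest_constraints(X, y))  # prints (1,-1)
```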
This feature seems to have no effect with 'tree_method': 'hist'. @tqchen can anything be done? Thanks.
How do the constraints work for a multiclass objective such as mlogloss? Are monotonicity constraints supported for multiclass losses? If so, how is this enforced, given that there is a tree per class?
Is there a white paper on the monotonicity algorithm implemented in XGBOOST? Is it global or local? By local I mean specific to a particular node, where nodes in other parts of the tree may cause an overall violation of monotonicity. Also, can anyone help me understand lines L412-417?