Problem
The stacked ensembler currently has no way to customize its CV. IterativeAlgorithm calls the _make_stacked_ensembler util, but the data splitter from the current automl search is not passed through. The stacked ensembler's default data splitter also does not set shuffle=True, which can degrade performance when the input dataset is ordered. The same goes for other parameters such as n_folds, which don't pick up the automl settings either. This isn't ideal. This discrepancy also prevents us from supporting sklearn 0.24.0; fixing this issue would let us support that version.
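As a quick illustration of the shuffle concern (plain scikit-learn, not evalml code): when the rows are ordered by class, an unshuffled 3-fold split trains and evaluates on very different class distributions, while shuffle=True avoids that.

import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import KFold, cross_val_score

X, y = make_classification(n_samples=300, random_state=0)
order = np.argsort(y)              # simulate an ordered dataset: all class-0 rows first
X, y = X[order], y[order]

clf = LogisticRegression(max_iter=1000)
unshuffled = cross_val_score(clf, X, y, cv=KFold(n_splits=3, shuffle=False))
shuffled = cross_val_score(clf, X, y, cv=KFold(n_splits=3, shuffle=True, random_state=0))
print(unshuffled.mean(), shuffled.mean())  # unshuffled folds can be dominated by one class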
Fix
Let's have automl pass the data splitter through to the stacked ensembler via IterativeAlgorithm. A rough sketch of the intent is below.
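This is only a sketch, not actual evalml internals: wherever IterativeAlgorithm currently calls _make_stacked_ensembler, it would forward the splitter the automl search was configured with instead of letting the component build its own default. The variable names below are placeholders; the cv argument on StackedEnsembleClassifier is the one shown in the error reproduction further down.

from sklearn.model_selection import StratifiedKFold

# placeholder for whatever splitter the automl search was configured with
automl_data_splitter = StratifiedKFold(n_splits=3, shuffle=True, random_state=0)

# inside IterativeAlgorithm / _make_stacked_ensembler (sketch, not real code):
# reuse the search's splitter instead of the hardcoded, unshuffled default
ensembler = StackedEnsembleClassifier(
    input_pipelines=best_input_pipelines,  # placeholder: pipelines selected so far
    cv=automl_data_splitter,
)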
@angela97lin Does my explanation make sense? / Was there a reason you chose not to do this when you set up stacking? :)
@dsherry Yes, I think your explanation makes sense! IIRC, when I set up stacking I wanted the defaults to avoid making too many choices about whether we're trying to boost performance or to run stacking faster. Hence the self._default_cv(n_splits=3, random_state=random_state) line: the default splitter is whatever scikit-learn specifies, and n_splits is hardcoded to 3.
Digging into this a bit more, I tried passing the data splitting method used in AutoML to the stacked ensemble component, but ran into this issue (after handling the API updates needed to get the TrainingValidationSplit class working):
estimator = WrappedSKClassifier(pipeline=LogisticRegressionBinaryPipeline(parameters={'Imputer':{'categorical_impute_strategy': 'm...Logistic Regression Classifier':{'penalty': 'l2', 'C': 1.0, 'n_jobs': -1, 'multi_class': 'auto', 'solver': 'lbfgs'},}))
X = 0 1 2 3 4
0 0.965469 0.041236 0.028701 0.659165 0.213375
1 0.043831...978 0.079577
48 0.376344 0.920154 0.314640 0.180086 0.197598
49 0.682661 0.046529 0.400513 0.412513 0.751464
y = array([1, 0, 1, 0, 1, 0, 1, 0, 1, 0, 1, 0, 1, 0, 1, 0, 1, 0, 1, 0, 1, 0,
1, 0, 1, 0, 1, 0, 1, 0, 1, 0, 1, 0, 1, 0, 1, 0, 1, 0, 1, 0, 1, 0,
1, 0, 1, 0, 1, 0])
@_deprecate_positional_args
def cross_val_predict(estimator, X, y=None, *, groups=None, cv=None,
n_jobs=None, verbose=0, fit_params=None,
pre_dispatch='2*n_jobs', method='predict'):
"""Generate cross-validated estimates for each input data point
The data is split according to the cv parameter. Each sample belongs
to exactly one test set, and its prediction is computed with an
estimator fitted on the corresponding training set.
Passing these predictions into an evaluation metric may not be a valid
way to measure generalization performance. Results can differ from
:func:`cross_validate` and :func:`cross_val_score` unless all tests sets
have equal size and the metric decomposes over samples.
Read more in the :ref:`User Guide <cross_validation>`.
Parameters
----------
estimator : estimator object implementing 'fit' and 'predict'
The object to use to fit the data.
X : array-like of shape (n_samples, n_features)
The data to fit. Can be, for example a list, or an array at least 2d.
y : array-like of shape (n_samples,) or (n_samples, n_outputs), \
default=None
The target variable to try to predict in the case of
supervised learning.
groups : array-like of shape (n_samples,), default=None
Group labels for the samples used while splitting the dataset into
train/test set. Only used in conjunction with a "Group" :term:`cv`
instance (e.g., :class:`GroupKFold`).
cv : int, cross-validation generator or an iterable, default=None
Determines the cross-validation splitting strategy.
Possible inputs for cv are:
- None, to use the default 5-fold cross validation,
- int, to specify the number of folds in a `(Stratified)KFold`,
- :term:`CV splitter`,
- An iterable yielding (train, test) splits as arrays of indices.
For int/None inputs, if the estimator is a classifier and ``y`` is
either binary or multiclass, :class:`StratifiedKFold` is used. In all
other cases, :class:`KFold` is used.
Refer :ref:`User Guide <cross_validation>` for the various
cross-validation strategies that can be used here.
.. versionchanged:: 0.22
``cv`` default value if None changed from 3-fold to 5-fold.
n_jobs : int, default=None
Number of jobs to run in parallel. Training the estimator and
predicting are parallelized over the cross-validation splits.
``None`` means 1 unless in a :obj:`joblib.parallel_backend` context.
``-1`` means using all processors. See :term:`Glossary <n_jobs>`
for more details.
verbose : int, default=0
The verbosity level.
    fit_params : dict, default=None
Parameters to pass to the fit method of the estimator.
pre_dispatch : int or str, default='2*n_jobs'
Controls the number of jobs that get dispatched during parallel
execution. Reducing this number can be useful to avoid an
explosion of memory consumption when more jobs get dispatched
than CPUs can process. This parameter can be:
- None, in which case all the jobs are immediately
created and spawned. Use this for lightweight and
fast-running jobs, to avoid delays due to on-demand
spawning of the jobs
- An int, giving the exact number of total jobs that are
spawned
- A str, giving an expression as a function of n_jobs,
as in '2*n_jobs'
method : {'predict', 'predict_proba', 'predict_log_proba', \
'decision_function'}, default='predict'
The method to be invoked by `estimator`.
Returns
-------
predictions : ndarray
This is the result of calling `method`. Shape:
- When `method` is 'predict' and in special case where `method` is
'decision_function' and the target is binary: (n_samples,)
- When `method` is one of {'predict_proba', 'predict_log_proba',
'decision_function'} (unless special case above):
(n_samples, n_classes)
- If `estimator` is :term:`multioutput`, an extra dimension
'n_outputs' is added to the end of each shape above.
See Also
--------
cross_val_score : Calculate score for each CV split.
cross_validate : Calculate one or more scores and timings for each CV
split.
Notes
-----
In the case that one or more classes are absent in a training portion, a
default score needs to be assigned to all instances for that class if
``method`` produces columns per class, as in {'decision_function',
'predict_proba', 'predict_log_proba'}. For ``predict_proba`` this value is
0. In order to ensure finite output, we approximate negative infinity by
the minimum finite float value for the dtype in other cases.
Examples
--------
>>> from sklearn import datasets, linear_model
>>> from sklearn.model_selection import cross_val_predict
>>> diabetes = datasets.load_diabetes()
>>> X = diabetes.data[:150]
>>> y = diabetes.target[:150]
>>> lasso = linear_model.Lasso()
>>> y_pred = cross_val_predict(lasso, X, y, cv=3)
"""
X, y, groups = indexable(X, y, groups)
cv = check_cv(cv, y, classifier=is_classifier(estimator))
splits = list(cv.split(X, y, groups))
test_indices = np.concatenate([test for _, test in splits])
if not _check_is_permutation(test_indices, _num_samples(X)):
> raise ValueError('cross_val_predict only works for partitions')
E ValueError: cross_val_predict only works for partitions
../venv/lib/python3.7/site-packages/sklearn/model_selection/_validation.py:845: ValueError
This is the error that comes up when trying to call the following:
clf = StackedEnsembleClassifier(input_pipelines=[logistic_regression_binary_pipeline_class(parameters={})], cv=TrainingValidationSplit())
clf.fit(X, y)
The reason is that scikit-learn validates that the cv it receives is actually a cross-validation method. A single split like TrainingValidationSplit doesn't satisfy that check: with only one split, some of the data never appears in any test set.
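Here's a minimal scikit-learn-only sketch of that check: cross_val_predict requires every sample to land in exactly one test fold, which a single train/validation split cannot satisfy.

import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_predict

X, y = make_classification(n_samples=50, random_state=0)

# stand-in for TrainingValidationSplit: a single (train, validation) split,
# so the last 10 rows never appear in any test set
single_split = [(np.arange(0, 40), np.arange(40, 50))]

try:
    cross_val_predict(LogisticRegression(max_iter=1000), X, y, cv=single_split)
except ValueError as err:
    print(err)  # cross_val_predict only works for partitions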
So for now, I think the best plan is to support scikit-learn 0.24 and just set shuffle=True on the default cv. If we decide passing the automl splitter through is worth doing, we can revisit it later. Thoughts, @dsherry?
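For reference, a small sketch of what that default could look like, assuming the component keeps its hardcoded n_splits=3 and a stratified splitter (not the actual evalml implementation):

from sklearn.model_selection import StratifiedKFold

def _default_cv(n_splits=3, random_state=0):
    # keep the hardcoded 3 splits but shuffle so ordered input data doesn't
    # produce skewed folds; random_state keeps the folds reproducible
    return StratifiedKFold(n_splits=n_splits, shuffle=True, random_state=random_state)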