Scikit-learn: Check that all attributes are documented

Created on 12 Jul 2019  Β·  79 comments  Β·  Source: scikit-learn/scikit-learn

As described in #13385, we should check that all attributes are documented.

이 μž‘μ—…μ„ν•˜λ €λ©΄ νŠΉμ • ν•˜μœ„ λͺ¨λ“ˆμ„ μ„ νƒν•˜κ³  ν•΄λ‹Ή ν•˜μœ„ λͺ¨λ“ˆμ˜ λͺ¨λ“  속성 λ¬Έμ„œ 뢈일치λ₯Ό μˆ˜μ •ν•΄μ•Όν•©λ‹ˆλ‹€.

λ‹€μŒμ€ λ‚˜λ¨Έμ§€ ν•­λͺ©μ„ μ°ΎλŠ” μŠ€ν¬λ¦½νŠΈμž…λ‹ˆλ‹€ (잘λͺ»λœ κΈμ •μ΄μžˆμ„ 수 있음).

import numpy as np
from sklearn.base import clone
from sklearn.utils.testing import all_estimators
from sklearn.utils.estimator_checks import pairwise_estimator_convert_X, enforce_estimator_tags_y
from numpydoc import docscrape

ests = all_estimators()

for name, Est in ests:
    try:
        estimator_orig = Est()
    except Exception:
        continue
    rng = np.random.RandomState(0)
    X = pairwise_estimator_convert_X(rng.rand(40, 10), estimator_orig)
    X = X.astype(object)
    y = (X[:, 0] * 4).astype(int)  # np.int is deprecated in NumPy; use the builtin int
    est = clone(estimator_orig)
    y = enforce_estimator_tags_y(est, y)
    try:
        est.fit(X, y)
    except Exception:
        continue
    fitted_attrs = [(x, getattr(est, x, None))
                    for x in est.__dict__.keys() if x.endswith("_")
                    and not x.startswith("_")]
    doc = docscrape.ClassDoc(type(est))
    doc_attributes = []
    incorrect = []
    for att_name, type_definition, param_doc in doc['Attributes']:
        if not type_definition.strip():
            if ':' in att_name and att_name[:att_name.index(':')][-1:].strip():
                incorrect += [name +
                              ' There was no space between the param name and '
                              'colon (%r)' % att_name]
            elif name.rstrip().endswith(':'):
                incorrect += [name +
                              ' Parameter %r has an empty type spec. '
                              'Remove the colon' % (att_name.lstrip())]

        if '*' not in att_name:
            doc_attributes.append(att_name.split(':')[0].strip('` '))
    assert incorrect == []
    fitted_attrs_names = [x[0] for x in fitted_attrs]

    bad = sorted(list(set(fitted_attrs_names) ^ set(doc_attributes)))
    if len(bad) > 0:
        msg = '{}\n'.format(name) + '\n'.join(bad)
        print("Docstring Error: Attribute mismatch in " + msg)
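For a self-contained illustration of what the script checks, here is a stdlib-only sketch of the same mismatch test on a toy class (`ToyEstimator` and its attributes are invented for the example; the real script parses the docstring with numpydoc's `docscrape` rather than a regex):

```python
import re

class ToyEstimator:
    """A toy estimator.

    Attributes
    ----------
    coef_ : ndarray of shape (n_features,)
        Fitted coefficients.
    intercept_ : float
        Fitted intercept.
    """
    def fit(self):
        self.coef_ = [1.0, 2.0]
        self.intercept_ = 0.5
        self.n_iter_ = 3   # fitted but not documented
        return self

def documented_attributes(cls):
    # Pull names from the numpydoc-style "Attributes" section: lines like "name_ : type".
    doc = cls.__doc__ or ""
    section = doc.split("Attributes\n")[-1]
    return set(re.findall(r"^\s*(\w+_)\s*:", section, flags=re.M))

def fitted_attributes(est):
    # Public attributes ending with "_" that were set during fit().
    return {a for a in vars(est) if a.endswith("_") and not a.startswith("_")}

est = ToyEstimator().fit()
missing = sorted(fitted_attributes(est) - documented_attributes(ToyEstimator))
print(missing)  # ['n_iter_']
```

The full script does the same comparison for every estimator returned by `all_estimators()`, after fitting it on random data.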


Labels: Documentation, Easy, good first issue, help wanted

Most helpful comment

Missing attribute docstrings for each estimator

PRμ—μ„œμ΄ 문제λ₯Ό μ°Έμ‘°ν•˜μ„Έμš”.

  • [x] ARDRegression, [intercept_]
  • [x] AdaBoostClassifier, [base_estimator_]
  • [x] AdaBoostRegressor, [base_estimator_]
  • [x] AdditiveChi2Sampler, [sample_interval_]
  • [x] AgglomerativeClustering, [n_components_] (deprecated)
  • [x] BaggingClassifier, [n_features_]
  • [x] BaggingRegressor, [base_estimator_, n_features_]
  • [x] BayesianGaussianMixture, [mean_precision_prior, mean_precision_prior_]
  • [x] BayesianRidge, [X_offset_, X_scale_]
  • [x] BernoulliNB, [coef_, intercept_]
  • [x] BernoulliRBM, [h_samples_]
  • [] μžμž‘ λ‚˜λ¬΄, [fit_, partial_fit_]
  • [] CCA, [coef_, x_mean_, x_std_, y_mean_, y_std_]
  • [x] CheckingClassifier, [classes_]
  • [x] ComplementNB, [coef_, intercept_]
  • [x] CountVectorizer, [stop_words_, vocabulary_]
  • [] DecisionTreeRegressor, [classes_, n_classes_]
  • [x] DictVectorizer, [feature_names_, vocabulary_]
  • [] DummyClassifier, [output_2d_]
  • [] DummyRegressor, [output_2d_]
  • [] ElasticNet, [dual_gap_]
  • [] ElasticNetCV, [dual_gap_]
  • [] EllipticEnvelope, [dist_, raw_covariance_, raw_location_, raw_support_]
  • [x] ExtraTreeClassifier, [feature_importances_]
  • [] ExtraTreeRegressor, [classes_, feature_importances_, n_classes_]
  • [x] ExtraTreesClassifier, [base_estimator_]
  • [x] ExtraTreesRegressor, [base_estimator_]
  • [x] FactorAnalysis, [mean_]
  • [] FeatureAgglomeration, [n_components_]
  • [x] GaussianProcessClassifier, [base_estimator_]
  • [x] GaussianRandomProjection, [components_]
  • [x] GradientBoostingClassifier, [max_features_, n_classes_, n_features_, oob_improvement_]
  • [x] GradientBoostingRegressor, [max_features_, n_classes_, n_estimators_, n_features_, oob_improvement_]
  • [x] HistGradientBoostingClassifier, [bin_mapper_, classes_, do_early_stopping_, loss_, n_features_, scorer_]
  • [x] HistGradientBoostingRegressor, [bin_mapper_, do_early_stopping_, loss_, n_features_, scorer_]
  • [x] IncrementalPCA, [batch_size_]
  • [x] IsolationForest, [base_estimator_, estimators_features_, n_features_]
  • [x] IsotonicRegression, [X_max_, X_min_, f_]
  • [x] IterativeImputer, [random_state_]
  • [x] KNeighborsClassifier, [classes_, effective_metric_, effective_metric_params_, outputs_2d_]
  • [x] KNeighborsRegressor, [effective_metric_, effective_metric_params_]
  • [x] KernelCenterer, [K_fit_all_, K_fit_rows_]
  • [x] KernelDensity, [tree_]
  • [x] KernelPCA, [X_transformed_fit_, dual_coef_]
  • [x] LabelBinarizer, [classes_, sparse_input_, y_type_]
  • [x] LabelEncoder, [classes_]
  • [x] LarsCV, [active_]
  • [x] Lasso, [dual_gap_]
  • [x] LassoLarsCV, [active_]
  • [x] LassoLarsIC, [alphas_]
  • [x] LatentDirichletAllocation, [bound_, doc_topic_prior_, exp_dirichlet_component_, random_state_, topic_word_prior_]
  • [x] LinearDiscriminantAnalysis, [covariance_]
  • [x] LinearRegression, [rank_, singular_]
  • [x] LinearSVC, [classes_]
  • [x] LocalOutlierFactor, [effective_metric_, effective_metric_params_]
  • [x] MDS, [dissimilarity_matrix_, n_iter_]
  • [x] MLPClassifier, [best_loss_, loss_curve_, t_]
  • [x] MLPRegressor, [best_loss_, loss_curve_, t_]
  • [x] MinMaxScaler, [n_samples_seen_]
  • [x] MiniBatchDictionaryLearning, [iter_offset_]
  • [x] MiniBatchKMeans, [counts_, init_size_, n_iter_]
  • [x] MultiLabelBinarizer, [classes_]
  • [x] MultiTaskElasticNet, [dual_gap_, eps_, sparse_coef_]
  • [x] MultiTaskElasticNetCV, [dual_gap_]
  • [x] MultiTaskLasso, [dual_gap_, eps_, sparse_coef_]
  • [x] MultiTaskLassoCV, [dual_gap_]
  • [x] NearestCentroid, [classes_]
  • [x] NearestNeighbors, [effective_metric_, effective_metric_params_]
  • [x] NeighborhoodComponentsAnalysis, [random_state_]
  • [x] NuSVC, [class_weight_, fit_status_, probA_, probB_, shape_fit_]
  • [] NuSVR, [class_weight_, fit_status_, n_support_, probA_, probB_, shape_fit_]
  • [x] OAS, [location_]
  • [] OneClassSVM, [class_weight_, fit_status_, n_support_, probA_, probB_, shape_fit_]
  • [x] OneVsOneClassifier, [n_classes_]
  • [x] OneVsRestClassifier, [coef_, intercept_, n_classes_]
  • [x] OrthogonalMatchingPursuit, [n_nonzero_coefs_]
  • [] PLSCanonical, [coef_, x_mean_, x_std_, y_mean_, y_std_]
  • [x] PLSRegression, [x_mean_, x_std_, y_mean_, y_std_]
  • [] PLSSVD, [x_mean_, x_std_, y_mean_, y_std_]
  • [x] PassiveAggressiveClassifier, [loss_function_, t_]
  • [x] PassiveAggressiveRegressor, [t_]
  • [x] νΌμ…‰νŠΈλ‘ , [loss_function_]
  • [x] QuadraticDiscriminantAnalysis, [classes_, covariance_]
  • [x] RBFSampler, [random_offset_, random_weights_]
  • [] RFE, [classes_]
  • [ ] RFECV, [classes_]
  • [x] RadiusNeighborsClassifier, [classes_, effective_metric_, effective_metric_params_, outputs_2d_]
  • [x] RadiusNeighborsRegressor, [effective_metric_, effective_metric_params_]
  • [x] RandomForestClassifier, [oob_decision_function_, oob_score_]
  • [x] RandomForestRegressor, [oob_prediction_, oob_score_]
  • [x] RandomTreesEmbedding, [base_estimator_, feature_importances_, n_features_, n_outputs_, one_hot_encoder_]
  • [x] RidgeCV, [cv_values_]
  • [x] RidgeClassifier, [classes_]
  • [x] RidgeClassifierCV, [cv_values_]
  • [x] SGDClassifier, [classes_, t_]
  • [x] SGDRegressor, [average_coef_, average_intercept_]
  • [x] SVC, [class_weight_, shape_fit_]
  • [] SVR, [class_weight_, fit_status_, n_support_, probA_, probB_, shape_fit_]
  • [x] SelectKBest, [pvalues_, scores_]
  • [x] ShrunkCovariance, [shrinkage_]
  • [x] SkewedChi2Sampler, [random_offset_, random_weights_]
  • [x] SparseRandomProjection, [components_, density_]
  • [x] SpectralEmbedding, [n_neighbors_]
  • [x] TfidfVectorizer, [stop_words_, vocabulary_]

All 79 comments

I already found one or more mismatches in the attribute documentation of the NMF class description, so I think I can do some of this. I'm almost ready to propose a few changes within the decomposition and random_projection submodules.


λ‹€μŒμ„ ν¬ν•¨ν•˜λŠ” tree ν•˜μœ„ λͺ¨λ“ˆ 속성 λ¬Έμ„œ 뢈일치 문제λ₯Ό ν•΄κ²°ν•  수 μžˆμŠ΅λ‹ˆλ‹€.

  • DecisionTreeRegressor, [classes_, n_classes_]
  • ExtraTreeClassifier, [classes_, max_features_, n_classes_, n_features_, n_outputs_, tree_]
  • ExtraTreeRegressor, [classes_, max_features_, n_classes_, n_features_, n_outputs_, tree_]

I'm working on LinearRegression, [rank_, singular_].

Working on LinearSVC, [n_iter_] and LinearSVR, [n_iter_].

I'm going to work on gradient boosting, i.e.

  • GradientBoostingClassifier [base_estimator_, max_features_, n_classes_, n_features_]
  • GradientBoostingRegressor [base_estimator_, classes_, max_features_, n_estimators_, n_features_]

Never mind, I misread which attributes were missing and which were not.

It looks like the classifiers in the naive_bayes submodule also have an undocumented classes_ attribute. I've started fixing it.

I will work on TfidfVectorizer, [fixed_vocabulary_].

I will work on:

  • RandomForestClassifier, [base_estimator_]
  • RandomForestRegressor, [base_estimator_, n_classes_]
  • ExtraTreesClassifier, [base_estimator_]
  • ExtraTreesRegressor, [base_estimator_, n_classes_]

λ‚˜λŠ” μΌν•˜κ³ μžˆλ‹€ :

  • SGDClassifier, [average_coef_, average_intercept_, standard_coef_, standard_intercept_]
  • SGDRegressor, [standard_coef_, standard_intercept_]

Edit: opened an issue about changing these attributes from public to private (see #14364).

λ‚˜λŠ” μΌν•˜κ³ μžˆλ‹€ :
KernelCenterer, [K_fit_all_, K_fit_rows_]
MinMaxScaler, [n_samples_seen_]

I will work on:

  • RandomTreesEmbedding, [base_estimator_, classes_, feature_importances_, n_classes_, n_features_, n_outputs_, one_hot_encoder_]

λ˜ν•œ KNeighborsClassifier , KNeighborsRegressor 및 neighbors λͺ¨λ“ˆμ˜ λ‹€λ₯Έ ν΄λž˜μŠ€μ— 속성 λ¬Έμ„œκ°€ μ „ν˜€ μ—†μŒμ„ λ°œκ²¬ν–ˆμŠ΅λ‹ˆλ‹€. ν˜„μž¬ 2 개의 μ†μ„±μ΄μžˆλŠ” KNeighborsRegressor 쀑 :

  • effective_metric_
  • effective_metric_params_

KNeighborsClassifier ν΄λž˜μŠ€μ—λŠ” λ„€ 가지 속성이 μžˆμŠ΅λ‹ˆλ‹€.

  • classes_
  • effective_metric_
  • effective_metric_params_
  • outputs_2d_

@alexitkes Good catch. Thanks!

Working on QuadraticDiscriminantAnalysis, [classes_, covariance_].

Working on:
KNeighborsClassifier, [classes_, effective_metric_, effective_metric_params_, outputs_2d_]
RadiusNeighborsClassifier, [classes_, effective_metric_, effective_metric_params_, outputs_2d_]

Working on:
LinearSVC, [classes_]
NuSVC, [class_weight_, classes_, fit_status_, probA_, probB_, shape_fit_]
SVC, [class_weight_, classes_, shape_fit_]

Working on:

  • [] BaggingClassifier, [n_features_, oob_decision_function_, oob_score_]
  • [] BaggingRegressor, [base_estimator_, n_features_, oob_prediction_, oob_score_]
  • [] AdaBoostClassifier, [base_estimator_]
  • [] AdaBoostRegressor, [base_estimator_]

Working on:

CountVectorizer, [stop_words_, vocabulary_]
DictVectorizer, [feature_names_, vocabulary_]

Hi! I'd like to help with this issue. Can anyone tell me where to start?

Working on the functions in dict_learning.py with @spbail.

Working on LinearDiscriminantAnalysis with @olgadk7.

Working on the attribute mismatch in RidgeClassifierCV with @npatta01.

Working on DecisionTreeRegressor with @ingrid88 + @npatta01.

Working on LinearDiscriminantAnalysis with @olgadk7.

μœ„μ˜ 속성 μŠ€ν¬λ¦½νŠΈμ— λŒ€ν•΄ 였 탐지. 이것은 λ¬Έμ„œν™”λ˜μ—ˆμŠ΅λ‹ˆλ‹€.

Working on AdditiveChi2Sampler with @olgadk7.

Working on LabelEncoder with @eugeniaft.

randomtreeclassifierμ—μ„œ μž‘μ—…μ„ μ‹œλ„ν•©λ‹ˆλ‹€!

Working on Perceptron.

Working on BernoulliRBM.

Working on ExtraTreeClassifier.

Working on LabelEncoder with @eugeniaft.

LabelEncoder doesn't seem to have a mismatch; working on OneClassSVM.

트리 νšŒκ·€μžλŠ” λŒ€μ‹  클래슀λ₯Ό νκΈ°ν•΄μ•Όν•œλ‹€κ³  μƒκ°ν•©λ‹ˆλ‹€.

Working on SVR.

Working on:

  • OneVsOneClassifier, [n_classes_]
  • OneVsRestClassifier, [coef_, intercept_, n_classes_]

Working on LinearRegression, [rank_, singular_].

Working on LatentDirichletAllocation, [bound_, doc_topic_prior_, exp_dirichlet_component_, random_state_, topic_word_prior_].

Working on:
BaggingClassifier, [n_features_, oob_decision_function_, oob_score_]
BaggingRegressor, [base_estimator_, n_features_, oob_prediction_, oob_score_]

BaggingClassifier, [n_features_, oob_decision_function_, oob_score_]
BaggingRegressor, [base_estimator_, n_features_, oob_prediction_, oob_score_]
The oob_ attributes are addressed in PR #14779; n_features_ & base_estimator_ are false positives.

Working on:
AdaBoostClassifier, [base_estimator_]

μ—…λ°μ΄νŠΈ : https://github.com/scikit-learn/scikit-learn/pull/14477 μ—μ„œ 이미 μˆ˜μ •λ˜μ—ˆμŠ΅λ‹ˆλ‹€.

λ‹€μŒ μŠ€ν”„λ¦°νŠΈμ—μ΄ 문제λ₯Ό ꢌμž₯ν•˜μ§€ μ•Šκ±°λ‚˜ 훨씬 더 선별 된 버전을 μ‚¬μš©ν•΄μ„œλŠ” μ•ˆλœλ‹€κ³  μƒκ°ν•©λ‹ˆλ‹€.

In my experience from previous sprints there are still a lot of false positives, and we end up asking contributors to actually deprecate public attributes to make them private, which is arguably much harder (if it should be done at all).

ping @amueller @thomasjpfan WDYT?

λ‹€μŒ μŠ€ν”„λ¦°νŠΈμ—μ΄ 문제λ₯Ό ꢌμž₯ν•˜μ§€ μ•Šκ±°λ‚˜ 훨씬 더 선별 된 버전을 μ‚¬μš©ν•΄μ„œλŠ” μ•ˆλœλ‹€κ³  μƒκ°ν•©λ‹ˆλ‹€.

Having a generic validation tool for docstrings, like the one proposed in https://github.com/numpy/numpydoc/issues/213, would make this somewhat easier for contributors. I agree it doesn't fully address the fact that some attributes shouldn't be public but are.

TfidfVectorizer, SpectralEmbedding and SparseRandomProjection have been updated.

I was considering this as a first issue, but after randomly picking a submodule listed in the script, the only classes I found to be incorrectly documented are the PLS* classes. However, they live in the _pls_.py file, which appears to be private. Should I work on it, or find another good first issue?

μ‹€μ œ μˆ˜μ—…μ΄ κ³΅κ°œλ˜λŠ” ν•œ 자격이 μžˆμŠ΅λ‹ˆλ‹€. 곡개 μˆ˜μ—…μ€ doc/modules/classes.rst λ‚˜μ—΄λ©λ‹ˆλ‹€. PLS * ν΄λž˜μŠ€κ°€ μžˆμœΌλ―€λ‘œ 자유둭게 λ¬Έμ„œν™”ν•˜μ‹­μ‹œμ˜€.

Would it make sense to list all the attributes alphabetically? I think it would give some structure to the section and make it easier to read.

@pwalchessen λ™μ˜ν•©λ‹ˆλ‹€, 쒋은 μƒκ°μ²˜λŸΌ λ“€λ¦½λ‹ˆλ‹€. 직접 μ–ΈκΈ‰ν–ˆλ“―μ΄ λ‚˜λŠ” 그것을 ν…ŒμŠ€νŠΈμ— μΆ”κ°€ ν•  κ²ƒμž…λ‹ˆλ‹€.

These are still open and seem fairly clear:

Docstring Error: Attribute mismatch in RidgeCV
cv_values_
Docstring Error: Attribute mismatch in RidgeClassifier
classes_
Docstring Error: Attribute mismatch in RidgeClassifierCV
classes_
cv_values_
Docstring Error: Attribute mismatch in SkewedChi2Sampler
random_offset_
random_weights_
Docstring Error: Attribute mismatch in PLSCanonical
coef_
x_mean_
x_std_
y_mean_
y_std_
Docstring Error: Attribute mismatch in PLSRegression
x_mean_
x_std_
y_mean_
y_std_
Docstring Error: Attribute mismatch in PLSSVD
x_mean_
x_std_
y_mean_
y_std_
Docstring Error: Attribute mismatch in PassiveAggressiveClassifier
loss_function_
Docstring Error: Attribute mismatch in Perceptron
loss_function_
Docstring Error: Attribute mismatch in PolynomialFeatures
powers_
Docstring Error: Attribute mismatch in QuadraticDiscriminantAnalysis
covariance_
Docstring Error: Attribute mismatch in RBFSampler
random_offset_
random_weights_
Docstring Error: Attribute mismatch in RadiusNeighborsClassifier
n_samples_fit_
outlier_label_
Docstring Error: Attribute mismatch in RadiusNeighborsRegressor
n_samples_fit_
Docstring Error: Attribute mismatch in RadiusNeighborsTransformer
effective_metric_
effective_metric_params_
n_samples_fit_
Docstring Error: Attribute mismatch in ElasticNet
dual_gap_
sparse_coef_
Docstring Error: Attribute mismatch in ElasticNetCV
dual_gap_
Docstring Error: Attribute mismatch in EllipticEnvelope
dist_
raw_covariance_
raw_location_
raw_support_

And more...

Updated list of outstanding attributes that need to be added:

  • [] λ² μ΄μ§€μ•ˆ GaussianMixture

    • [x] mean_precision_prior

    • [] mean_precision_prior_

  • [] λ² μ΄μ§€μ•ˆ 릿지

    • [] X_ μ˜€ν”„μ…‹ _

    • [] X_ μŠ€μΌ€μΌ _

  • [] BernoulliNB

    • [] coef_ λ°°μ—΄

    • [] intercept_

  • [] μžμž‘ λ‚˜λ¬΄

    • [] fit_

    • [] partial_fit_

  • [] CCA

    • [] coef_ λ°°μ—΄, λͺ¨μ–‘ (1, n_features) λ˜λŠ” (n_classes, n_features); κ²°μ • ν•¨μˆ˜μ˜ κΈ°λŠ₯ κ³„μˆ˜μž…λ‹ˆλ‹€.

    • [] x_mean_ : λ°°μ—΄, λͺ¨μ–‘ (n_features,) κΈ°λŠ₯에 λŒ€ν•œ ν‰κ· μž…λ‹ˆλ‹€.

    • [] x_std_

    • [] y_mean_

    • [] y_std_

  • [x] CategoricalNB

    • [x] 클래슀 _ (클래슀 _ : λ°°μ—΄, λͺ¨μ–‘ (n_ 클래슀,)

      λΆ„λ₯˜ μžμ—κ²Œ μ•Œλ €μ§„ 클래슀 λ ˆμ΄λΈ” λͺ©λ‘μž…λ‹ˆλ‹€.

  • [] ComplementNB

    • [] coef_ : λ°°μ—΄, λͺ¨μ–‘ (1, n_features) λ˜λŠ” (n_classes, n_features); κ²°μ • ν•¨μˆ˜μ˜ κΈ°λŠ₯ κ³„μˆ˜μž…λ‹ˆλ‹€.

    • [] intercept_

  • [x] CountVectorizer

    • [x] 쀑지 _ 단어 _

    • [x] μ–΄νœ˜ _

  • [x] DecisionTreeClassifier

    • [x] feature_importances_

  • [] DecisionTreeRegressor

    • [] 클래슀 _ : λ°°μ—΄ ν˜•, λͺ¨μ–‘ (n_ 클래슀,); 고유 ν•œ 클래슀 λ ˆμ΄λΈ”

    • [] n_classes_ : int; 고유 ν•œ 클래슀 λ ˆμ΄λΈ” 수

    • [x] feature_importances_

  • [] DictVectorizer

    • [] feature_names_

    • [] μ–΄νœ˜ _

  • [] DummyClassifier

    • [] output_2d_

  • [] DummyRegressor

    • [] output_2d_

  • [] ElasticNet

    • [] dual_gap_

    • [] sparse_coef_

  • [] ElasticNetCV

    • [] dual_gap_

  • [] EllipticEnvelope

    • [] dist_

    • [] μ›μ‹œ _ 곡뢄산 _

    • [] μ›μ‹œ _ μœ„μΉ˜ _

    • [] raw_support_

  • [] ExtraTreeClassifier

    • [] feature_importances_

  • [] ExtraTreeRegressor

    • [] 클래슀 _ : λ°°μ—΄ ν˜•, λͺ¨μ–‘ (n_ 클래슀,); 고유 ν•œ 클래슀 λ ˆμ΄λΈ”

    • [] feature_importances_

    • [] n_classes_ : int; 고유 ν•œ 클래슀 λ ˆμ΄λΈ” 수

  • [] νŠΉμ§• 응집

    • [] n_ ꡬ성 μš”μ†Œ _

    • [x] 거리 _

  • [] GaussianProcessClassifier

    • [] base_estimator_

    • [x] 컀널 _

  • [x] GaussianRandomProjection

    • [x] ꡬ성 μš”μ†Œ _

  • [] GradientBoostingClassifier

    • [] max_features_

    • [] n_classes_ : int; 고유 ν•œ 클래슀 μˆ˜μž…λ‹ˆλ‹€.

    • [] n_features_ : int; μ‚¬μš© 된 κΈ°λŠ₯의 μˆ˜μž…λ‹ˆλ‹€.

    • [x] oob_improvement_

    • [x] feature_importances_

  • [] GradientBoostingRegressor

    • [] max_features_

    • [] n_classes_ : int; 고유 ν•œ 클래슀 μˆ˜μž…λ‹ˆλ‹€.

    • [] n_estimators_

    • [] n_features_ : int; μ‚¬μš© 된 κΈ°λŠ₯의 μˆ˜μž…λ‹ˆλ‹€.

    • [x] oob_improvement_

    • [x] feature_importances_

  • [] HistGradientBoostingClassifier

    • [] bin_mapper_

    • [ ] 클래슀_

    • [] do_early_stopping_

    • [ ] 손싀_

    • [] n_features_ : int; μ„ νƒν•œ κΈ°λŠ₯의 μˆ˜μž…λ‹ˆλ‹€.

    • [x] n_iter_

    • [] λ“μ μž _

  • [] HistGradientBoostingRegressor

    • [] bin_mapper_

    • [] do_early_stopping_

    • [ ] 손싀_

    • [] n_features_ : int; μ„ νƒν•œ κΈ°λŠ₯의 μˆ˜μž…λ‹ˆλ‹€.

    • []

    • [] λ“μ μž _

  • [] IncrementalPCA

    • [] 배치 _ 크기 _

  • [] IsolationForest

    • [] base_estimator_

    • [] estimators_features_

    • [x] estimators_samples_

    • [] n_features_ : int; μ„ νƒν•œ κΈ°λŠ₯의 μˆ˜μž…λ‹ˆλ‹€.

  • [] KernelCenterer

    • [] K_fit_all_

    • [] K_fit_rows_

  • [] 컀널 밀도

    • [] 트리 _

  • [] LarsCV

    • [] ν™œμ„± _

  • [] μ˜¬κ°€λ―Έ

    • [] dual_gap_

    • [x] 슀파 슀 _ μ½”ν”„ _

  • [] LassoLarsCV

    • [] ν™œμ„± _

  • [] LassoLarsIC

    • [] alphas_

  • [] LatentDirichletAllocation

    • [x] 경계 _

    • [x] doc_topic_prior_

    • [] exp_dirichlet_component_

    • [] random_state_

  • [] LocalOutlierFactor

    • [] 효과적인 _ λ©”νŠΈλ¦­ _

    • [] effective_metric_params_

    • [] n_samples_fit_ : int; ν”ΌνŒ… 된 λ°μ΄ν„°μ˜ μƒ˜ν”Œ μˆ˜μž…λ‹ˆλ‹€.

  • [] MDS

    • [] μœ μ‚¬μ„± _ 맀트릭슀 _

    • [] n_iter_ : int; 반볡 횟수.

  • [] MLPClassifier

    • [] best_loss_

    • [] 손싀 _ 곑선 _

    • [] t_

  • [] MLPRegressor

    • [] best_loss_

    • [] 손싀 _ 곑선 _

    • [] t_

  • [] MiniBatchKMeans

    • [] 카운트 _

    • [] init_size_

    • [] n_iter_ : int; 반볡 횟수.

  • [] MultiTaskElasticNet

    • [] dual_gap_

    • [ ] μ£Όλ‹Ή 순 이읡_

    • [] sparse_coef_

  • [] MultiTaskElasticNetCV

    • [] dual_gap_

  • [] MultiTaskLasso

    • [] dual_gap_

    • [ ] μ£Όλ‹Ή 순 이읡_

    • [] sparse_coef_

  • [] MultiTaskLassoCV

    • [] dual_gap_

  • [] OAS

    • [] μœ„μΉ˜ _

  • [] OneVsRestClassifier

    • [] coef_ : λ°°μ—΄, λͺ¨μ–‘ (1, n_features) λ˜λŠ” (n_classes, n_features); κ²°μ • ν•¨μˆ˜μ˜ κΈ°λŠ₯ κ³„μˆ˜μž…λ‹ˆλ‹€.

    • [] intercept_

    • [] n_classes_ : int; 고유 ν•œ 클래슀 μˆ˜μž…λ‹ˆλ‹€.

  • [] OrthogonalMatchingPursuit

    • [] n_nonzero_coefs_

  • [] PLSCanonical

    • [] coef_ : λ°°μ—΄, λͺ¨μ–‘ (1, n_features) λ˜λŠ” (n_classes, n_features); κ²°μ • ν•¨μˆ˜μ˜ κΈ°λŠ₯ κ³„μˆ˜μž…λ‹ˆλ‹€.

    • [] x_mean_ : 뢀동 ???; 평균

    • [] x_std_

    • [] y_mean_

    • [] y_std_

  • [] PLS νšŒκ·€

    • [] x_mean_

    • [] x_std_

    • [] y_mean_

    • [] y_std_

  • [] PLSSVD

    • [] x_mean_

    • [] x_std_

    • [] y_mean_

    • [] y_std_

  • [] PassiveAggressiveClassifier

    • [] 손싀 _ ν•¨μˆ˜ _

  • [] RBF μƒ˜ν”ŒλŸ¬

    • [] random_offset_

    • [] random_weights_

  • [] ShrunkCovariance

    • [] μˆ˜μΆ•

  • [] SkewedChi2Sampler

    • [] random_offset_

    • [] random_weights_

  • [] _BaseRidgeCV

    • [] μ•ŒνŒŒ _

    • [] coef_

    • [] intercept_

  • [] _ConstantPredictor

    • [] y_

  • [] _RidgeGCV

    • [] μ•ŒνŒŒ _

    • [] coef_

    • [] dual_coef_

    • [] intercept_

I'm going to add feature_importances_ to the ExtraTreeRegressor documentation.

My data science study group and I will start working on the attribute documentation for BayesianRidge, [X_offset_, X_scale_].

Hi, our contributor group will be working on:

  • PLSSVD
  • CCA
  • IncrementalPCA
  • MiniBatchKMeans
  • Lasso

Potential fixes in #16826

The test was added in #16286.
There are a few classes that are currently skipped:
https://github.com/scikit-learn/scikit-learn/blob/753da1de06a764f264c3f5f4817c9190dbe5e021/sklearn/tests/test_docstring_parameters.py#L180

이듀 쀑 μΌλΆ€μ—λŠ” 이미 PR이 μžˆμœΌλ―€λ‘œ μž‘μ—…μ„ μ‹œμž‘ν•˜κΈ° 전에 ν™•μΈν•΄μ•Όν•©λ‹ˆλ‹€.

이듀 쀑 μΌλΆ€μ—λŠ” 이미 PR이 μžˆμœΌλ―€λ‘œ μž‘μ—…μ„ μ‹œμž‘ν•˜κΈ° 전에 ν™•μΈν•΄μ•Όν•©λ‹ˆλ‹€.

쒋은 μ˜΅μ…˜μ€ λ³‘ν•©λ˜μ§€ μ•Šμ€ 곡개 PR을 ν™•μΈν•˜κ³  μ™„λ£Œν•˜λŠ” κ²ƒμž…λ‹ˆλ‹€.

In general, if a PR has seen no activity for 2-3 weeks, it's fine to take it over and try to finish it.

μ΄λŸ¬ν•œ μ†”λ£¨μ…˜μ— κ΄€μ‹¬μ΄μžˆλŠ” 경우 맀개 λ³€μˆ˜κ°€ λͺ¨λ‘ λ¬Έμ„œν™”λ˜μ—ˆλŠ”μ§€ μ—¬λΆ€λ₯Ό ν™•μΈν•˜λŠ” μŠ€ν•‘ν¬μŠ€ ν™•μž₯을 κ΅¬ν˜„ν•˜λŠ” 방법이 μžˆμŠ΅λ‹ˆλ‹€ (예 : https://github.com/sdpython/pyquickhelper/blob μ°Έμ‘°). /master/src/pyquickhelper/sphinxext/sphinx_docassert_extension.py). scikit-learn λ¬Έμ„œμ— μ‚¬μš©μž 지정 λ¬Έμ„œλ₯Ό μΆ”κ°€ν•˜λŠ” 것이 유용 ν•  수 μžˆμŠ΅λ‹ˆλ‹€.

@sdpython ,

ν₯λ―Έ λ‘­κ΅°μš”!

IIRCμ—λŠ” λͺ¨λ“  속성이 λ¬Έμ„œν™”λ˜μ—ˆλŠ”μ§€ ν™•μΈν•˜λŠ” 곡톡 ν…ŒμŠ€νŠΈκ°€ μžˆμŠ΅λ‹ˆλ‹€. https://github.com/scikit-learn/scikit-learn/pull/16286 에 μΆ”κ°€λ˜μ—ˆμŠ΅λ‹ˆλ‹€

I don't have an informed opinion on which approach is preferable, but documenting the missing parameters is probably higher priority than deciding how to check for them.

μŠ€ν•‘ν¬μŠ€μ—μ„œ κ·Έλ ‡κ²Œ ν•  λ•Œμ˜ λ¬Έμ œλŠ” 우리의 경우 λ¬Έμ„œλ₯Ό μž‘μ„±ν•˜λŠ” 데 였랜 μ‹œκ°„μ΄ κ±Έλ¦¬λ―€λ‘œ (λͺ¨λ“  예제λ₯Ό μƒμ„±ν•˜κΈ° λ•Œλ¬Έμ—) λ‹¨μœ„ ν…ŒμŠ€νŠΈ λ˜λŠ” 독립 μ‹€ν–‰ ν˜• 도ꡬλ₯Ό μ‚¬μš©ν•˜κΈ°κ°€ 더 μ‰½μŠ΅λ‹ˆλ‹€. 이전에 https://github.com/scikit-learn/scikit-learn/issues/15440μ—μ„œ numpydoc μœ νš¨μ„± 검사 λ₯Ό μ‚¬μš© 검사 λŠ” https://github.com/terrencepreillyμ—μ„œ μˆ˜ν–‰ ν•  수 μžˆμŠ΅λ‹ˆλ‹€. / darglint. λ”°λΌμ„œ 독 μŠ€νŠΈλ§μ— λŒ€ν•΄ 5 가지 λ‹€λ₯Έ μœ νš¨μ„± 검사 도ꡬλ₯Ό μ‚¬μš©ν•˜λŠ” 상황도 ν”Όν•΄μ•Όν•©λ‹ˆλ‹€. :)

For instance, I like being able to check the results with pytest, e.g.:

pytest -v  --runxfail -k IsolationForest sklearn/tests/test_docstring_parameters.py

λ”°λΌμ„œ μŠ€ν•‘ν¬μŠ€ λΉŒλ“œλ₯Ό λ³€κ²½ν•  ν•„μš”κ°€ 없을 μˆ˜λ„ μžˆμŠ΅λ‹ˆλ‹€.

I checked which attribute docstrings are still missing (the list above is outdated). Here is what I found:

BayesianGaussianMixture, [mean_precision_prior]
BayesianRidge, [X_offset_, X_scale_]
BernoulliNB, [coef_, intercept_]
μžμž‘ λ‚˜λ¬΄, [fit_, partial_fit_]
CCA, [x_mean_, x_std_, y_mean_, y_std_]
DecisionTreeRegressor, [classes_, n_classes_]
DummyClassifier, [output_2d_]
DummyRegressor, [output_2d_]
ElasticNet, [dual_gap_]
ElasticNetCV, [dual_gap_]
ExtraTreeRegressor, [classes_, n_classes_]
FeatureAgglomeration, [n_components_]
LarsCV, [active_]
Lasso, [dual_gap_]
LassoLarsCV, [active_]
LassoLarsIC, [alphas_]
MiniBatchKMeans, [counts_, init_size_, n_iter_]
MultiTaskElasticNet, [dual_gap_, eps_, sparse_coef_]
MultiTaskElasticNetCV, [dual_gap_]
MultiTaskLasso, [dual_gap_, eps_, sparse_coef_]
MultiTaskLassoCV, [dual_gap_]
NuSVR, [probA_, probB_]
OneClassSVM, [probA_, probB_]
OneVsRestClassifier, [coef_, intercept_]
OrthogonalMatchingPursuit, [n_nonzero_coefs_]
PLSCanonical, [x_mean_, x_std_, y_mean_, y_std_]
PLSSVD, [x_mean_, x_std_, y_mean_, y_std_]
SVR, [probA_, probB_]

@marenwestermann κ°μ‚¬ν•©λ‹ˆλ‹€!

Working on MiniBatchKMeans.

Working on Lasso.

I'm now adding the sparse_coef_ attribute to MultiTaskElasticNet and MultiTaskLasso.

μ €λŠ” LarsCVμ—μ„œ μΌν•˜κ³  μžˆμŠ΅λ‹ˆλ‹€.

@thomasjpfan In the SVR and OneClassSVM classes it says:
"The probA_ attribute is deprecated in version 0.23 and will be removed in version 0.25." and
"The probB_ attribute is deprecated in version 0.23 and will be removed in version 0.25."

These attributes therefore don't need to be documented anymore.
Going from here, are these two attributes also deprecated in the NuSVR class?

The classes_ and n_classes_ attributes of ExtraTreeRegressor are false positives.

λ”°λΌμ„œ μ΄λŸ¬ν•œ 속성은 더 이상 λ¬Έμ„œκ°€ ν•„μš”ν•˜μ§€ μ•ŠμŠ΅λ‹ˆλ‹€.
μ—¬κΈ°μ„œλΆ€ν„°μ΄ 두 속성은 NuSVR ν΄λž˜μŠ€μ—μ„œλ„ 더 이상 μ‚¬μš©λ˜μ§€ μ•ŠμŠ΅λ‹ˆκΉŒ?

I would say we don't need to document them since they are deprecated.

The classes_ and n_classes_ attributes of ExtraTreeRegressor are false positives.

Yes, they are deprecated and should be removed if that hasn't happened already.

The DecisionTreeRegressor class says:
"The n_classes_ attribute is deprecated in version 0.22 and will be removed in 0.24." and
"The classes_ attribute is deprecated in version 0.22 and will be removed in 0.24."

λ”°λΌμ„œ μ΄λŸ¬ν•œ 속성도 λ¬Έμ„œν™”κ°€ ν•„μš”ν•˜μ§€ μ•ŠμŠ΅λ‹ˆκΉŒ?

λ”°λΌμ„œ μ΄λŸ¬ν•œ 속성도 λ¬Έμ„œν™”κ°€ ν•„μš”ν•˜μ§€ μ•ŠμŠ΅λ‹ˆκΉŒ?

You're right @Abilityguy, thanks for pointing it out.
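Deprecated attributes like these are typically turned into properties that warn on access; since a property never lands in `est.__dict__`, the attribute script above then stops flagging them. A toy sketch of that mechanism (`Model` is invented here; scikit-learn uses its own `deprecated` decorator, but the effect is similar):

```python
import warnings

class Model:
    def fit(self):
        self._n_classes = 1   # private storage for the deprecated value
        return self

    @property
    def n_classes_(self):
        # Deprecated public alias, kept only for backward compatibility.
        warnings.warn(
            "The n_classes_ attribute is deprecated in version 0.22 "
            "and will be removed in 0.24.", FutureWarning)
        return self._n_classes

m = Model().fit()
print("n_classes_" in vars(m))  # False: properties never appear in __dict__
with warnings.catch_warnings(record=True) as caught:
    warnings.simplefilter("always")
    value = m.n_classes_
print(value, type(caught[0].message).__name__)  # 1 FutureWarning
```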

_RidgeGCVμ—μ„œ μ•„λž˜ 뢈일치λ₯Ό λ³Ό 수 μžˆμŠ΅λ‹ˆλ‹€.
독 슀트링 였λ₯˜ : _RidgeGCV의 속성 뢈일치
alpha_
졜고 점수_
coef_
dual_coef_
intercept_
n_features_in_

And in _BaseRidgeCV:
Docstring Error: Attribute mismatch in _BaseRidgeCV
alpha_
best_score_
coef_
intercept_
n_features_in_

Can I take this one? I'm a first-timer and would like to contribute.

@marenwestermann In the FeatureAgglomeration class, n_connected_components_ was added in version 0.21 to replace n_components_, so n_components_ would be a false positive..?

λ‚΄ μ΄ν•΄μ—μ„œ @ srivathsa729 예. κ·ΈλŸ¬λ‚˜ 핡심 개발자 쀑 ν•œ λͺ…이 λ‹€μ‹œ 확인할 수 μžˆλ‹€λ©΄ 쒋을 κ²ƒμž…λ‹ˆλ‹€.

I'll take ElasticNet.

Documentation for the X_offset_ and X_scale_ attributes of BayesianRidge was added with #18607.

The output_2d_ attribute has been deprecated in DummyClassifier and DummyRegressor (see #14933).

이 PR μƒλ‹¨μ—μ„œ @amueller κ°€ 제곡 ν•œ 슀크립트λ₯Ό μ‹€ν–‰ν–ˆμŠ΅λ‹ˆλ‹€ (일이 움직이기 λ•Œλ¬Έμ— μ½”λ“œλ₯Ό μ•½κ°„ μˆ˜μ •ν•΄μ•Ό 함). # 16112에 λ„μž… 된 n_features_in_ λ₯Ό μ œμ™Έν•˜κ³  λ¬Έμ„œν™”ν•΄μ•Ό ν•  속성을 더 이상 찾을 수 μ—†μŠ΅λ‹ˆλ‹€. 이 속성은 μ†Œκ°œ 된 λͺ¨λ“  ν΄λž˜μŠ€μ—μ„œ λ¬Έμ„œν™”λ˜μ§€ μ•Šμ•˜μŠ΅λ‹ˆλ‹€. λ¬Έμ„œν™”ν•΄μ•Όν•©λ‹ˆκΉŒ?
ν•‘ @NicolasHug

이 νŽ˜μ΄μ§€κ°€ 도움이 λ˜μ—ˆλ‚˜μš”?
0 / 5 - 0 λ“±κΈ‰