classification#

cleanlab can be used for multiclass (or multi-label) learning with noisy labels for any dataset and model.

The CleanLearning class wraps an instance of an sklearn classifier. The wrapped classifier must adhere to the sklearn estimator API, meaning it must define four functions:

  • clf.fit(X, y, sample_weight=None)

  • clf.predict_proba(X)

  • clf.predict(X)

  • clf.score(X, y, sample_weight=None)

where X contains data, y contains labels (with elements in 0, 1, …, K-1, where K is the number of classes), and sample_weight re-weights examples in the loss function while training.

Furthermore, the estimator should be correctly clonable via sklearn.base.clone: cleanlab internally creates multiple instances of the estimator, and if you e.g. manually wrap a PyTorch model, you must ensure that every call to the estimator’s __init__() creates an independent instance of the model.
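For instance, here is a quick sanity check (a minimal sketch using plain sklearn, not a cleanlab API) that an estimator clones correctly:

from sklearn.base import clone
from sklearn.linear_model import LogisticRegression

clf = LogisticRegression(C=0.1)
clf_clone = clone(clf)  # must be a new, unfitted, independent instance
assert clf_clone is not clf
assert clf_clone.get_params()["C"] == 0.1  # hyperparameters carried over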

Note

There are two new notions of confidence in this package:

1. Confident examples — examples we are confident are labeled correctly. We prune everything else. Mathematically, this means keeping the examples with a high probability of belonging to their provided label class.

2. Confident errors — examples we are confident are labeled erroneously. We prune these. Mathematically, this means pruning the examples with a high probability of belonging to a different class.

Examples

>>> from cleanlab.classification import CleanLearning
>>> from sklearn.linear_model import LogisticRegression as LogReg
>>> cl = CleanLearning(clf=LogReg()) # Pass in any classifier.
>>> cl.fit(X_train, labels_maybe_with_errors)
>>> # Estimate the predictions as if you had trained without label issues.
>>> pred = cl.predict(X_test)

If the model is not sklearn-compatible by default, it might be the case that standard packages can adapt the model. For example, you can adapt PyTorch models using skorch and adapt Keras models using SciKeras.
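For instance, here is a hedged sketch of adapting a PyTorch model with skorch (MyTorchNet is a hypothetical module; skorch's NeuralNetClassifier pairs a softmax output with its default loss):

import torch.nn as nn
from skorch import NeuralNetClassifier
from cleanlab.classification import CleanLearning

class MyTorchNet(nn.Module):  # hypothetical example network
    def __init__(self, num_features=20, num_classes=2):
        super().__init__()
        self.dense = nn.Linear(num_features, num_classes)
        self.softmax = nn.Softmax(dim=-1)

    def forward(self, X):
        return self.softmax(self.dense(X))  # class probabilities

clf = NeuralNetClassifier(MyTorchNet, max_epochs=10, lr=0.1)
cl = CleanLearning(clf=clf)  # usable like any sklearn classifier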

If an open-source adapter doesn’t already exist, you can manually wrap the model to be sklearn-compatible. This is made easy by inheriting from sklearn.base.BaseEstimator:

from sklearn.base import BaseEstimator

class YourModel(BaseEstimator):
    def __init__(self):
        # Store all hyperparameters as attributes here so that
        # sklearn.base.clone can reconstruct an independent instance.
        pass
    def fit(self, X, y, sample_weight=None):
        # Train the model on features X and (possibly noisy) labels y.
        pass
    def predict(self, X):
        # Return an array of shape (N,) of predicted class labels.
        pass
    def predict_proba(self, X):
        # Return an array of shape (N, K) of predicted class probabilities.
        pass
    def score(self, X, y, sample_weight=None):
        # Return a scalar evaluation metric such as accuracy.
        pass
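Once wrapped this way, the model can be passed to CleanLearning like any other sklearn classifier:

from cleanlab.classification import CleanLearning

cl = CleanLearning(clf=YourModel())  # YourModel as sketched above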

Note

  • labels refers to the given labels in the original dataset, which may have errors

  • labels must be integers in 0, 1, …, K-1, where K is the total number of classes

Note

Confident learning is the state-of-the-art (Northcutt et al., 2021) for weak supervision, finding label issues in datasets, learning with noisy labels, uncertainty estimation, and more. It works with any classifier, including deep neural networks. See the clf parameter.

Confident learning is a subfield of the theory and algorithms of machine learning with noisy labels. Among open-source implementations of confident learning, cleanlab achieves state-of-the-art performance across a variety of tasks such as multi-class classification, multi-label classification, and PU learning.

Given any classifier with a predict_proba method, an input feature matrix X, and a discrete vector of noisy labels labels, confident learning estimates the classifications that would be obtained if the true labels had instead been provided to the classifier during training. Here labels denotes the noisy labels, written as $\tilde{y}$ in the confident learning paper.

Classes:

CleanLearning([clf, seed, cv_n_folds, ...])

CleanLearning = Machine Learning with cleaned data (even when training on messy, error-ridden data).

class cleanlab.classification.CleanLearning(clf=None, *, seed=None, cv_n_folds=5, converge_latent_estimates=False, pulearning=None, find_label_issues_kwargs={}, label_quality_scores_kwargs={}, verbose=False)[source]#

Bases: sklearn.base.BaseEstimator

CleanLearning = Machine Learning with cleaned data (even when training on messy, error-ridden data).

Automated and robust learning with noisy labels using any dataset and any model. This class trains a model clf with error-prone, noisy labels as if the model had instead been trained on a dataset with perfect labels. It achieves this by finding and removing the label errors and training on the cleaned data.

Parameters
  • clf (estimator instance, optional) –

    A classifier implementing the sklearn estimator API, defining the following functions:

    • clf.fit(X, y, sample_weight=None)

    • clf.predict_proba(X)

    • clf.predict(X)

    • clf.score(X, y, sample_weight=None)

    See cleanlab.experimental for examples of sklearn wrappers, e.g. around PyTorch and FastText.

    If the model is not sklearn-compatible by default, it might be the case that standard packages can adapt the model. For example, you can adapt PyTorch models using skorch and adapt Keras models using SciKeras.

    Stores the classifier used in confident learning. The default classifier is sklearn.linear_model.LogisticRegression.

  • seed (int, optional) – Set the default state of the random number generator used to split the cross-validated folds. By default, uses the current random state of np.random.

  • cv_n_folds (int, default 5) – This class needs holdout predicted probabilities for every data example and if not provided, uses cross-validation to compute them. cv_n_folds sets the number of cross-validation folds used to compute out-of-sample probabilities for each example in X.

  • converge_latent_estimates (bool, optional) – If True, forces numerical consistency of latent estimates. Each is estimated independently, but they are related mathematically via closed-form equivalences; this option iteratively enforces consistency among them.

  • pulearning ({None, 0, 1}, default None) – Only works for binary (2-class) datasets. Set this to the integer of the class that is perfectly labeled (i.e., you are certain there are no label errors in that class).

  • find_label_issues_kwargs (dict, optional) – Keyword arguments to pass into filter.find_label_issues. Options that may especially impact accuracy include: filter_by, frac_noise, min_examples_per_class.

  • label_quality_scores_kwargs (dict, optional) – Keyword arguments to pass into rank.get_label_quality_scores. Options include: method, adjust_pred_probs.

  • verbose (bool, default False) – Controls how much output is printed. Set to True to print additional information during training.
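For illustration, a hedged configuration sketch (the option values shown are documented options of filter.find_label_issues and rank.get_label_quality_scores):

from sklearn.linear_model import LogisticRegression
from cleanlab.classification import CleanLearning

cl = CleanLearning(
    clf=LogisticRegression(),
    cv_n_folds=10,  # more folds: better out-of-sample pred_probs, but slower
    seed=0,         # reproducible cross-validation splits
    find_label_issues_kwargs={"filter_by": "prune_by_noise_rate"},
    label_quality_scores_kwargs={"method": "self_confidence"},
)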

Methods:

find_label_issues([X, labels, pred_probs, ...])

Identifies potential label issues in the dataset using confident learning.

fit(X, labels, *[, pred_probs, thresholds, ...])

Train the model clf with error-prone, noisy labels as if the model had been instead trained on a dataset with the correct labels.

get_label_issues()

Accessor.

get_params([deep])

Get parameters for this estimator.

predict(*args, **kwargs)

Returns a vector of predictions.

predict_proba(*args, **kwargs)

Returns a vector of predicted probabilities for each example in X, P(true label=k).

save_space()

Clears non-sklearn attributes of this estimator to save space (in-place).

score(X, y[, sample_weight])

Returns the clf's score on a test set X with labels y.

set_params(**params)

Set the parameters of this estimator.

find_label_issues(X=None, labels=None, *, pred_probs=None, thresholds=None, noise_matrix=None, inverse_noise_matrix=None, save_space=False, clf_kwargs={})[source]#

Identifies potential label issues in the dataset using confident learning.

Runs cross-validation to get out-of-sample pred_probs from clf and then calls filter.find_label_issues to find label issues. These label issues are cached internally and returned in a pandas DataFrame. Kwargs for filter.find_label_issues must have already been specified in the initialization of this class, not here.

Unlike filter.find_label_issues, which requires pred_probs, this method only requires a classifier, and it can run the cross-validation for you. Both methods return the same boolean mask that identifies which examples have label issues. This is the preferred method to use if you plan to subsequently invoke CleanLearning.fit().

Note: this method computes the label issues from scratch. To access previously-computed label issues from this CleanLearning instance, use the get_label_issues method.

This is the method CleanLearning.fit() calls internally to find label issues, and the two share mostly the same parameters.

Parameters

save_space (bool, optional) – If True, the returned label_issues_df will not be stored as an attribute. This means some other methods, like self.get_label_issues(), will no longer work.

For info about the other parameters, see the docstring of CleanLearning.fit().

Returns

pandas DataFrame of label issues for each example. Unless the save_space argument is specified, the same DataFrame is also stored as the self.label_issues_df attribute, accessible via get_label_issues. Each row represents an example from the dataset and the DataFrame may contain the following columns:

  • is_label_issue: boolean mask for the entire dataset where True represents a label issue and False represents an example that is accurately labeled with high confidence. This column is equivalent to label_issues_mask output from filter.find_label_issues.

  • label_quality: Numeric score that measures the quality of each label (how likely it is to be correct, with lower scores indicating potentially erroneous labels).

  • given_label: Integer indices corresponding to the class label originally given for this example (same as the labels input). Included for ease of comparison against the clf predictions; only present if the predicted_label column is present.

  • predicted_label: Integer indices corresponding to the class predicted by trained clf model. Only present if pred_probs were provided as input or computed during label-issue-finding.

  • sample_weight: Numeric values used to weight examples during the final training of clf in CleanLearning.fit(). This column will not be present after self.find_label_issues(), but may be added after calling CleanLearning.fit(). For a more precise definition of the sample weights, see the documentation of CleanLearning.fit().

Return type

pd.DataFrame
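A brief usage sketch (X and labels are placeholders for your data; the column names are those documented above):

issues = cl.find_label_issues(X, labels)  # runs cross-validation internally
flagged = issues[issues["is_label_issue"]]
flagged = flagged.sort_values("label_quality")  # most suspect labels first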

fit(X, labels, *, pred_probs=None, thresholds=None, noise_matrix=None, inverse_noise_matrix=None, label_issues=None, sample_weight=None, clf_kwargs={}, clf_final_kwargs={})[source]#

Train the model clf with error-prone, noisy labels as if the model had been instead trained on a dataset with the correct labels. fit achieves this by first training clf via cross-validation on the noisy data, using the resulting predicted probabilities to identify label issues, pruning the data with label issues, and finally training clf on the remaining clean data.

Parameters
  • X (np.array) – Input feature matrix of shape (N, ...), where N is the number of examples. The classifier that this instance was initialized with, clf, must be able to handle data with this shape.

  • labels (np.array) – An array of shape (N,) of noisy labels, i.e. some labels may be erroneous. Elements must be in the set 0, 1, …, K-1, where K is the number of classes.

  • pred_probs (np.array, optional) –

    An array of shape (N, K) of model-predicted probabilities, P(label=k|x). Each row of this matrix corresponds to an example x and contains the model-predicted probabilities that x belongs to each possible class, for each of the K classes. The columns must be ordered such that these probabilities correspond to class 0, 1, …, K-1. pred_probs should have been computed using 3-fold (or higher) cross-validation.

    Note

    If you are not sure, leave pred_probs=None (the default) and it will be computed for you using cross-validation with the model.

  • thresholds (array_like, optional) –

    An array of shape (K, 1) or (K,) of per-class threshold probabilities, used to determine the cutoff probability necessary to consider an example as a given class label (see Northcutt et al., 2021, Section 3.1, Equation 2).

    This is for advanced users only. If not specified, these are computed for you automatically. If an example has a predicted probability greater than this threshold, it is counted as having true_label = k. This is not used for pruning/filtering, only for estimating the noise rates using confident counts.

  • noise_matrix (np.array, optional) – An array of shape (K, K) representing the conditional probability matrix P(label=k_s | true label=k_y), the fraction of examples in every class that are labeled as every other class. Columns of noise_matrix are assumed to sum to 1.

  • inverse_noise_matrix (np.array, optional) – An array of shape (K, K) representing the conditional probability matrix P(true label=k_y | label=k_s), the estimated fraction of observed examples in each class k_s that are actually mislabeled examples from class k_y. Columns of inverse_noise_matrix are assumed to sum to 1.

  • label_issues (pd.DataFrame or np.array, optional) –

    Specifies the label issues for each example in the dataset. If a pd.DataFrame, it must be formatted like the one returned by CleanLearning.find_label_issues or CleanLearning.get_label_issues. If an np.array, it must contain either a boolean label_issues_mask as output by the default filter.find_label_issues, or integer indices as output by filter.find_label_issues with its return_indices_ranked_by argument specified. Providing this argument significantly reduces the time this method takes to run by skipping the slow cross-validation step necessary to find label issues. Examples identified to have label issues will be pruned from the data before training the final clf model.

    Caution: If you provide label_issues without having previously called self.find_label_issues, e.g. as a np.array, then some functionality like training with sample weights may be disabled.

  • sample_weight (array-like of shape (n_samples,), optional) – Array of weights that are assigned to individual samples. If not provided, samples may still be weighted by the estimated noise in the class they are labeled as.

  • clf_kwargs (dict, optional) – Optional keyword arguments to pass into clf’s fit() method.

  • clf_final_kwargs (dict, optional) – Optional extra keyword arguments to pass into the final clf fit() on the cleaned data but not the clf fit() in each fold of cross-validation on the noisy data. The final fit() will also receive clf_kwargs, but these may be overwritten by values in clf_final_kwargs. This can be useful for training differently in the final fit() than during cross-validation.

Returns

self - Fitted estimator that has all the same methods as any sklearn estimator.

Return type

CleanLearning

After calling self.fit(), this estimator also stores a few extra useful attributes, in particular self.label_issues_df: a pd.DataFrame, accessible via get_label_issues, in a similar format to the one returned by CleanLearning.find_label_issues. See the documentation of CleanLearning.find_label_issues for column descriptions. After calling self.fit(), self.label_issues_df may also contain an extra column:

  • sample_weight: Numeric values that were used to weight examples during the final training of clf in CleanLearning.fit(). The sample_weight column will only be present if automatic sample weights were actually used. These automatic weights are assigned to each example based on the class it belongs to, i.e. there are only num_classes unique sample_weight values. The sample weight for an example belonging to class k is computed as 1 / p(given_label = k | true_label = k). This sample_weight normalizes the loss to effectively trick clf into learning with the distribution of the true labels, accounting for the noisy data pruned out prior to training on the cleaned data. In other words, because examples with label issues were removed, the remaining data are re-weighted proportionally so that the classifier trains as if it had all the true labels, not just the subset of cleaned data left after pruning.
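A hedged usage sketch (X and labels are placeholders for your data): you can skip the internal cross-validation either by supplying out-of-sample pred_probs computed yourself, or by reusing label issues from a prior find_label_issues call:

from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_predict
from cleanlab.classification import CleanLearning

cl = CleanLearning(clf=LogisticRegression())

# Option 1: supply out-of-sample predicted probabilities yourself.
pred_probs = cross_val_predict(
    LogisticRegression(), X, labels, cv=5, method="predict_proba"
)
cl.fit(X, labels, pred_probs=pred_probs)

# Option 2: reuse previously computed label issues to skip re-finding them.
issues = cl.find_label_issues(X, labels)
cl.fit(X, labels, label_issues=issues)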

get_label_issues()[source]#

Accessor. Returns the label_issues_df attribute if it was previously computed. This pd.DataFrame describes the label issues identified for each example (each row corresponds to an example). For column definitions, see the documentation of CleanLearning.find_label_issues.

Return type

pd.DataFrame

get_params(deep=True)#

Get parameters for this estimator.

Parameters

deep (bool, default True) – If True, will return the parameters for this estimator and contained subobjects that are estimators.

Returns

params – Parameter names mapped to their values.

Return type

dict

predict(*args, **kwargs)[source]#

Returns a vector of predictions.

Parameters

X (np.array) – An array of shape (N, ...) of test data.

predict_proba(*args, **kwargs)[source]#

Returns a vector of predicted probabilities for each example in X, P(true label=k).

Parameters

X (np.array) – An array of shape (N, ...) of test data.

save_space()[source]#

Clears non-sklearn attributes of this estimator to save space (in-place). This includes the DataFrame attribute that stored label issues which may be large for big datasets. You may want to call this method before deploying this model (i.e. if you just care about producing predictions). After calling this method, certain non-prediction-related attributes/functionality will no longer be available (e.g. you cannot call self.fit() anymore).
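A hedged deployment sketch (the pickle filename is a placeholder):

import pickle

cl.save_space()  # drop the cached label-issues DataFrame and other extras
with open("cleanlearning_model.pkl", "wb") as f:
    pickle.dump(cl, f)  # smaller artifact; the loaded model can still predict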

score(X, y, sample_weight=None)[source]#

Returns the clf’s score on a test set X with labels y. Uses the model’s default scoring function.

Parameters
  • X (np.array) – An array of shape (N, ...) of test data.

  • y (np.array) – An array of shape (N,) or (N, 1) of test labels.

  • sample_weight (np.array, optional) – An array of shape (N,) or (N, 1) used to weight each example when computing the score.

set_params(**params)#

Set the parameters of this estimator.

The method works on simple estimators as well as on nested objects (such as Pipeline). The latter have parameters of the form <component>__<parameter> so that it’s possible to update each component of a nested object.

Parameters

**params (dict) – Estimator parameters.

Returns

self – Estimator instance.

Return type

estimator instance