immuneML.ml_metrics package

Submodules

immuneML.ml_metrics.ClassificationMetric module

class immuneML.ml_metrics.ClassificationMetric.ClassificationMetric(value, names=None, *, module=None, qualname=None, type=None, start=1, boundary=None)[source]

Bases: Enum

ACCURACY = 'accuracy_score'
AUC = 'roc_auc_score'
AUC_OVO = 'roc_auc_score_ovo'
AUC_OVR = 'roc_auc_score_ovr'
AVERAGE_PRECISION = 'average_precision_score'
BALANCED_ACCURACY = 'balanced_accuracy_score'
BRIER_SCORE = 'brier_score_loss'
CONFUSION_MATRIX = 'confusion_matrix'
F1_MACRO = 'f1_score_macro'
F1_MICRO = 'f1_score_micro'
F1_WEIGHTED = 'f1_score_weighted'
LOG_LOSS = 'log_loss'
PRECISION = 'precision_score'
PRECISION_MACRO = 'precision_score_macro'
PRECISION_MICRO = 'precision_score_micro'
PRECISION_WEIGHTED = 'precision_score_weighted'
RECALL = 'recall_score'
RECALL_MACRO = 'recall_score_macro'
RECALL_MICRO = 'recall_score_micro'
RECALL_WEIGHTED = 'recall_score_weighted'
static get_binary_only_metrics()[source]

Metrics that require binarized labels

static get_metric(metric_name: str)[source]
static get_probability_based_metric_types()[source]
static get_search_criterion(metric)[source]
static get_sklearn_score_name(metric)[source]
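
A minimal usage sketch (assuming immuneML is installed); the exact name string accepted by get_metric and the return values of the static helpers are assumptions:

    from immuneML.ml_metrics.ClassificationMetric import ClassificationMetric

    # enum members map to sklearn-style score names via their .value
    print(ClassificationMetric.BALANCED_ACCURACY.value)   # 'balanced_accuracy_score'

    # metrics that require binarized labels (assumed to return enum members)
    print(ClassificationMetric.get_binary_only_metrics())

    # resolve a metric by name; the accepted casing ("balanced_accuracy") is an assumption
    metric = ClassificationMetric.get_metric("balanced_accuracy")
    print(ClassificationMetric.get_sklearn_score_name(metric))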

immuneML.ml_metrics.ClusteringMetric module

immuneML.ml_metrics.ClusteringMetric.is_external(metric: str)[source]
immuneML.ml_metrics.ClusteringMetric.is_internal(metric: str)[source]
immuneML.ml_metrics.ClusteringMetric.is_valid_metric(metric: str)[source]
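
Internal clustering metrics are computed from the data and cluster assignments alone (e.g. silhouette), while external metrics compare assignments against known labels (e.g. adjusted Rand index). A minimal sketch; the exact metric names these predicates recognize are assumptions:

    from immuneML.ml_metrics import ClusteringMetric

    # check how some example metric names are classified (names are assumptions)
    for name in ["silhouette_score", "adjusted_rand_score", "not_a_metric"]:
        print(name,
              ClusteringMetric.is_internal(name),
              ClusteringMetric.is_external(name),
              ClusteringMetric.is_valid_metric(name))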

immuneML.ml_metrics.MetricUtil module

class immuneML.ml_metrics.MetricUtil.MetricUtil[source]

Bases: object

static get_metric_fn(metric: ClassificationMetric)[source]
static score_for_metric(metric: ClassificationMetric, predicted_y, predicted_proba_y, true_y, classes, pos_class=None)[source]

Note: when providing label classes, make sure the 'positive class' is sorted last. This sorting should be done automatically when accessing Label.values.
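
A hedged sketch of scoring binary predictions with score_for_metric; the expected shape of predicted_proba_y (here, probabilities of the positive class) and the return type (a single score value) are assumptions:

    from immuneML.ml_metrics.ClassificationMetric import ClassificationMetric
    from immuneML.ml_metrics.MetricUtil import MetricUtil

    true_y = [0, 1, 1, 0, 1]
    predicted_y = [0, 1, 0, 0, 1]
    predicted_proba_y = [0.2, 0.9, 0.4, 0.1, 0.8]  # assumed: probability of the positive class

    # the positive class (1) is sorted last in `classes`, as noted above
    score = MetricUtil.score_for_metric(metric=ClassificationMetric.BALANCED_ACCURACY,
                                        predicted_y=predicted_y,
                                        predicted_proba_y=predicted_proba_y,
                                        true_y=true_y,
                                        classes=[0, 1])
    print(score)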

immuneML.ml_metrics.ml_metrics module

immuneML.ml_metrics.ml_metrics.brier_score_loss(true_y, predicted_y, sample_weight=None, labels=None)[source]
immuneML.ml_metrics.ml_metrics.f1_score_macro(true_y, predicted_y, sample_weight=None)[source]
immuneML.ml_metrics.ml_metrics.f1_score_micro(true_y, predicted_y, sample_weight=None)[source]
immuneML.ml_metrics.ml_metrics.f1_score_weighted(true_y, predicted_y, sample_weight=None)[source]
immuneML.ml_metrics.ml_metrics.precision_score_macro(true_y, predicted_y, sample_weight=None, labels=None)[source]
immuneML.ml_metrics.ml_metrics.precision_score_micro(true_y, predicted_y, sample_weight=None, labels=None)[source]
immuneML.ml_metrics.ml_metrics.precision_score_weighted(true_y, predicted_y, sample_weight=None, labels=None)[source]
immuneML.ml_metrics.ml_metrics.recall_score_macro(true_y, predicted_y, sample_weight=None, labels=None)[source]
immuneML.ml_metrics.ml_metrics.recall_score_micro(true_y, predicted_y, sample_weight=None, labels=None)[source]
immuneML.ml_metrics.ml_metrics.recall_score_weighted(true_y, predicted_y, sample_weight=None, labels=None)[source]
immuneML.ml_metrics.ml_metrics.roc_auc_score(true_y, predicted_y, sample_weight=None, labels=None, multiclass: str = 'raise')[source]
immuneML.ml_metrics.ml_metrics.roc_auc_score_ovo(true_y, predicted_y, sample_weight=None, labels=None)[source]
immuneML.ml_metrics.ml_metrics.roc_auc_score_ovr(true_y, predicted_y, sample_weight=None, labels=None)[source]
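
These functions appear to be thin wrappers around the corresponding scikit-learn metrics, with the averaging strategy fixed by the function name (macro, micro, weighted); that correspondence is an assumption. A minimal sketch:

    from immuneML.ml_metrics import ml_metrics

    true_y = [0, 1, 2, 2, 1, 0]
    predicted_y = [0, 2, 2, 2, 1, 0]

    # macro-averaged F1 and weighted recall over the three classes
    print(ml_metrics.f1_score_macro(true_y, predicted_y))
    print(ml_metrics.recall_score_weighted(true_y, predicted_y))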

Module contents