immuneML.ml_metrics package

Submodules

immuneML.ml_metrics.Metric module

class immuneML.ml_metrics.Metric.Metric(value, names=None, *, module=None, qualname=None, type=None, start=1, boundary=None)[source]

Bases: Enum

ACCURACY = 'accuracy_score'
AUC = 'roc_auc_score'
BALANCED_ACCURACY = 'balanced_accuracy_score'
CONFUSION_MATRIX = 'confusion_matrix'
F1_MACRO = 'f1_score_macro'
F1_MICRO = 'f1_score_micro'
F1_WEIGHTED = 'f1_score_weighted'
LOG_LOSS = 'log_loss'
PRECISION = 'precision_score'
RECALL = 'recall_score'
static get_metric(metric_name: str)[source]
static get_probability_based_metric_types()[source]
static get_search_criterion(metric)[source]
static get_sklearn_score_name(metric)[source]
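As the member values above suggest, each Metric member maps a metric name to the name of the corresponding scikit-learn scoring function. A minimal stdlib-only sketch of how the name-based lookup helpers might behave (the member values are taken from the listing above; the method bodies are assumptions, not the immuneML implementation):

```python
from enum import Enum

class Metric(Enum):
    # A subset of the members listed above; values are sklearn function names
    ACCURACY = 'accuracy_score'
    AUC = 'roc_auc_score'
    LOG_LOSS = 'log_loss'

    @staticmethod
    def get_metric(metric_name: str):
        # Assumed behavior: case-insensitive lookup by member name,
        # e.g. "auc" -> Metric.AUC
        return Metric[metric_name.upper()]

    @staticmethod
    def get_sklearn_score_name(metric):
        # Assumed behavior: the enum value is the sklearn scorer name
        return metric.value

print(Metric.get_metric("auc"))                    # Metric.AUC
print(Metric.get_sklearn_score_name(Metric.AUC))   # roc_auc_score
```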

immuneML.ml_metrics.MetricUtil module

class immuneML.ml_metrics.MetricUtil.MetricUtil[source]

Bases: object

static get_metric_fn(metric: Metric)[source]
static score_for_metric(metric: Metric, predicted_y, predicted_proba_y, true_y, classes)[source]

Note: when providing label classes, make sure the positive class is sorted last. This sorting should happen automatically when the classes are accessed via Label.values.
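The class ordering matters because probability-based metrics such as AUC need the column of predicted_proba_y that belongs to the positive class; when the positive class sorts last, that is simply the last column. An illustrative plain-Python sketch (not the immuneML implementation; the example class names and data are made up):

```python
def roc_auc(true_y, pos_scores):
    """ROC AUC via the rank-statistic (Mann-Whitney) formulation:
    the fraction of (positive, negative) pairs ranked correctly,
    counting ties as half."""
    pos = [s for y, s in zip(true_y, pos_scores) if y == 1]
    neg = [s for y, s in zip(true_y, pos_scores) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

classes = ['healthy', 'diseased']   # positive class ('diseased') sorted last
predicted_proba_y = [[0.9, 0.1], [0.4, 0.6], [0.2, 0.8], [0.7, 0.3]]
true_y = [0, 1, 1, 0]

# Because the positive class is last, its probabilities are the last column
pos_column = [row[-1] for row in predicted_proba_y]
print(roc_auc(true_y, pos_column))  # 1.0 (all positives outrank all negatives)
```

Taking the wrong column would score the negative class instead, silently inverting the AUC, which is why the note above insists on the ordering.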

immuneML.ml_metrics.ml_metrics module

immuneML.ml_metrics.ml_metrics.f1_score_macro(true_y, predicted_y)[source]
immuneML.ml_metrics.ml_metrics.f1_score_micro(true_y, predicted_y)[source]
immuneML.ml_metrics.ml_metrics.f1_score_weighted(true_y, predicted_y)[source]
immuneML.ml_metrics.ml_metrics.roc_auc_score(true_y, predicted_y, labels=None)[source]
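Given the naming, these functions presumably delegate to sklearn.metrics.f1_score with the corresponding average argument ('macro', 'micro', 'weighted'). A stdlib-only sketch of what the macro variant computes, shown in plain Python for clarity (an illustration of macro-averaged F1, not the immuneML source):

```python
def f1_score_macro(true_y, predicted_y):
    """Macro F1: per-class F1 scores averaged with equal weight per class."""
    labels = sorted(set(true_y) | set(predicted_y))
    f1s = []
    for label in labels:
        tp = sum(t == label and p == label for t, p in zip(true_y, predicted_y))
        fp = sum(t != label and p == label for t, p in zip(true_y, predicted_y))
        fn = sum(t == label and p != label for t, p in zip(true_y, predicted_y))
        denom = 2 * tp + fp + fn
        f1s.append(2 * tp / denom if denom else 0.0)
    return sum(f1s) / len(f1s)  # macro: unweighted mean over classes

print(round(f1_score_macro([0, 1, 1, 0], [0, 1, 0, 0]), 4))  # 0.7333
```

The micro variant would instead pool tp/fp/fn across all classes before computing a single F1, and the weighted variant would average per-class F1 weighted by class support.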

Module contents