immuneML.hyperparameter_optimization.strategy package

Submodules

immuneML.hyperparameter_optimization.strategy.GridSearch module

class immuneML.hyperparameter_optimization.strategy.GridSearch.GridSearch(hp_settings: list, search_criterion=<built-in function max>)

Bases: immuneML.hyperparameter_optimization.strategy.HPOptimizationStrategy.HPOptimizationStrategy

clone()
generate_next_setting(hp_setting: Optional[immuneML.hyperparameter_optimization.HPSetting.HPSetting] = None, metric: Optional[float] = None) → immuneML.hyperparameter_optimization.HPSetting.HPSetting

Generator function that returns the next hyper-parameter setting to be evaluated.

Parameters

hp_setting – the previous setting (None if it is the first iteration)

metric – the performance metric obtained with the previous setting, per label

Returns

the new hp_setting, or None if the end of the search space is reached

get_all_hps() → immuneML.hyperparameter_optimization.HPSettingResult.HPSettingResult
get_optimal_hps() → immuneML.hyperparameter_optimization.HPSetting.HPSetting

Finds the optimal hyperparameter setting, i.e. the one with the maximal or minimal value of the search metric. The search criterion (an object attribute) determines which: if it is the max function, higher values are better (metrics such as accuracy or AUC); if it is the min function, lower values are better (metrics such as log loss).

Returns

HPSetting object which had the optimal performance based on the metric value in the search space

get_performance(hp_setting: immuneML.hyperparameter_optimization.HPSetting.HPSetting)
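Taken together, these methods support a simple evaluate-and-report loop. The sketch below shows one way to drive a GridSearch; run_grid_search and evaluate are hypothetical helpers introduced here for illustration, and the construction of the HPSetting objects in hp_settings is omitted since it is not covered on this page.

    from immuneML.hyperparameter_optimization.strategy.GridSearch import GridSearch

    def run_grid_search(hp_settings, evaluate):
        """Hypothetical driver: hp_settings is a list of pre-built HPSetting
        objects; evaluate(setting) returns a scalar validation-set metric."""
        strategy = GridSearch(hp_settings=hp_settings, search_criterion=max)  # use min for e.g. log loss

        setting = strategy.generate_next_setting()  # first call: no previous setting or metric
        while setting is not None:
            metric_value = evaluate(setting)  # score this setting on the validation set
            # report the previous result and receive the next setting (None once the grid is exhausted)
            setting = strategy.generate_next_setting(setting, metric_value)

        return strategy.get_optimal_hps()  # the setting with the best metric under search_criterion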

immuneML.hyperparameter_optimization.strategy.HPOptimizationStrategy module

class immuneML.hyperparameter_optimization.strategy.HPOptimizationStrategy.HPOptimizationStrategy(hp_settings: list, search_criterion=<built-in function max>)

Bases: object

HPOptimizationStrategy is the base class for all hyper-parameter optimization approaches, such as grid search, random search, and Bayesian optimization.

Internally, it keeps a dict mapping each setting that has been tried to the metric value obtained on the validation set, which it then uses to determine the next step.

abstract clone()
abstract generate_next_setting(hp_setting: Optional[immuneML.hyperparameter_optimization.HPSetting.HPSetting] = None, metric: Optional[dict] = None)

Generator function that returns the next hyper-parameter setting to be evaluated.

Parameters

hp_setting – the previous setting (None if it is the first iteration)

metric – the performance metric obtained with the previous setting, per label

Returns

the new hp_setting, or None if the end of the search space is reached

abstract get_all_hps() → immuneML.hyperparameter_optimization.HPSettingResult.HPSettingResult
abstract get_optimal_hps() → immuneML.hyperparameter_optimization.HPSetting.HPSetting
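To implement a new strategy, a subclass provides the four abstract methods above. Below is a minimal, illustrative sketch (not part of immuneML) of a ShuffledSearch that tries the settings in random order; it keeps its own simplified bookkeeping rather than relying on the base class internals, and it assumes a single scalar metric value per setting for simplicity.

    import random
    from typing import Optional

    from immuneML.hyperparameter_optimization.strategy.HPOptimizationStrategy import HPOptimizationStrategy

    class ShuffledSearch(HPOptimizationStrategy):
        """Illustrative only: evaluates the settings in a random order."""

        def __init__(self, hp_settings: list, search_criterion=max):
            super().__init__(hp_settings, search_criterion)
            self._criterion = search_criterion
            self._queue = random.sample(hp_settings, len(hp_settings))  # shuffled copy of the settings
            self._results = []  # (setting, metric value) pairs reported so far

        def generate_next_setting(self, hp_setting=None, metric: Optional[float] = None):
            if hp_setting is not None:
                self._results.append((hp_setting, metric))  # record the previous result
            return self._queue.pop() if self._queue else None  # None once all settings were tried

        def get_optimal_hps(self):
            # best recorded setting under the criterion (max for accuracy/AUC, min for log loss)
            return self._criterion(self._results, key=lambda pair: pair[1])[0]

        def get_all_hps(self):
            raise NotImplementedError  # would wrap self._results in an HPSettingResult

        def clone(self):
            return ShuffledSearch([s for s, _ in self._results] + list(self._queue), self._criterion)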

Module contents