immuneML.ml_methods.generative_models.progen package

Submodules

immuneML.ml_methods.generative_models.progen.ProGenConfig module

class immuneML.ml_methods.generative_models.progen.ProGenConfig.ProGenConfig(vocab_size=50400, n_positions=2048, n_ctx=2048, n_embd=4096, n_layer=28, n_head=16, rotary_dim=64, n_inner=None, activation_function='gelu_new', resid_pdrop=0.0, embd_pdrop=0.0, attn_pdrop=0.0, layer_norm_epsilon=1e-05, initializer_range=0.02, scale_attn_weights=True, gradient_checkpointing=False, use_cache=True, bos_token_id=50256, eos_token_id=50256, **kwargs)[source]

Bases: PretrainedConfig

property hidden_size
property max_position_embeddings
model_type: str = 'progen'
property num_attention_heads
property num_hidden_layers
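
A minimal sketch of how the configuration class is typically instantiated; the scaled-down hyperparameter values are illustrative only and do not correspond to any released ProGen checkpoint. The properties listed above are assumed to alias the underlying n_embd, n_positions, n_head and n_layer attributes, following the usual transformers naming convention.

from immuneML.ml_methods.generative_models.progen.ProGenConfig import ProGenConfig

# illustrative, scaled-down values; the defaults in the signature above
# describe a much larger model (n_embd=4096, n_layer=28, ...)
config = ProGenConfig(
    vocab_size=32,      # e.g. amino-acid alphabet plus special tokens (assumption)
    n_positions=512,
    n_embd=256,
    n_layer=4,
    n_head=8,
    rotary_dim=32,
)

print(config.model_type)               # 'progen'
print(config.hidden_size)              # assumed to return n_embd
print(config.num_hidden_layers)        # assumed to return n_layer
print(config.num_attention_heads)      # assumed to return n_head
print(config.max_position_embeddings)  # assumed to return n_positions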

immuneML.ml_methods.generative_models.progen.ProGenForCausalLM module

class immuneML.ml_methods.generative_models.progen.ProGenForCausalLM.ProGenAttention(config)[source]

Bases: Module

forward(hidden_states, attention_mask=None, layer_past=None, head_mask=None, use_cache=False, output_attentions=False)[source]

Define the computation performed at every call.

Should be overridden by all subclasses.

Note

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.
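
As the note says, the forward recipe should be invoked through the module instance so that registered hooks run. A minimal sketch, assuming the attention module can be built directly from a ProGenConfig and called with only hidden_states; the tensor shapes are chosen for illustration:

import torch
from immuneML.ml_methods.generative_models.progen.ProGenConfig import ProGenConfig
from immuneML.ml_methods.generative_models.progen.ProGenForCausalLM import ProGenAttention

config = ProGenConfig(n_positions=512, n_embd=256, n_head=8, rotary_dim=32)
attn = ProGenAttention(config)

hidden_states = torch.randn(2, 16, config.n_embd)  # (batch, seq_len, n_embd)

# preferred: call the module instance, which runs any registered hooks
outputs = attn(hidden_states)

# discouraged: calling .forward() directly silently skips the hooks
# outputs = attn.forward(hidden_states)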

class immuneML.ml_methods.generative_models.progen.ProGenForCausalLM.ProGenBlock(config)[source]

Bases: Module

forward(hidden_states, layer_past=None, attention_mask=None, head_mask=None, use_cache=False, output_attentions=False)[source]

Define the computation performed at every call.

Should be overridden by all subclasses.

Note

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.

class immuneML.ml_methods.generative_models.progen.ProGenForCausalLM.ProGenForCausalLM(config)[source]

Bases: ProGenPreTrainedModel, GenerationMixin

deparallelize()[source]
forward(input_ids=None, past_key_values=None, attention_mask=None, token_type_ids=None, position_ids=None, head_mask=None, inputs_embeds=None, labels=None, use_cache=None, output_attentions=None, output_hidden_states=None, return_dict=None)[source]
labels (torch.LongTensor of shape (batch_size, sequence_length), optional):

Labels for language modeling. Note that the labels are shifted inside the model, i.e. you can set labels = input_ids. Indices are selected in [-100, 0, ..., config.vocab_size]. All labels set to -100 are ignored (masked); the loss is only computed for labels in [0, ..., config.vocab_size].
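
A minimal sketch of computing the language-modeling loss by passing the inputs as their own labels; the tiny configuration and the random token ids are illustrative only, and the output is assumed to follow the standard transformers causal-LM output with loss and logits fields.

import torch
from immuneML.ml_methods.generative_models.progen.ProGenConfig import ProGenConfig
from immuneML.ml_methods.generative_models.progen.ProGenForCausalLM import ProGenForCausalLM

config = ProGenConfig(vocab_size=32, n_positions=128, n_embd=64,
                      n_layer=2, n_head=4, rotary_dim=16)
model = ProGenForCausalLM(config)

input_ids = torch.randint(0, config.vocab_size, (2, 10))  # (batch, seq_len)

# labels = input_ids: the model shifts them internally, no manual shifting needed
outputs = model(input_ids=input_ids, labels=input_ids)
print(outputs.loss)          # scalar cross-entropy loss
print(outputs.logits.shape)  # (2, 10, vocab_size)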

get_output_embeddings()[source]

Returns the model’s output embeddings.

Returns:

A torch module mapping hidden states to vocabulary.

Return type:

nn.Module

parallelize(device_map=None)[source]
prepare_inputs_for_generation(input_ids, past=None, **kwargs)[source]

Prepare the model inputs for generation. It includes operations like computing the 4D attention mask or slicing inputs given the existing cache.

See the forward pass in the model documentation for expected arguments (different models might have different requirements for e.g. past_key_values). This function should work as is for most LLMs.
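
Since the class mixes in GenerationMixin, prepare_inputs_for_generation is normally invoked indirectly through generate(), which repeatedly calls forward() while reusing the cache. A minimal sketch with an untrained, illustratively sized model; the token ids and special-token choices are assumptions.

import torch
from immuneML.ml_methods.generative_models.progen.ProGenConfig import ProGenConfig
from immuneML.ml_methods.generative_models.progen.ProGenForCausalLM import ProGenForCausalLM

config = ProGenConfig(vocab_size=32, n_positions=128, n_embd=64, n_layer=2,
                      n_head=4, rotary_dim=16, bos_token_id=0, eos_token_id=1)
model = ProGenForCausalLM(config).eval()

prompt = torch.tensor([[0, 5, 7]])  # (batch=1, seq_len=3), arbitrary token ids

# generate() drives forward() via prepare_inputs_for_generation,
# growing past_key_values as new tokens are sampled
generated = model.generate(prompt, max_length=20, do_sample=True, pad_token_id=1)
print(generated.shape)  # (1, <=20)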

set_output_embeddings(new_embeddings)[source]
class immuneML.ml_methods.generative_models.progen.ProGenForCausalLM.ProGenMLP(intermediate_size, config)[source]

Bases: Module

forward(hidden_states)[source]

Define the computation performed at every call.

Should be overridden by all subclasses.

Note

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.

class immuneML.ml_methods.generative_models.progen.ProGenForCausalLM.ProGenModel(config)[source]

Bases: ProGenPreTrainedModel

deparallelize()[source]
forward(input_ids=None, past_key_values=None, attention_mask=None, token_type_ids=None, position_ids=None, head_mask=None, inputs_embeds=None, use_cache=None, output_attentions=None, output_hidden_states=None, return_dict=None)[source]

Define the computation performed at every call.

Should be overridden by all subclasses.

Note

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.

get_input_embeddings()[source]

Returns the model’s input embeddings.

Returns:

A torch module mapping vocabulary to hidden states.

Return type:

nn.Module

parallelize(device_map=None)[source]
set_input_embeddings(new_embeddings)[source]

Set model’s input embeddings.

Parameters:

new_embeddings (nn.Module) – A module mapping vocabulary to hidden states.
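
A minimal sketch of reading and replacing the input embedding module; the sizes are illustrative, and the returned module is assumed to be a standard nn.Embedding exposing a .weight tensor.

import torch
from torch import nn
from immuneML.ml_methods.generative_models.progen.ProGenConfig import ProGenConfig
from immuneML.ml_methods.generative_models.progen.ProGenForCausalLM import ProGenModel

config = ProGenConfig(vocab_size=32, n_positions=128, n_embd=64,
                      n_layer=2, n_head=4, rotary_dim=16)
model = ProGenModel(config)

embeddings = model.get_input_embeddings()
print(embeddings.weight.shape)  # (vocab_size, n_embd)

# swap in a freshly initialised embedding table of the same shape
model.set_input_embeddings(nn.Embedding(config.vocab_size, config.n_embd))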

class immuneML.ml_methods.generative_models.progen.ProGenForCausalLM.ProGenPreTrainedModel(*inputs, **kwargs)[source]

Bases: PreTrainedModel

An abstract class to handle weights initialization and a simple interface for downloading and loading pretrained models.

base_model_prefix = 'transformer'
config_class

alias of ProGenConfig

is_parallelizable = True
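
Because ProGenPreTrainedModel inherits from PreTrainedModel and sets config_class to ProGenConfig, subclasses can be saved and reloaded with the standard transformers interface. A minimal sketch; the directory path is a hypothetical local path, not a released checkpoint.

from immuneML.ml_methods.generative_models.progen.ProGenConfig import ProGenConfig
from immuneML.ml_methods.generative_models.progen.ProGenForCausalLM import ProGenForCausalLM

config = ProGenConfig(vocab_size=32, n_positions=128, n_embd=64,
                      n_layer=2, n_head=4, rotary_dim=16)
model = ProGenForCausalLM(config)

# save_pretrained / from_pretrained come from the PreTrainedModel base class;
# "/tmp/progen_demo" is a hypothetical local path used only for illustration
model.save_pretrained("/tmp/progen_demo")
reloaded = ProGenForCausalLM.from_pretrained("/tmp/progen_demo")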
immuneML.ml_methods.generative_models.progen.ProGenForCausalLM.apply_rotary_pos_emb(x, sincos, offset=0)[source]
immuneML.ml_methods.generative_models.progen.ProGenForCausalLM.fixed_pos_embedding(x, seq_dim=1, seq_len=None)[source]
immuneML.ml_methods.generative_models.progen.ProGenForCausalLM.rotate_every_two(x)[source]
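
A minimal sketch of how the three module-level rotary-embedding helpers fit together, assuming the GPT-J-style convention that queries/keys are shaped (batch, seq_len, num_heads, head_dim) and that only the leading rotary_dim features are rotated; the shapes are illustrative.

import torch
from immuneML.ml_methods.generative_models.progen.ProGenForCausalLM import (
    apply_rotary_pos_emb,
    fixed_pos_embedding,
    rotate_every_two,
)

batch, seq_len, num_heads, head_dim, rotary_dim = 2, 10, 4, 16, 8
query = torch.randn(batch, seq_len, num_heads, head_dim)

# split off the features that receive the rotary embedding (assumed convention)
q_rot, q_pass = query[..., :rotary_dim], query[..., rotary_dim:]

# rotate_every_two mixes adjacent feature pairs (x0, x1) -> (-x1, x0)
print(rotate_every_two(q_rot).shape)  # same shape as q_rot

# fixed_pos_embedding builds the sin/cos tables along the sequence dimension
sincos = fixed_pos_embedding(q_rot, seq_dim=1, seq_len=seq_len)

# apply_rotary_pos_emb applies the rotation, using rotate_every_two internally
q_rot = apply_rotary_pos_emb(q_rot, sincos, offset=0)

query = torch.cat([q_rot, q_pass], dim=-1)
print(query.shape)  # (2, 10, 4, 16)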

Module contents