Neural Network Utility Functions
chemprop.nn_utils.py contains utility functions specific to neural networks.
- class chemprop.nn_utils.NoamLR(optimizer: Optimizer, warmup_epochs: List[float | int], total_epochs: List[int], steps_per_epoch: int, init_lr: List[float], max_lr: List[float], final_lr: List[float])[source]
Noam learning rate scheduler with piecewise linear increase and exponential decay.
The learning rate increases linearly from init_lr to max_lr over the course of the first warmup_steps (where warmup_steps = warmup_epochs * steps_per_epoch). Then the learning rate decreases exponentially from max_lr to final_lr over the course of the remaining total_steps - warmup_steps (where total_steps = total_epochs * steps_per_epoch). This is roughly based on the learning rate schedule from Attention is All You Need, section 5.3.
- Parameters:
optimizer – A PyTorch optimizer.
warmup_epochs – The number of epochs during which to linearly increase the learning rate.
total_epochs – The total number of epochs.
steps_per_epoch – The number of steps (batches) per epoch.
init_lr – The initial learning rate.
max_lr – The maximum learning rate (achieved after warmup_epochs).
final_lr – The final learning rate (achieved after total_epochs).
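The schedule described above can be sketched as a plain function of the step count. This is a framework-free illustration of the warmup/decay math, not the scheduler's actual implementation (which steps a PyTorch optimizer); the helper name noam_lr is hypothetical:

```python
def noam_lr(step, warmup_epochs, total_epochs, steps_per_epoch,
            init_lr, max_lr, final_lr):
    """Learning rate at a given step: linear warmup, then exponential decay."""
    warmup_steps = warmup_epochs * steps_per_epoch
    total_steps = total_epochs * steps_per_epoch
    if step <= warmup_steps:
        # Linear increase from init_lr to max_lr over warmup_steps.
        return init_lr + (max_lr - init_lr) * step / warmup_steps
    # Exponential decay chosen so the rate reaches final_lr at total_steps.
    gamma = (final_lr / max_lr) ** (1 / (total_steps - warmup_steps))
    return max_lr * gamma ** (step - warmup_steps)
```

At step warmup_steps the rate equals max_lr, and at total_steps it equals final_lr, matching the description above.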
- chemprop.nn_utils.activate_dropout(module: Module, dropout_prob: float)[source]
Sets the probability p of dropout layers and keeps them in train mode during inference, for uncertainty estimation.
- Parameters:
module – A PyTorch module (e.g. a MoleculeModel).
dropout_prob – A float in (0, 1) indicating the dropout probability.
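A plausible sketch of this behavior, assuming the function walks the module tree, resets p on every Dropout layer, and leaves those layers in train mode so masks keep being sampled at inference time (the exact implementation may differ):

```python
import torch.nn as nn

def activate_dropout(module: nn.Module, dropout_prob: float) -> None:
    """Set p on every Dropout layer and put it in train mode so that
    dropout stays active at inference time (for uncertainty estimation)."""
    for layer in module.modules():
        if isinstance(layer, nn.Dropout):
            layer.p = dropout_prob
            layer.train()  # keep sampling dropout masks even after model.eval()
```

Typical usage: call model.eval() first, then activate_dropout(model, 0.2) to keep only the dropout layers stochastic.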
- chemprop.nn_utils.compute_gnorm(model: Module) float [source]
Computes the norm of the gradients of a model.
- Parameters:
model – A PyTorch model.
- Returns:
The norm of the gradients of the model.
- chemprop.nn_utils.compute_pnorm(model: Module) float [source]
Computes the norm of the parameters of a model.
- Parameters:
model – A PyTorch model.
- Returns:
The norm of the parameters of the model.
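The two norms above can be sketched as the Euclidean (L2) norm taken over all parameters, and over all existing gradients; the choice of L2 here is an assumption about the exact norm used:

```python
import torch
import torch.nn as nn

def compute_pnorm(model: nn.Module) -> float:
    """L2 norm over all model parameters."""
    return float(torch.sqrt(sum(p.norm() ** 2 for p in model.parameters())))

def compute_gnorm(model: nn.Module) -> float:
    """L2 norm over all parameter gradients that have been populated."""
    return float(torch.sqrt(sum(p.grad.norm() ** 2
                                for p in model.parameters()
                                if p.grad is not None)))
```

These are handy to log during training: a growing gnorm/pnorm ratio often signals an unstable learning rate.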
- chemprop.nn_utils.get_activation_function(activation: str) Module [source]
Gets an activation function module given the name of the activation.
Supports:
ReLU
LeakyReLU
PReLU
tanh
SELU
ELU
- Parameters:
activation – The name of the activation function.
- Returns:
The activation function module.
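A sketch of the lookup, assuming the supported names map directly to the corresponding torch.nn modules (constructor defaults here, e.g. the LeakyReLU slope, are assumptions):

```python
import torch.nn as nn

def get_activation_function(activation: str) -> nn.Module:
    """Return the activation module for a supported name."""
    activations = {
        'ReLU': nn.ReLU,
        'LeakyReLU': nn.LeakyReLU,
        'PReLU': nn.PReLU,
        'tanh': nn.Tanh,
        'SELU': nn.SELU,
        'ELU': nn.ELU,
    }
    try:
        return activations[activation]()
    except KeyError:
        raise ValueError(f'Activation "{activation}" not supported.')
```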
- chemprop.nn_utils.index_select_ND(source: Tensor, index: Tensor) Tensor [source]
Selects the message features from source corresponding to the atom or bond indices in index.
- Parameters:
source – A tensor of shape (num_bonds, hidden_size) containing message features.
index – A tensor of shape (num_atoms/num_bonds, max_num_bonds) containing the atom or bond indices to select from source.
- Returns:
A tensor of shape (num_atoms/num_bonds, max_num_bonds, hidden_size) containing the message features corresponding to the atoms/bonds specified in index.
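The selection can be sketched as a flat index_select followed by a reshape, which is the common way to implement this gather pattern (not necessarily chemprop's exact code):

```python
import torch

def index_select_ND(source: torch.Tensor, index: torch.Tensor) -> torch.Tensor:
    """Gather rows of `source` for every entry of `index`, keeping index's shape."""
    index_size = index.size()             # (num_atoms/num_bonds, max_num_bonds)
    suffix_dim = source.size()[1:]        # (hidden_size,)
    final_size = index_size + suffix_dim  # shape of the result
    # Flatten the index, gather rows, then restore the leading dimensions.
    target = source.index_select(dim=0, index=index.view(-1))
    return target.view(final_size)
```

In message passing this turns a per-bond feature matrix into, for each atom, the padded stack of its incoming bond messages.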
- chemprop.nn_utils.initialize_weights(model: Module) None [source]
Initializes the weights of a model in place.
- Parameters:
model – A PyTorch model.
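A minimal sketch, assuming Xavier-normal initialization for weight matrices and zeros for biases (the exact initialization scheme is an assumption):

```python
import torch.nn as nn

def initialize_weights(model: nn.Module) -> None:
    """Re-initialize all parameters of a model in place."""
    for param in model.parameters():
        if param.dim() == 1:
            nn.init.constant_(param, 0)   # 1-D parameters (biases) -> zero
        else:
            nn.init.xavier_normal_(param)  # weight matrices -> Xavier normal
```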