chemprop.nn#
Subpackages#
Submodules#
Package Contents#
Classes#
Aggregation, MeanAggregation, SumAggregation, NormAggregation, AttentiveAggregation, LossFunction, MSELoss, BoundedMSELoss, MVELoss, EvidentialLoss, BCELoss, CrossEntropyLoss, MccMixin, BinaryMCCLoss, MulticlassMCCLoss, DirichletMixin, BinaryDirichletLoss, MulticlassDirichletLoss, SIDLoss, WassersteinLoss, Metric, MAEMetric, MSEMetric, RMSEMetric, BoundedMAEMetric, BoundedMSEMetric, BoundedRMSEMetric, R2Metric, BinaryAUROCMetric, BinaryAUPRCMetric, BinaryAccuracyMetric, BinaryF1Metric, BCEMetric, CrossEntropyMetric, BinaryMCCMetric, MulticlassMCCMetric, SIDMetric, WassersteinMetric, MessagePassing, AtomMessagePassing, BondMessagePassing, MulticomponentMessagePassing, Predictor, RegressionFFN, MveFFN, EvidentialFFN, BinaryClassificationFFNBase, BinaryClassificationFFN, BinaryDirichletFFN, MulticlassClassificationFFN, MulticlassDirichletFFN, SpectralFFN, Activation, UnscaleTransform
Attributes#
AggregationRegistry, LossFunctionRegistry, MetricRegistry, PredictorRegistry
- class chemprop.nn.Aggregation(dim=0, *args, **kwargs)[source]#
Bases: torch.nn.Module, chemprop.nn.hparams.HasHParams
An Aggregation aggregates the node-level representations of a batch of graphs into a batch of graph-level representations.
Note: this class is abstract and cannot be instantiated.
See also: MeanAggregation, SumAggregation, NormAggregation
- Parameters:
dim (int)
- abstract forward(H, batch)[source]#
Aggregate the node-level representations of a batch of graphs into their respective graph-level representations.
NOTE: it is possible for a graph to have 0 nodes. In this case, the representation will be a zero vector of length d in the final output.
- Parameters:
H (Tensor) – a tensor of shape V x d containing the batched node-level representations of b graphs
batch (Tensor) – a tensor of shape V containing the index of the graph a given vertex corresponds to
- Returns:
a tensor of shape b x d containing the graph-level representations
- Return type:
Tensor
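To make the contract concrete, here is a minimal, self-contained sketch of a sum-style aggregation over a batch index using plain torch ops (an illustration of the interface, not the chemprop implementation):

    import torch

    H = torch.randn(5, 3)                    # V x d node-level representations
    batch = torch.tensor([0, 0, 1, 1, 1])    # graph index of each of the V nodes

    b, d = int(batch.max()) + 1, H.shape[1]
    # scatter-sum the rows of H that share a graph index -> b x d output
    out = torch.zeros(b, d).index_add_(0, batch, H)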
- chemprop.nn.AggregationRegistry#
- class chemprop.nn.MeanAggregation(dim=0, *args, **kwargs)[source]#
Bases:
Aggregation
Average the node-level representations to produce the graph-level representation:
\[\mathbf h = \frac{1}{|V|} \sum_{v \in V} \mathbf h_v\]
- Parameters:
dim (int)
- forward(H, batch)[source]#
Aggregate the node-level representations of a batch of graphs into their respective graph-level representations.
NOTE: it is possible for a graph to have 0 nodes. In this case, the representation will be a zero vector of length d in the final output.
- Parameters:
H (Tensor) – a tensor of shape V x d containing the batched node-level representations of b graphs
batch (Tensor) – a tensor of shape V containing the index of the graph a given vertex corresponds to
- Returns:
a tensor of shape b x d containing the graph-level representations
- Return type:
Tensor
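A hedged sketch of the mean variant: divide each graph's summed representation by its node count. The clamp guards the 0-node case noted above (assumed illustration, not the library's code):

    import torch

    H = torch.randn(5, 3)
    batch = torch.tensor([0, 0, 1, 1, 1])
    b = int(batch.max()) + 1

    sums = torch.zeros(b, H.shape[1]).index_add_(0, batch, H)
    counts = torch.zeros(b).index_add_(0, batch, torch.ones(len(batch)))
    # clamp avoids division by zero for graphs with no nodes
    means = sums / counts.clamp(min=1).unsqueeze(1)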
- class chemprop.nn.SumAggregation(dim=0, *args, **kwargs)[source]#
Bases:
Aggregation
Sum the node-level representations to produce the graph-level representation:
\[\mathbf h = \sum_{v \in V} \mathbf h_v\]
- Parameters:
dim (int)
- forward(H, batch)[source]#
Aggregate the node-level representations of a batch of graphs into their respective graph-level representations.
NOTE: it is possible for a graph to have 0 nodes. In this case, the representation will be a zero vector of length d in the final output.
- Parameters:
H (Tensor) – a tensor of shape V x d containing the batched node-level representations of b graphs
batch (Tensor) – a tensor of shape V containing the index of the graph a given vertex corresponds to
- Returns:
a tensor of shape b x d containing the graph-level representations
- Return type:
Tensor
- class chemprop.nn.NormAggregation(dim=0, *args, norm=100.0, **kwargs)[source]#
Bases:
SumAggregation
Sum the node-level representations and divide by a normalization constant c to produce the graph-level representation:
\[\mathbf h = \frac{1}{c} \sum_{v \in V} \mathbf h_v\]
- Parameters:
dim (int)
norm (float)
- forward(H, batch)[source]#
Aggregate the node-level representations of a batch of graphs into their respective graph-level representations.
NOTE: it is possible for a graph to have 0 nodes. In this case, the representation will be a zero vector of length d in the final output.
- Parameters:
H (Tensor) – a tensor of shape V x d containing the batched node-level representations of b graphs
batch (Tensor) – a tensor of shape V containing the index of the graph a given vertex corresponds to
- Returns:
a tensor of shape b x d containing the graph-level representations
- Return type:
Tensor
- class chemprop.nn.AttentiveAggregation(dim=0, *args, output_size, **kwargs)[source]#
Bases:
Aggregation
An AttentiveAggregation aggregates the node-level representations of each graph into a graph-level representation via a learned, softmax-normalized attention weighting over the nodes.
See also: MeanAggregation, SumAggregation, NormAggregation
- Parameters:
dim (int)
output_size (int)
- forward(H, batch)[source]#
Aggregate the node-level representations of a batch of graphs into their respective graph-level representations.
NOTE: it is possible for a graph to have 0 nodes. In this case, the representation will be a zero vector of length d in the final output.
- Parameters:
H (Tensor) – a tensor of shape V x d containing the batched node-level representations of b graphs
batch (Tensor) – a tensor of shape V containing the index of the graph a given vertex corresponds to
- Returns:
a tensor of shape b x d containing the graph-level representations
- Return type:
Tensor
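A rough, self-contained sketch of attentive aggregation: a learned score per node, softmax-normalized within each graph, then a weighted sum. The explicit loop over graphs and the single-output linear scorer are simplifying assumptions, not chemprop's implementation:

    import torch

    H = torch.randn(5, 3)
    batch = torch.tensor([0, 0, 1, 1, 1])
    scorer = torch.nn.Linear(3, 1)           # hypothetical attention scorer

    scores = scorer(H).squeeze(-1)           # one raw score per node
    out = []
    for g in range(int(batch.max()) + 1):
        m = batch == g
        alpha = torch.softmax(scores[m], dim=0)        # weights within graph g
        out.append((alpha.unsqueeze(1) * H[m]).sum(0))
    out = torch.stack(out)                   # b x d graph-level representations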
- class chemprop.nn.LossFunction(task_weights=1.0)[source]#
Bases:
torch.nn.Module
Base class for all neural network modules.
Your models should also subclass this class.
Modules can also contain other Modules, allowing them to be nested in a tree structure. You can assign the submodules as regular attributes:

    import torch.nn as nn
    import torch.nn.functional as F

    class Model(nn.Module):
        def __init__(self):
            super().__init__()
            self.conv1 = nn.Conv2d(1, 20, 5)
            self.conv2 = nn.Conv2d(20, 20, 5)

        def forward(self, x):
            x = F.relu(self.conv1(x))
            return F.relu(self.conv2(x))

Submodules assigned in this way will be registered, and their parameters will be converted too when you call to(), etc.
Note: as per the example above, an __init__() call to the parent class must be made before assignment on the child.
- Variables:
training (bool) – whether this module is in training or evaluation mode
- Parameters:
task_weights (numpy.typing.ArrayLike)
- forward(preds, targets, mask, weights, lt_mask, gt_mask)[source]#
Calculate the mean loss function value given predicted and target values
- Parameters:
preds (Tensor) – a tensor of shape b x (t * s) (regression), b x t (binary classification), or b x t x c (multiclass classification) containing the predictions, where b is the batch size, t is the number of tasks to predict, s is the number of targets to predict for each task, and c is the number of classes.
targets (Tensor) – a float tensor of shape b x t containing the target values
mask (Tensor) – a boolean tensor of shape b x t indicating whether the given prediction should be included in the loss calculation
weights (Tensor) – a tensor of shape b or b x 1 containing the per-sample weight
lt_mask (Tensor)
gt_mask (Tensor)
- Returns:
a scalar containing the fully reduced loss
- Return type:
Tensor
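A usage sketch of the call signature documented above, here with MSELoss on a batch of b = 4 samples and t = 2 tasks; the all-true mask, unit weights, and all-false bound masks are placeholder values:

    import torch
    from chemprop.nn import MSELoss

    preds = torch.randn(4, 2)
    targets = torch.randn(4, 2)
    mask = torch.ones(4, 2, dtype=torch.bool)
    weights = torch.ones(4, 1)
    lt_mask = torch.zeros(4, 2, dtype=torch.bool)
    gt_mask = torch.zeros(4, 2, dtype=torch.bool)

    criterion = MSELoss(task_weights=[1.0, 1.0])
    loss = criterion(preds, targets, mask, weights, lt_mask, gt_mask)  # scalar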
- chemprop.nn.LossFunctionRegistry#
- class chemprop.nn.MSELoss(task_weights=1.0)[source]#
Bases:
LossFunction
Compute the mean squared error between predictions and targets.
- Parameters:
task_weights (numpy.typing.ArrayLike)
- class chemprop.nn.BoundedMSELoss(task_weights=1.0)[source]#
Bases:
MSELoss
A variant of MSELoss for bounded targets: a prediction that already satisfies an inequality target (as indicated by lt_mask or gt_mask) incurs no loss.
- Parameters:
task_weights (numpy.typing.ArrayLike)
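A hedged sketch of the bounded-error idea: predictions that already satisfy an inequality target are clamped to the target before the squared error is taken, so they contribute no loss (illustration only; the library's handling may differ in detail):

    import torch

    preds = torch.tensor([0.5, 2.0, 3.0])
    targets = torch.tensor([1.0, 1.0, 4.0])
    lt_mask = torch.tensor([True, False, False])   # first target means "< 1.0"
    gt_mask = torch.tensor([False, False, True])   # last target means "> 4.0"

    clamped = torch.where(lt_mask & (preds < targets), targets, preds)
    clamped = torch.where(gt_mask & (clamped > targets), targets, clamped)
    loss = ((clamped - targets) ** 2).mean()       # only the middle entry counts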
- class chemprop.nn.MVELoss(task_weights=1.0)[source]#
Bases:
LossFunction
Calculate the loss using Eq. 9 from [nix1994]
References
[nix1994] Nix, D. A.; Weigend, A. S. “Estimating the mean and variance of the target probability distribution.” Proceedings of 1994 IEEE International Conference on Neural Networks, 1994. https://doi.org/10.1109/icnn.1994.374138
- Parameters:
task_weights (numpy.typing.ArrayLike)
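For orientation, Eq. 9 of [nix1994] is the negative log-likelihood of the targets under a Gaussian with predicted mean and variance; a minimal sketch of that quantity (assumed illustration, not the library's exact reduction):

    import torch

    mean = torch.randn(4, 2)
    var = torch.rand(4, 2) + 0.1     # predicted variances, kept positive
    targets = torch.randn(4, 2)

    nll = 0.5 * torch.log(2 * torch.pi * var) + (targets - mean) ** 2 / (2 * var)
    loss = nll.mean()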
- class chemprop.nn.EvidentialLoss(task_weights=None, v_kl=0.2, eps=1e-08)[source]#
Bases:
LossFunction
Calculate the loss using Eqs. 8, 9, and 10 from [amini2020]
References
[amini2020] Amini, A.; Schwarting, W.; Soleimany, A.; Rus, D. “Deep Evidential Regression.” Advances in Neural Information Processing Systems, 2020, Vol. 33. https://proceedings.neurips.cc/paper_files/paper/2020/file/aab085461de182608ee9f607f3f7d18f-Paper.pdf
[soleimany2021] Soleimany, A. P.; Amini, A.; Goldman, S.; Rus, D.; Bhatia, S. N.; Coley, C. W. “Evidential Deep Learning for Guided Molecular Property Prediction and Discovery.” ACS Cent. Sci. 2021, 7, 8, 1356-1367. https://doi.org/10.1021/acscentsci.1c00546
- Parameters:
task_weights (torch.Tensor | None)
v_kl (float)
eps (float)
- class chemprop.nn.BCELoss(task_weights=1.0)[source]#
Bases:
LossFunction
Compute the binary cross-entropy loss for binary classification tasks.
- Parameters:
task_weights (numpy.typing.ArrayLike)
- class chemprop.nn.CrossEntropyLoss(task_weights=1.0)[source]#
Bases:
LossFunction
Compute the cross-entropy loss for multiclass classification tasks.
- Parameters:
task_weights (numpy.typing.ArrayLike)
- class chemprop.nn.MccMixin[source]#
Calculate a soft Matthews correlation coefficient ([mccWiki]) loss for multiclass classification based on the implementation of [mccSklearn]
- class chemprop.nn.BinaryMCCLoss(task_weights=1.0)[source]#
Bases:
LossFunction, MccMixin
Compute a soft (differentiable) Matthews correlation coefficient loss for binary classification.
- Parameters:
task_weights (numpy.typing.ArrayLike)
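A rough sketch of how a "soft" MCC can be formed from probabilities rather than hard labels, using expected confusion-matrix counts (illustrative; not necessarily the library's exact formulation):

    import torch

    p = torch.rand(8)                        # predicted positive-class probabilities
    y = torch.randint(0, 2, (8,)).float()    # binary targets

    TP, FP = (p * y).sum(), (p * (1 - y)).sum()
    TN, FN = ((1 - p) * (1 - y)).sum(), ((1 - p) * y).sum()
    denom = ((TP + FP) * (TP + FN) * (TN + FP) * (TN + FN)).sqrt()
    loss = 1 - (TP * TN - FP * FN) / denom   # minimized when MCC is maximal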
- class chemprop.nn.MulticlassMCCLoss(task_weights=1.0)[source]#
Bases:
LossFunction, MccMixin
Compute a soft (differentiable) Matthews correlation coefficient loss for multiclass classification.
- Parameters:
task_weights (numpy.typing.ArrayLike)
- class chemprop.nn.DirichletMixin(task_weights=None, v_kl=0.2)[source]#
Uses the loss function from [sensoy2018] based on the implementation at [sensoyGithub]
References
[sensoy2018] Sensoy, M.; Kaplan, L.; Kandemir, M. “Evidential deep learning to quantify classification uncertainty.” NeurIPS, 2018, 31. https://doi.org/10.48550/arXiv.1806.01768
- Parameters:
task_weights (torch.Tensor | None)
v_kl (float)
- class chemprop.nn.BinaryDirichletLoss(task_weights=None, v_kl=0.2)[source]#
Bases:
DirichletMixin, LossFunction
Uses the loss function from [sensoy2018] based on the implementation at [sensoyGithub]
References
[sensoy2018] Sensoy, M.; Kaplan, L.; Kandemir, M. “Evidential deep learning to quantify classification uncertainty.” NeurIPS, 2018, 31. https://doi.org/10.48550/arXiv.1806.01768
- Parameters:
task_weights (torch.Tensor | None)
v_kl (float)
- class chemprop.nn.MulticlassDirichletLoss(task_weights=None, v_kl=0.2)[source]#
Bases:
DirichletMixin, LossFunction
Uses the loss function from [sensoy2018] based on the implementation at [sensoyGithub]
References
[sensoy2018] Sensoy, M.; Kaplan, L.; Kandemir, M. “Evidential deep learning to quantify classification uncertainty.” NeurIPS, 2018, 31. https://doi.org/10.48550/arXiv.1806.01768
- Parameters:
task_weights (torch.Tensor | None)
v_kl (float)
- class chemprop.nn.SIDLoss(task_weights=None, threshold=None)[source]#
Bases:
LossFunction
Compute the spectral information divergence (SID) between predicted and target spectra.
- Parameters:
task_weights (torch.Tensor | None)
threshold (float | None)
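A minimal sketch of the spectral information divergence between two normalized spectra, the quantity this loss is built around (assumed illustration):

    import torch

    pred = torch.rand(50)
    pred = pred / pred.sum()          # normalize both spectra to sum to 1
    target = torch.rand(50)
    target = target / target.sum()

    sid = (pred * (pred / target).log() + target * (target / pred).log()).sum()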
- class chemprop.nn.WassersteinLoss(task_weights=None, threshold=None)[source]#
Bases:
LossFunction
Compute the earth mover's distance (1-D Wasserstein distance) between predicted and target spectra.
- Parameters:
task_weights (torch.Tensor | None)
threshold (float | None)
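For 1-D spectra, the earth mover's distance reduces to the summed absolute difference of cumulative sums; a hedged sketch:

    import torch

    pred = torch.rand(50)
    pred = pred / pred.sum()
    target = torch.rand(50)
    target = target / target.sum()

    emd = (pred.cumsum(0) - target.cumsum(0)).abs().sum()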
- class chemprop.nn.Metric(task_weights=1.0)[source]#
Bases:
chemprop.nn.loss.LossFunction
- Parameters:
task_weights (ArrayLike = 1.0) –
Important
Ignored. Maintained for compatibility with
LossFunction
- minimize: bool = True#
- forward(preds, targets, mask, weights, lt_mask, gt_mask)[source]#
Calculate the mean loss function value given predicted and target values
- Parameters:
preds (Tensor) – a tensor of shape b x (t * s) (regression), b x t (binary classification), or b x t x c (multiclass classification) containing the predictions, where b is the batch size, t is the number of tasks to predict, s is the number of targets to predict for each task, and c is the number of classes.
targets (Tensor) – a float tensor of shape b x t containing the target values
mask (Tensor) – a boolean tensor of shape b x t indicating whether the given prediction should be included in the loss calculation
weights (Tensor) – a tensor of shape b or b x 1 containing the per-sample weight
lt_mask (Tensor)
gt_mask (Tensor)
- Returns:
a scalar containing the fully reduced loss
- Return type:
Tensor
- chemprop.nn.MetricRegistry#
- class chemprop.nn.MAEMetric(task_weights=1.0)[source]#
Bases:
Metric
- Parameters:
task_weights (ArrayLike = 1.0) –
Important
Ignored. Maintained for compatibility with
LossFunction
- class chemprop.nn.MSEMetric(task_weights=1.0)[source]#
Bases:
chemprop.nn.loss.MSELoss, Metric
Mean squared error evaluated as a metric.
- Parameters:
task_weights (numpy.typing.ArrayLike)
- class chemprop.nn.RMSEMetric(task_weights=1.0)[source]#
Bases:
MSEMetric
Root-mean-square error evaluated as a metric.
- Parameters:
task_weights (numpy.typing.ArrayLike)
- forward(preds, targets, mask, weights, lt_mask, gt_mask)[source]#
Calculate the mean loss function value given predicted and target values
- Parameters:
preds (Tensor) – a tensor of shape b x (t * s) (regression), b x t (binary classification), or b x t x c (multiclass classification) containing the predictions, where b is the batch size, t is the number of tasks to predict, s is the number of targets to predict for each task, and c is the number of classes.
targets (Tensor) – a float tensor of shape b x t containing the target values
mask (Tensor) – a boolean tensor of shape b x t indicating whether the given prediction should be included in the loss calculation
weights (Tensor) – a tensor of shape b or b x 1 containing the per-sample weight
lt_mask (Tensor)
gt_mask (Tensor)
- Returns:
a scalar containing the fully reduced loss
- Return type:
Tensor
- class chemprop.nn.BoundedMAEMetric(task_weights=1.0)[source]#
Bases:
MAEMetric, BoundedMixin
- Parameters:
task_weights (ArrayLike = 1.0) –
Important
Ignored. Maintained for compatibility with
LossFunction
- class chemprop.nn.BoundedMSEMetric(task_weights=1.0)[source]#
Bases:
MSEMetric, BoundedMixin
Bounded mean squared error evaluated as a metric.
- Parameters:
task_weights (numpy.typing.ArrayLike)
- class chemprop.nn.BoundedRMSEMetric(task_weights=1.0)[source]#
Bases:
RMSEMetric, BoundedMixin
Bounded root-mean-square error evaluated as a metric.
- Parameters:
task_weights (numpy.typing.ArrayLike)
- class chemprop.nn.R2Metric(task_weights=1.0)[source]#
Bases:
Metric
- Parameters:
task_weights (ArrayLike = 1.0) –
Important
Ignored. Maintained for compatibility with
LossFunction
- minimize = False#
- forward(preds, targets, mask, *args, **kwargs)[source]#
Calculate the mean loss function value given predicted and target values
- Parameters:
preds (Tensor) – a tensor of shape b x (t * s) (regression), b x t (binary classification), or b x t x c (multiclass classification) containing the predictions, where b is the batch size, t is the number of tasks to predict, s is the number of targets to predict for each task, and c is the number of classes.
targets (Tensor) – a float tensor of shape b x t containing the target values
mask (Tensor) – a boolean tensor of shape b x t indicating whether the given prediction should be included in the loss calculation
weights (Tensor) – a tensor of shape b or b x 1 containing the per-sample weight
lt_mask (Tensor)
gt_mask (Tensor)
- Returns:
a scalar containing the fully reduced loss
- Return type:
Tensor
- class chemprop.nn.BinaryAUROCMetric(task_weights=1.0)[source]#
Bases:
Metric
- Parameters:
task_weights (ArrayLike = 1.0) –
Important
Ignored. Maintained for compatibility with
LossFunction
- minimize = False#
- forward(preds, targets, mask, *args, **kwargs)[source]#
Calculate the mean loss function value given predicted and target values
- Parameters:
preds (Tensor) – a tensor of shape b x (t * s) (regression), b x t (binary classification), or b x t x c (multiclass classification) containing the predictions, where b is the batch size, t is the number of tasks to predict, s is the number of targets to predict for each task, and c is the number of classes.
targets (Tensor) – a float tensor of shape b x t containing the target values
mask (Tensor) – a boolean tensor of shape b x t indicating whether the given prediction should be included in the loss calculation
weights (Tensor) – a tensor of shape b or b x 1 containing the per-sample weight
lt_mask (Tensor)
gt_mask (Tensor)
- Returns:
a scalar containing the fully reduced loss
- Return type:
Tensor
- class chemprop.nn.BinaryAUPRCMetric(task_weights=1.0)[source]#
Bases:
Metric
- Parameters:
task_weights (ArrayLike = 1.0) –
Important
Ignored. Maintained for compatibility with
LossFunction
- minimize = False#
- forward(preds, targets, *args, **kwargs)[source]#
Calculate the mean loss function value given predicted and target values
- Parameters:
preds (Tensor) – a tensor of shape b x (t * s) (regression), b x t (binary classification), or b x t x c (multiclass classification) containing the predictions, where b is the batch size, t is the number of tasks to predict, s is the number of targets to predict for each task, and c is the number of classes.
targets (Tensor) – a float tensor of shape b x t containing the target values
mask (Tensor) – a boolean tensor of shape b x t indicating whether the given prediction should be included in the loss calculation
weights (Tensor) – a tensor of shape b or b x 1 containing the per-sample weight
lt_mask (Tensor)
gt_mask (Tensor)
- Returns:
a scalar containing the fully reduced loss
- Return type:
Tensor
- class chemprop.nn.BinaryAccuracyMetric(task_weights=1.0)[source]#
Bases:
Metric, ThresholdedMixin
- Parameters:
task_weights (ArrayLike = 1.0) –
Important
Ignored. Maintained for compatibility with
LossFunction
- minimize = False#
- forward(preds, targets, mask, *args, **kwargs)[source]#
Calculate the mean loss function value given predicted and target values
- Parameters:
preds (Tensor) – a tensor of shape b x (t * s) (regression), b x t (binary classification), or b x t x c (multiclass classification) containing the predictions, where b is the batch size, t is the number of tasks to predict, s is the number of targets to predict for each task, and c is the number of classes.
targets (Tensor) – a float tensor of shape b x t containing the target values
mask (Tensor) – a boolean tensor of shape b x t indicating whether the given prediction should be included in the loss calculation
weights (Tensor) – a tensor of shape b or b x 1 containing the per-sample weight
lt_mask (Tensor)
gt_mask (Tensor)
- Returns:
a scalar containing the fully reduced loss
- Return type:
Tensor
- class chemprop.nn.BinaryF1Metric(task_weights=1.0)[source]#
Bases:
Metric, ThresholdedMixin
- Parameters:
task_weights (ArrayLike = 1.0) –
Important
Ignored. Maintained for compatibility with
LossFunction
- minimize = False#
- forward(preds, targets, mask, *args, **kwargs)[source]#
Calculate the mean loss function value given predicted and target values
- Parameters:
preds (Tensor) – a tensor of shape b x (t * s) (regression), b x t (binary classification), or b x t x c (multiclass classification) containing the predictions, where b is the batch size, t is the number of tasks to predict, s is the number of targets to predict for each task, and c is the number of classes.
targets (Tensor) – a float tensor of shape b x t containing the target values
mask (Tensor) – a boolean tensor of shape b x t indicating whether the given prediction should be included in the loss calculation
weights (Tensor) – a tensor of shape b or b x 1 containing the per-sample weight
lt_mask (Tensor)
gt_mask (Tensor)
- Returns:
a scalar containing the fully reduced loss
- Return type:
Tensor
- class chemprop.nn.BCEMetric(task_weights=1.0)[source]#
Bases:
chemprop.nn.loss.BCELoss, Metric
Binary cross-entropy evaluated as a metric.
- Parameters:
task_weights (numpy.typing.ArrayLike)
- class chemprop.nn.CrossEntropyMetric(task_weights=1.0)[source]#
Bases:
chemprop.nn.loss.CrossEntropyLoss, Metric
Cross-entropy evaluated as a metric.
- Parameters:
task_weights (numpy.typing.ArrayLike)
- class chemprop.nn.BinaryMCCMetric(task_weights=1.0)[source]#
Bases:
chemprop.nn.loss.BinaryMCCLoss, Metric
Soft binary Matthews correlation coefficient evaluated as a metric.
- Parameters:
task_weights (numpy.typing.ArrayLike)
- class chemprop.nn.MulticlassMCCMetric(task_weights=1.0)[source]#
Bases:
chemprop.nn.loss.MulticlassMCCLoss, Metric
Soft multiclass Matthews correlation coefficient evaluated as a metric.
- Parameters:
task_weights (numpy.typing.ArrayLike)
- class chemprop.nn.SIDMetric(task_weights=None, threshold=None)[source]#
Bases:
chemprop.nn.loss.SIDLoss, Metric
Spectral information divergence evaluated as a metric.
- Parameters:
task_weights (torch.Tensor | None)
threshold (float | None)
- class chemprop.nn.WassersteinMetric(task_weights=None, threshold=None)[source]#
Bases:
chemprop.nn.loss.WassersteinLoss, Metric
Earth mover's distance evaluated as a metric.
- Parameters:
task_weights (torch.Tensor | None)
threshold (float | None)
- class chemprop.nn.MessagePassing(*args, **kwargs)[source]#
Bases:
torch.nn.Module, chemprop.nn.hparams.HasHParams
A MessagePassing module encodes a batch of molecular graphs using message passing to learn vertex-level hidden representations.
- input_dim: int#
- output_dim: int#
- abstract forward(bmg, V_d=None)[source]#
Encode a batch of molecular graphs.
- Parameters:
bmg (BatchMolGraph) – the batch of MolGraphs to encode
V_d (Tensor | None, default=None) – an optional tensor of shape V x d_vd containing additional descriptors for each atom in the batch. These will be concatenated to the learned atomic descriptors and transformed before the readout phase.
- Returns:
a tensor of shape V x d_h or V x (d_h + d_vd) containing the hidden representation of each vertex in the batch of graphs. The feature dimension depends on whether additional atom descriptors were provided
- Return type:
Tensor
- class chemprop.nn.AtomMessagePassing(d_v=DEFAULT_ATOM_FDIM, d_e=DEFAULT_BOND_FDIM, d_h=DEFAULT_HIDDEN_DIM, bias=False, depth=3, dropout=0.0, activation=Activation.RELU, undirected=False, d_vd=None, V_d_transform=None, graph_transform=None)[source]#
Bases:
_MessagePassingBase
An AtomMessagePassing encodes a batch of molecular graphs by passing messages along atoms. It implements the following operation:
\[\begin{split}h_v^{(0)} &= \tau \left( \mathbf{W}_i(x_v) \right) \\ m_v^{(t)} &= \sum_{u \in \mathcal{N}(v)} h_u^{(t-1)} \mathbin\Vert e_{uv} \\ h_v^{(t)} &= \tau\left(h_v^{(0)} + \mathbf{W}_h m_v^{(t-1)}\right) \\ m_v^{(T)} &= \sum_{w \in \mathcal{N}(v)} h_w^{(T-1)} \\ h_v^{(T)} &= \tau \left (\mathbf{W}_o \left( x_v \mathbin\Vert m_{v}^{(T)} \right) \right),\end{split}\]
where \(\tau\) is the activation function; \(\mathbf{W}_i\), \(\mathbf{W}_h\), and \(\mathbf{W}_o\) are learned weight matrices; \(e_{vw}\) is the feature vector of the bond between atoms \(v\) and \(w\); \(x_v\) is the feature vector of atom \(v\); \(h_v^{(t)}\) is the hidden representation of atom \(v\) at iteration \(t\); \(m_v^{(t)}\) is the message received by atom \(v\) at iteration \(t\); and \(t \in \{1, \dots, T\}\) is the number of message passing iterations.
- Parameters:
d_v (int)
d_e (int)
d_h (int)
bias (bool)
depth (int)
dropout (float)
activation (str | chemprop.nn.utils.Activation)
undirected (bool)
d_vd (int | None)
V_d_transform (chemprop.nn.transforms.ScaleTransform | None)
graph_transform (chemprop.nn.transforms.GraphTransform | None)
- setup(d_v=DEFAULT_ATOM_FDIM, d_e=DEFAULT_BOND_FDIM, d_h=DEFAULT_HIDDEN_DIM, d_vd=None, bias=False)[source]#
setup the weight matrices used in the message passing update functions
- Parameters:
d_v (int) – the vertex feature dimension
d_e (int) – the edge feature dimension
d_h (int, default=300) – the hidden dimension during message passing
d_vd (int | None, default=None) – the dimension of additional vertex descriptors that will be concatenated to the hidden features before readout, if any
bias (bool, default=False) – whether to add a learned bias to the matrices
- Returns:
W_i, W_h, W_o, W_d – the input, hidden, output, and descriptor weight matrices, respectively, used in the message passing update functions. The descriptor weight matrix is None if no vertex dimension is supplied
- Return type:
tuple[nn.Module, nn.Module, nn.Module, nn.Module | None]
- initialize(bmg)[source]#
Initialize the message passing scheme by calculating the initial matrix of hidden features.
- Parameters:
bmg (BatchMolGraph)
- Return type:
torch.Tensor
- class chemprop.nn.BondMessagePassing(d_v=DEFAULT_ATOM_FDIM, d_e=DEFAULT_BOND_FDIM, d_h=DEFAULT_HIDDEN_DIM, bias=False, depth=3, dropout=0.0, activation=Activation.RELU, undirected=False, d_vd=None, V_d_transform=None, graph_transform=None)[source]#
Bases:
_MessagePassingBase
A BondMessagePassing encodes a batch of molecular graphs by passing messages along directed bonds. It implements the following operation:
\[\begin{split}h_{vw}^{(0)} &= \tau \left( \mathbf W_i(e_{vw}) \right) \\ m_{vw}^{(t)} &= \sum_{u \in \mathcal N(v)\setminus w} h_{uv}^{(t-1)} \\ h_{vw}^{(t)} &= \tau \left(h_{vw}^{(0)} + \mathbf W_h m_{vw}^{(t-1)} \right) \\ m_v^{(T)} &= \sum_{w \in \mathcal N(v)} h_{wv}^{(T-1)} \\ h_v^{(T)} &= \tau \left (\mathbf W_o \left( x_v \mathbin\Vert m_{v}^{(T)} \right) \right),\end{split}\]
where \(\tau\) is the activation function; \(\mathbf W_i\), \(\mathbf W_h\), and \(\mathbf W_o\) are learned weight matrices; \(e_{vw}\) is the feature vector of the bond between atoms \(v\) and \(w\); \(x_v\) is the feature vector of atom \(v\); \(h_{vw}^{(t)}\) is the hidden representation of the bond \(v \rightarrow w\) at iteration \(t\); \(m_{vw}^{(t)}\) is the message received by the bond \(v \to w\) at iteration \(t\); and \(t \in \{1, \dots, T-1\}\) is the number of message passing iterations.
- Parameters:
d_v (int)
d_e (int)
d_h (int)
bias (bool)
depth (int)
dropout (float)
activation (str | chemprop.nn.utils.Activation)
undirected (bool)
d_vd (int | None)
V_d_transform (chemprop.nn.transforms.ScaleTransform | None)
graph_transform (chemprop.nn.transforms.GraphTransform | None)
- setup(d_v=DEFAULT_ATOM_FDIM, d_e=DEFAULT_BOND_FDIM, d_h=DEFAULT_HIDDEN_DIM, d_vd=None, bias=False)[source]#
setup the weight matrices used in the message passing update functions
- Parameters:
d_v (int) – the vertex feature dimension
d_e (int) – the edge feature dimension
d_h (int, default=300) – the hidden dimension during message passing
d_vd (int | None, default=None) – the dimension of additional vertex descriptors that will be concatenated to the hidden features before readout, if any
bias (bool, default=False) – whether to add a learned bias to the matrices
- Returns:
W_i, W_h, W_o, W_d – the input, hidden, output, and descriptor weight matrices, respectively, used in the message passing update functions. The descriptor weight matrix is None if no vertex dimension is supplied
- Return type:
tuple[nn.Module, nn.Module, nn.Module, nn.Module | None]
- initialize(bmg)[source]#
Initialize the message passing scheme by calculating the initial matrix of hidden features.
- Parameters:
bmg (BatchMolGraph)
- Return type:
torch.Tensor
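To isolate the \(m_{vw}\) update above: the sum of incoming bond states at atom \(v\), excluding the reverse bond, equals an atomwise scatter-sum minus the reverse-bond term. A self-contained sketch with hypothetical toy indexing for a triangle of three atoms (not the chemprop implementation):

    import torch

    n_bonds, d_h, n_atoms = 6, 16, 3
    h = torch.randn(n_bonds, d_h)            # hidden state per directed bond
    rev = torch.tensor([1, 0, 3, 2, 5, 4])   # index of each bond's reverse bond
    dst = torch.tensor([1, 0, 2, 1, 0, 2])   # destination atom of each bond

    atom_sum = torch.zeros(n_atoms, d_h).index_add_(0, dst, h)  # incoming sums
    src = dst[rev]                           # source atom of each bond
    m = atom_sum[src] - h[rev]               # exclude the reverse-bond message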
- class chemprop.nn.MulticomponentMessagePassing(blocks, n_components, shared=False)[source]#
Bases:
torch.nn.Module, chemprop.nn.hparams.HasHParams
A MulticomponentMessagePassing performs message passing on each individual input in a multicomponent input, then concatenates the representation of each input to construct a global representation.
- Parameters:
blocks (Sequence[MessagePassing]) – the individual message-passing blocks for each input
n_components (int) – the number of components in each input
shared (bool, default=False) – whether one block will be shared among all components in an input. If not, a separate block will be learned for each component.
- property output_dim: int#
- Return type:
int
- forward(bmgs, V_ds)[source]#
Encode the multicomponent inputs
- Parameters:
bmgs (Iterable[BatchMolGraph])
V_ds (Iterable[Tensor | None])
- Returns:
a list of tensors of shape V x d_i containing the respective encodings of the i-th component, where d_i is the output dimension of the i-th encoder
- Return type:
list[Tensor]
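A hedged construction sketch for a two-component input (e.g., a solute and a solvent); the commented call assumes bmgs and V_ds were built elsewhere, one entry per component:

    from chemprop.nn import BondMessagePassing, MulticomponentMessagePassing

    blocks = [BondMessagePassing(), BondMessagePassing()]
    mp = MulticomponentMessagePassing(blocks, n_components=2)
    # Hs = mp(bmgs, V_ds)   # one BatchMolGraph (and optional V_d) per component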
- class chemprop.nn.Predictor(*args, **kwargs)[source]#
Bases:
torch.nn.Module, chemprop.nn.hparams.HasHParams
A Predictor is a protocol that defines a differentiable function \(f : \mathbb{R}^d \mapsto \mathbb{R}^o\).
- input_dim: int#
the input dimension
- output_dim: int#
the output dimension
- n_tasks: int#
the number of tasks t to predict for each input
- n_targets: int#
the number of targets s to predict for each task t
- criterion: chemprop.nn.loss.LossFunction#
the loss function to use for training
- task_weights: torch.Tensor#
the weights to apply to each task when calculating the loss
- output_transform: chemprop.nn.transforms.UnscaleTransform#
the transform to apply to the output of the predictor
- abstract encode(Z, i)[source]#
Calculate the i-th hidden representation.
- Parameters:
Z (Tensor) – a tensor of shape n x d containing the input data to encode, where d is the input dimensionality.
i (int) – the stop index of the slice of the MLP used to encode the input. That is, use all layers in the MLP up to i (i.e., MLP[:i]). This can be any integer value, and the behavior of this function depends on the underlying list slicing behavior. For example:
i=0: use a 0-layer MLP (i.e., a no-op)
i=1: use only the first block
i=-1: use up to the final block
- Returns:
a tensor of shape n x h containing the i-th hidden representation, where h is the number of neurons in the i-th hidden layer.
- Return type:
Tensor
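The MLP[:i] semantics match slicing a torch nn.Sequential; a small self-contained sketch, assuming the underlying FFN is an nn.Sequential of blocks:

    import torch
    import torch.nn as nn

    mlp = nn.Sequential(nn.Linear(300, 300), nn.ReLU(), nn.Linear(300, 1))
    Z = torch.randn(8, 300)

    h0 = mlp[:0](Z)    # i = 0: empty slice, a no-op that returns Z
    h1 = mlp[:1](Z)    # i = 1: only the first block
    hm = mlp[:-1](Z)   # i = -1: everything up to the final block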
- chemprop.nn.PredictorRegistry#
- class chemprop.nn.RegressionFFN(n_tasks=1, input_dim=DEFAULT_HIDDEN_DIM, hidden_dim=300, n_layers=1, dropout=0.0, activation='relu', criterion=None, task_weights=None, threshold=None, output_transform=None)[source]#
Bases:
_FFNPredictorBase
A feed-forward network that maps the learned fingerprint to a point estimate of the target for each regression task.
- Parameters:
n_tasks (int)
input_dim (int)
hidden_dim (int)
n_layers (int)
dropout (float)
activation (str)
criterion (chemprop.nn.loss.LossFunction | None)
task_weights (torch.Tensor | None)
threshold (float | None)
output_transform (chemprop.nn.transforms.UnscaleTransform | None)
- n_targets = 1#
- class chemprop.nn.MveFFN(n_tasks=1, input_dim=DEFAULT_HIDDEN_DIM, hidden_dim=300, n_layers=1, dropout=0.0, activation='relu', criterion=None, task_weights=None, threshold=None, output_transform=None)[source]#
Bases:
RegressionFFN
A RegressionFFN that predicts both a mean and a variance for each task (mean-variance estimation), hence n_targets = 2.
- Parameters:
n_tasks (int)
input_dim (int)
hidden_dim (int)
n_layers (int)
dropout (float)
activation (str)
criterion (chemprop.nn.loss.LossFunction | None)
task_weights (torch.Tensor | None)
threshold (float | None)
output_transform (chemprop.nn.transforms.UnscaleTransform | None)
- n_targets = 2#
- class chemprop.nn.EvidentialFFN(n_tasks=1, input_dim=DEFAULT_HIDDEN_DIM, hidden_dim=300, n_layers=1, dropout=0.0, activation='relu', criterion=None, task_weights=None, threshold=None, output_transform=None)[source]#
Bases:
RegressionFFN
A RegressionFFN that predicts the four parameters of an evidential (normal-inverse-gamma) distribution for each task, hence n_targets = 4.
- Parameters:
n_tasks (int)
input_dim (int)
hidden_dim (int)
n_layers (int)
dropout (float)
activation (str)
criterion (chemprop.nn.loss.LossFunction | None)
task_weights (torch.Tensor | None)
threshold (float | None)
output_transform (chemprop.nn.transforms.UnscaleTransform | None)
- n_targets = 4#
- class chemprop.nn.BinaryClassificationFFNBase(n_tasks=1, input_dim=DEFAULT_HIDDEN_DIM, hidden_dim=300, n_layers=1, dropout=0.0, activation='relu', criterion=None, task_weights=None, threshold=None, output_transform=None)[source]#
Bases:
_FFNPredictorBase
Base class for binary classification Predictors that use an underlying feed-forward network to map the learned fingerprint to the desired output.
- Parameters:
n_tasks (int)
input_dim (int)
hidden_dim (int)
n_layers (int)
dropout (float)
activation (str)
criterion (chemprop.nn.loss.LossFunction | None)
task_weights (torch.Tensor | None)
threshold (float | None)
output_transform (chemprop.nn.transforms.UnscaleTransform | None)
- class chemprop.nn.BinaryClassificationFFN(n_tasks=1, input_dim=DEFAULT_HIDDEN_DIM, hidden_dim=300, n_layers=1, dropout=0.0, activation='relu', criterion=None, task_weights=None, threshold=None, output_transform=None)[source]#
Bases:
BinaryClassificationFFNBase
A feed-forward network that predicts the probability of the positive class for each binary classification task.
- Parameters:
n_tasks (int)
input_dim (int)
hidden_dim (int)
n_layers (int)
dropout (float)
activation (str)
criterion (chemprop.nn.loss.LossFunction | None)
task_weights (torch.Tensor | None)
threshold (float | None)
output_transform (chemprop.nn.transforms.UnscaleTransform | None)
- n_targets = 1#
- class chemprop.nn.BinaryDirichletFFN(n_tasks=1, input_dim=DEFAULT_HIDDEN_DIM, hidden_dim=300, n_layers=1, dropout=0.0, activation='relu', criterion=None, task_weights=None, threshold=None, output_transform=None)[source]#
Bases:
BinaryClassificationFFNBase
A binary classification network that parameterizes a Dirichlet distribution over the two classes to quantify prediction uncertainty, hence n_targets = 2.
- Parameters:
n_tasks (int)
input_dim (int)
hidden_dim (int)
n_layers (int)
dropout (float)
activation (str)
criterion (chemprop.nn.loss.LossFunction | None)
task_weights (torch.Tensor | None)
threshold (float | None)
output_transform (chemprop.nn.transforms.UnscaleTransform | None)
- n_targets = 2#
- class chemprop.nn.MulticlassClassificationFFN(n_classes, n_tasks=1, input_dim=DEFAULT_HIDDEN_DIM, hidden_dim=300, n_layers=1, dropout=0.0, activation='relu', criterion=None, task_weights=None, threshold=None, output_transform=None)[source]#
Bases:
_FFNPredictorBase
A feed-forward network that predicts a probability distribution over c classes for each multiclass classification task.
- Parameters:
n_classes (int)
n_tasks (int)
input_dim (int)
hidden_dim (int)
n_layers (int)
dropout (float)
activation (str)
criterion (chemprop.nn.loss.LossFunction | None)
task_weights (torch.Tensor | None)
threshold (float | None)
output_transform (chemprop.nn.transforms.UnscaleTransform | None)
- n_targets = 1#
- class chemprop.nn.MulticlassDirichletFFN(n_classes, n_tasks=1, input_dim=DEFAULT_HIDDEN_DIM, hidden_dim=300, n_layers=1, dropout=0.0, activation='relu', criterion=None, task_weights=None, threshold=None, output_transform=None)[source]#
Bases:
MulticlassClassificationFFN
A multiclass classification network that parameterizes a Dirichlet distribution over the classes to quantify prediction uncertainty.
- Parameters:
n_classes (int)
n_tasks (int)
input_dim (int)
hidden_dim (int)
n_layers (int)
dropout (float)
activation (str)
criterion (chemprop.nn.loss.LossFunction | None)
task_weights (torch.Tensor | None)
threshold (float | None)
output_transform (chemprop.nn.transforms.UnscaleTransform | None)
- class chemprop.nn.SpectralFFN(*args, spectral_activation='softplus', **kwargs)[source]#
Bases:
_FFNPredictorBase
A feed-forward network for spectral prediction tasks; a spectral activation (softplus by default) constrains the outputs to be nonnegative.
- Parameters:
spectral_activation (str | None)
- n_targets = 1#
- train_step#
- class chemprop.nn.Activation[source]#
Bases:
chemprop.utils.utils.EnumMapping
Enum where members are also (and must be) strings
- RELU#
- LEAKYRELU#
- PRELU#
- TANH#
- SELU#
- ELU#
- class chemprop.nn.UnscaleTransform(mean, scale, pad=0)[source]#
Bases:
_ScaleTransformMixin
Invert a previously applied scaling, mapping standardized model outputs back to the original target scale (i.e., x * scale + mean).
- Parameters:
mean (numpy.typing.ArrayLike)
scale (numpy.typing.ArrayLike)
pad (int)
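A hedged usage sketch: constructed from training-set target statistics, the transform maps standardized model outputs back to the original scale (the call is commented out because the exact forward behavior, e.g. any training/eval gating, is assumed here):

    import numpy as np
    import torch
    from chemprop.nn import UnscaleTransform

    mean, scale = np.array([1.5]), np.array([2.0])
    unscale = UnscaleTransform(mean, scale)
    # z = torch.randn(8, 1)   # standardized predictions
    # y = unscale(z)          # back to the original target scale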