chemprop.nn.loss#

Module Contents#

Classes#

LossFunction

Base class for all chemprop loss functions.

MSELoss

Mean squared error loss.

BoundedMSELoss

Mean squared error loss for targets that may be inequality bounds.

MVELoss

Calculate the loss using Eq. 9 from [nix1994]

EvidentialLoss

Calculate the loss using Eqs. 8, 9, and 10 from [amini2020]

BCELoss

Binary cross-entropy loss for binary classification.

CrossEntropyLoss

Cross-entropy loss for multiclass classification.

MccMixin

Calculate a soft Matthews correlation coefficient ([mccWiki]) loss for multiclass classification.

BinaryMCCLoss

Soft Matthews correlation coefficient loss for binary classification.

MulticlassMCCLoss

Soft Matthews correlation coefficient loss for multiclass classification.

DirichletMixin

Uses the loss function from [sensoy2018] based on the implementation at [sensoyGithub]

BinaryDirichletLoss

Uses the loss function from [sensoy2018] based on the implementation at [sensoyGithub]

MulticlassDirichletLoss

Uses the loss function from [sensoy2018] based on the implementation at [sensoyGithub]

SIDLoss

Spectral information divergence (SID) loss for spectral prediction.

WassersteinLoss

Wasserstein (earth mover's distance) loss for spectral prediction.

Attributes#

LossFunctionRegistry

class chemprop.nn.loss.LossFunction(task_weights=1.0)[source]#

Bases: torch.nn.Module

Base class for all chemprop loss functions. Custom loss functions should subclass this class.

LossFunction derives from torch.nn.Module, so the usual module machinery applies: modules can contain other Modules, allowing them to be nested in a tree structure, and submodules can be assigned as regular attributes:

import torch.nn as nn
import torch.nn.functional as F

class Model(nn.Module):
    def __init__(self):
        super().__init__()
        self.conv1 = nn.Conv2d(1, 20, 5)
        self.conv2 = nn.Conv2d(20, 20, 5)

    def forward(self, x):
        x = F.relu(self.conv1(x))
        return F.relu(self.conv2(x))

Submodules assigned in this way will be registered, and will have their parameters converted too when you call to(), etc.

Note

As per the example above, an __init__() call to the parent class must be made before assignment on the child.

Variables:

training (bool) – Boolean indicating whether this module is in training or evaluation mode.

Parameters:

task_weights (numpy.typing.ArrayLike)

forward(preds, targets, mask, weights, lt_mask, gt_mask)[source]#

Calculate the mean loss function value given predicted and target values.

Parameters:
  • preds (Tensor) – a tensor of shape b x (t * s) (regression), b x t (binary classification), or b x t x c (multiclass classification) containing the predictions, where b is the batch size, t is the number of tasks to predict, s is the number of targets to predict for each task, and c is the number of classes.

  • targets (Tensor) – a float tensor of shape b x t containing the target values

  • mask (Tensor) – a boolean tensor of shape b x t indicating whether the given prediction should be included in the loss calculation

  • weights (Tensor) – a tensor of shape b or b x 1 containing the per-sample weight

  • lt_mask (Tensor) – a boolean tensor of shape b x t indicating whether the given target is a less-than bound (i.e., the recorded value is only an upper bound on the true value)

  • gt_mask (Tensor) – a boolean tensor of shape b x t indicating whether the given target is a greater-than bound (i.e., the recorded value is only a lower bound on the true value)

Returns:

a scalar containing the fully reduced loss

Return type:

Tensor
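
As an illustration of the call signature above, a minimal sketch using MSELoss (documented below); the tensor values are dummies and the task weights are hypothetical:

import torch

from chemprop.nn.loss import MSELoss

loss_fn = MSELoss(task_weights=[1.0, 2.0])     # one weight per task

b, t = 8, 2                                    # batch size, number of tasks
preds = torch.randn(b, t)                      # b x t predictions
targets = torch.randn(b, t)                    # b x t target values
mask = torch.ones(b, t, dtype=torch.bool)      # include every entry in the loss
weights = torch.ones(b, 1)                     # per-sample weights
lt_mask = torch.zeros(b, t, dtype=torch.bool)  # no less-than bounds
gt_mask = torch.zeros(b, t, dtype=torch.bool)  # no greater-than bounds

loss = loss_fn(preds, targets, mask, weights, lt_mask, gt_mask)
# `loss` is a scalar tensor: the fully reduced loss described above.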

extra_repr()[source]#

Set the extra representation of the module.

To print customized extra information, you should re-implement this method in your own modules. Both single-line and multi-line strings are acceptable.

Return type:

str

chemprop.nn.loss.LossFunctionRegistry#
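
LossFunctionRegistry is a registry mapping string aliases to the LossFunction subclasses on this page. A sketch of a lookup, assuming dict-style access and an "mse" alias (neither is confirmed by this page):

from chemprop.nn.loss import LossFunctionRegistry

# Look up a loss class by its registered alias and instantiate it; the
# "mse" alias and dict-style access are assumptions, not confirmed here.
loss_cls = LossFunctionRegistry["mse"]
loss_fn = loss_cls(task_weights=1.0)
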
class chemprop.nn.loss.MSELoss(task_weights=1.0)[source]#

Bases: LossFunction

Mean squared error loss.

Computes the squared error between each prediction and its target, weighted per task by task_weights.

Parameters:

task_weights (numpy.typing.ArrayLike)

class chemprop.nn.loss.BoundedMSELoss(task_weights=1.0)[source]#

Bases: MSELoss

Mean squared error loss for targets that may be inequality bounds.

Where lt_mask or gt_mask marks a target as a less-than or greater-than bound and the prediction already satisfies that bound, the entry contributes no loss; otherwise the standard squared error applies (see the sketch below).

Parameters:

task_weights (numpy.typing.ArrayLike)
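
A minimal sketch of the bounded-MSE idea (an illustration, not chemprop's exact implementation): entries whose target is only an inequality bound incur no error once the prediction satisfies the bound.

import torch

def bounded_squared_error(preds, targets, lt_mask, gt_mask):
    # Where the target is only a "<" bound and the prediction is already
    # below it (or a ">" bound and the prediction is above it), snap the
    # prediction to the target so that entry incurs zero error.
    preds = torch.where(lt_mask & (preds < targets), targets, preds)
    preds = torch.where(gt_mask & (preds > targets), targets, preds)
    return (preds - targets) ** 2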

class chemprop.nn.loss.MVELoss(task_weights=1.0)[source]#

Bases: LossFunction

Calculate the loss using Eq. 9 from [nix1994]

References

[nix1994]

Nix, D. A.; Weigend, A. S. “Estimating the mean and variance of the target probability distribution.” Proceedings of 1994 IEEE International Conference on Neural Networks, 1994 https://doi.org/10.1109/icnn.1994.374138

Parameters:

task_weights (numpy.typing.ArrayLike)
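
Eq. 9 of [nix1994] is the negative log-likelihood of a Gaussian whose mean and variance are both predicted by the network. A sketch of that formula, assuming the network outputs a mean and a positive variance per target (illustration only):

import math

import torch

def gaussian_nll(mean, var, targets):
    # Per-element negative log-likelihood of N(mean, var) evaluated at the
    # targets; minimizing this fits both the predicted mean and variance.
    return (
        0.5 * (math.log(2 * math.pi) + torch.log(var))
        + (targets - mean) ** 2 / (2 * var)
    )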

class chemprop.nn.loss.EvidentialLoss(task_weights=None, v_kl=0.2, eps=1e-08)[source]#

Bases: LossFunction

Calculate the loss using Eqs. 8, 9, and 10 from [amini2020]

References

[amini2020]

Amini, A; Schwarting, W.; Soleimany, A.; Rus, D.; “Deep Evidential Regression” Advances in Neural Information Processing Systems;2020; Vol.33. https://proceedings.neurips.cc/paper_files/paper/2020/file/aab085461de182608ee9f607f3f7d18f-Paper.pdf

[soleimany2021]

Soleimany, A.P.; Amini, A.; Goldman, S.; Rus, D.; Bhatia, S.N.; Coley, C.W.; “Evidential Deep Learning for Guided Molecular Property Prediction and Discovery.” ACS Cent. Sci. 2021, 7, 8, 1356-1367. https://doi.org/10.1021/acscentsci.1c00546

Parameters:
  • task_weights (torch.Tensor | None)

  • v_kl (float) – the weight of the evidence regularization term

  • eps (float) – a small constant added for numerical stability

extra_repr()[source]#

Set the extra representation of the module.

To print customized extra information, you should re-implement this method in your own modules. Both single-line and multi-line strings are acceptable.

Return type:

str
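
A sketch of Eqs. 8–10 of [amini2020], which combine the negative log-likelihood of a Normal-Inverse-Gamma (NIG) distribution with an evidence regularizer weighted by v_kl. Parameter names gamma, nu, alpha, beta follow the paper; this is an illustration, not chemprop's exact code:

import math

import torch

def evidential_regression_loss(gamma, nu, alpha, beta, targets, v_kl=0.2):
    # NIG negative log-likelihood (Eq. 8 of [amini2020]).
    omega = 2 * beta * (1 + nu)
    nll = (
        0.5 * (math.log(math.pi) - torch.log(nu))
        - alpha * torch.log(omega)
        + (alpha + 0.5) * torch.log(nu * (targets - gamma) ** 2 + omega)
        + torch.lgamma(alpha)
        - torch.lgamma(alpha + 0.5)
    )
    # Evidence regularizer (Eqs. 9-10): penalize total evidence (2*nu + alpha)
    # in proportion to the prediction error.
    reg = torch.abs(targets - gamma) * (2 * nu + alpha)
    return nll + v_kl * reg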

class chemprop.nn.loss.BCELoss(task_weights=1.0)[source]#

Bases: LossFunction

Binary cross-entropy loss for binary classification tasks; predictions follow the b x t shape described in LossFunction.forward().

Parameters:

task_weights (numpy.typing.ArrayLike)

class chemprop.nn.loss.CrossEntropyLoss(task_weights=1.0)[source]#

Bases: LossFunction

Cross-entropy loss for multiclass classification; predictions follow the b x t x c shape and targets the b x t convention described in LossFunction.forward().

Parameters:

task_weights (numpy.typing.ArrayLike)
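
A minimal sketch of the multiclass call, following the b x t x c prediction shape documented in LossFunction.forward(); all values are dummies:

import torch

from chemprop.nn.loss import CrossEntropyLoss

b, t, c = 4, 2, 3                                # batch, tasks, classes
loss_fn = CrossEntropyLoss()

preds = torch.randn(b, t, c)                     # b x t x c class scores
targets = torch.randint(0, c, (b, t)).float()    # b x t class indices
mask = torch.ones(b, t, dtype=torch.bool)
weights = torch.ones(b, 1)
lt_mask = torch.zeros(b, t, dtype=torch.bool)    # unused for classification
gt_mask = torch.zeros(b, t, dtype=torch.bool)

loss = loss_fn(preds, targets, mask, weights, lt_mask, gt_mask)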

class chemprop.nn.loss.MccMixin[source]#

Calculate a soft Matthews correlation coefficient ([mccWiki]) loss for multiclass classification based on the implementation of [mccSklearn]

References

[mccWiki]

https://en.wikipedia.org/wiki/Phi_coefficient#Multiclass_case

[mccSklearn]

https://scikit-learn.org/stable/modules/generated/sklearn.metrics.matthews_corrcoef.html

__call__(preds, targets, mask, weights, *args)[source]#
Parameters:
  • preds (torch.Tensor)

  • targets (torch.Tensor)

  • mask (torch.Tensor)

  • weights (torch.Tensor)
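
A sketch of the "soft" MCC idea for the binary case: the confusion-matrix entries are computed from predicted probabilities rather than hard 0/1 labels, which keeps the loss differentiable. This illustrates the technique, not chemprop's exact implementation:

import torch

def soft_binary_mcc_loss(probs, targets, eps=1e-8):
    # Soft confusion-matrix counts from probabilities in [0, 1].
    tp = (probs * targets).sum()
    fp = (probs * (1 - targets)).sum()
    tn = ((1 - probs) * (1 - targets)).sum()
    fn = ((1 - probs) * targets).sum()
    mcc = (tp * tn - fp * fn) / torch.sqrt(
        (tp + fp) * (tp + fn) * (tn + fp) * (tn + fn) + eps
    )
    return 1 - mcc  # perfect correlation (MCC = 1) gives zero loss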

class chemprop.nn.loss.BinaryMCCLoss(task_weights=1.0)[source]#

Bases: LossFunction, MccMixin

Soft Matthews correlation coefficient loss for binary classification; the MCC computation is provided by MccMixin.

Parameters:

task_weights (numpy.typing.ArrayLike)

class chemprop.nn.loss.MulticlassMCCLoss(task_weights=1.0)[source]#

Bases: LossFunction, MccMixin

Soft Matthews correlation coefficient loss for multiclass classification; the MCC computation is provided by MccMixin.

Parameters:

task_weights (numpy.typing.ArrayLike)

class chemprop.nn.loss.DirichletMixin(task_weights=None, v_kl=0.2)[source]#

Uses the loss function from [sensoy2018] based on the implementation at [sensoyGithub]

References

[sensoy2018]

Sensoy, M.; Kaplan, L.; Kandemir, M. “Evidential deep learning to quantify classification uncertainty.” NeurIPS, 2018, 31. https://doi.org/10.48550/arXiv.1806.01768

[sensoyGithub]

https://muratsensoy.github.io/uncertainty.html

Parameters:
  • task_weights (torch.Tensor | None)

  • v_kl (float) – the weight of the KL regularization term

extra_repr()[source]#
Return type:

str
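
A sketch of the [sensoy2018] objective: the network outputs Dirichlet parameters (alphas) per class, and the loss is the expected squared error under that Dirichlet. The full loss adds a KL regularizer weighted by v_kl, omitted here for brevity (illustration only):

import torch

def dirichlet_mse_loss(alphas, targets_onehot):
    S = alphas.sum(-1, keepdim=True)            # Dirichlet strength
    p = alphas / S                              # expected class probabilities
    err = (targets_onehot - p).pow(2).sum(-1)   # squared-error term
    var = (p * (1 - p) / (S + 1)).sum(-1)       # variance term
    return err + var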

class chemprop.nn.loss.BinaryDirichletLoss(task_weights=None, v_kl=0.2)[source]#

Bases: DirichletMixin, LossFunction

Uses the loss function from [sensoy2018] based on the implementation at [sensoyGithub]

References

[sensoy2018]

Sensoy, M.; Kaplan, L.; Kandemir, M. “Evidential deep learning to quantify classification uncertainty.” NeurIPS, 2018, 31. https://doi.org/10.48550/arXiv.1806.01768

[sensoyGithub]

https://muratsensoy.github.io/uncertainty.html

Parameters:
  • task_weights (torch.Tensor | None)

  • v_kl (float)

class chemprop.nn.loss.MulticlassDirichletLoss(task_weights=None, v_kl=0.2)[source]#

Bases: DirichletMixin, LossFunction

Uses the loss function from [sensoy2018] based on the implementation at [sensoyGithub]

References

[sensoy2018]

Sensoy, M.; Kaplan, L.; Kandemir, M. “Evidential deep learning to quantify classification uncertainty.” NeurIPS, 2018, 31. https://doi.org/10.48550/arXiv.1806.01768

[sensoyGithub]

https://muratsensoy.github.io/uncertainty.html

Parameters:
  • task_weights (torch.Tensor | None)

  • v_kl (float)

class chemprop.nn.loss.SIDLoss(task_weights=None, threshold=None)[source]#

Bases: LossFunction

Spectral information divergence (SID) loss for spectral prediction tasks.

Predicted and target spectra are compared with a symmetric, KL-style divergence after each spectrum is normalized; threshold, if given, bounds the spectrum values from below before normalization (see the sketch below).

Parameters:
  • task_weights (torch.Tensor | None)

  • threshold (float | None)

extra_repr()[source]#

Set the extra representation of the module.

To print customized extra information, you should re-implement this method in your own modules. Both single-line and multi-line strings are acceptable.

Return type:

str
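
A minimal sketch of SID under the reading above: floor, normalize each spectrum to sum to one, then take a symmetric KL-style sum. The floor on both spectra is an assumption made to keep the logarithms finite; this is an illustration, not chemprop's exact code:

import torch

def sid(pred, target, threshold=1e-8):
    # Floor both spectra to avoid log(0), then normalize each to sum to 1.
    pred = pred.clamp(min=threshold)
    target = target.clamp(min=threshold)
    pred = pred / pred.sum(-1, keepdim=True)
    target = target / target.sum(-1, keepdim=True)
    # Symmetric KL-style divergence between the two normalized spectra.
    return (
        pred * torch.log(pred / target) + target * torch.log(target / pred)
    ).sum(-1)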

class chemprop.nn.loss.WassersteinLoss(task_weights=None, threshold=None)[source]#

Bases: LossFunction

Wasserstein (earth mover's distance) loss for spectral prediction tasks, comparing the cumulative distributions of the normalized predicted and target spectra (see the sketch below).

Parameters:
  • task_weights (torch.Tensor | None)

  • threshold (float | None)

extra_repr()[source]#

Set the extra representation of the module.

To print customized extra information, you should re-implement this method in your own modules. Both single-line and multi-line strings are acceptable.

Return type:

str
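
For 1-D spectra, the Wasserstein (earth mover's) distance reduces to the absolute difference between cumulative distributions. A sketch under that reading (an illustration, not chemprop's exact code):

import torch

def wasserstein_1d(pred, target):
    # Normalize each spectrum to sum to 1, then compare their CDFs.
    pred = pred / pred.sum(-1, keepdim=True)
    target = target / target.sum(-1, keepdim=True)
    return (target.cumsum(-1) - pred.cumsum(-1)).abs().sum(-1)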