chemprop.nn.ffn#

Module Contents#

Classes#

FFN

An FFN is a differentiable function

MLP

An MLP is an FFN that implements the following function:

class chemprop.nn.ffn.FFN(*args, **kwargs)[source]#

Bases: torch.nn.Module

An FFN is a differentiable function \(f_\theta : \mathbb R^i \mapsto \mathbb R^o\), where \(i\) and \(o\) are the input and output dimensions, respectively.

input_dim: int#
output_dim: int#
abstract forward(X)[source]#
Parameters:

X (torch.Tensor)

Return type:

torch.Tensor
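
Concrete subclasses are expected to set input_dim and output_dim and to implement forward. A minimal sketch of a hypothetical subclass (LinearFFN is illustrative and not part of chemprop):

    import torch
    from torch import Tensor, nn

    from chemprop.nn.ffn import FFN


    class LinearFFN(FFN):
        """Hypothetical example: a single affine layer f(X) = X W^T + b."""

        def __init__(self, input_dim: int, output_dim: int):
            super().__init__()
            self.input_dim = input_dim
            self.output_dim = output_dim
            self.W = nn.Linear(input_dim, output_dim)

        def forward(self, X: Tensor) -> Tensor:
            return self.W(X)


    ffn = LinearFFN(input_dim=16, output_dim=4)
    out = ffn(torch.randn(8, 16))  # out.shape == (8, 4)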

class chemprop.nn.ffn.MLP(*args: torch.nn.modules.module.Module)[source]#
class chemprop.nn.ffn.MLP(arg)

Bases: torch.nn.Sequential, FFN

An MLP is an FFN that implements the following function:

\[\begin{split}\mathbf h_0 &= \mathbf W_0 \mathbf x + \mathbf b_0 \\ \mathbf h_l &= \mathbf W_l \left( \mathtt{dropout}\left( \sigma(\mathbf h_{l-1}) \right) \right) + \mathbf b_l\end{split}\]

where \(\mathbf x\) is the input tensor, \(\mathbf W_l\) and \(\mathbf b_l\) are the learned weight matrix and bias, respectively, of the \(l\)-th layer, \(\mathbf h_l\) is the hidden representation after layer \(l\), and \(\sigma\) is the activation function.
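
To make the layer ordering in this recurrence concrete, the following plain-PyTorch sketch follows it literally (the sizes and variable names are illustrative only and do not mirror chemprop's internal structure):

    import torch
    from torch import nn

    # Illustrative sizes only; not chemprop's internals.
    input_dim, hidden_dim, output_dim, n_layers = 16, 300, 1, 2
    sigma = nn.ReLU()
    dropout = nn.Dropout(p=0.0)

    W = nn.ModuleList(
        [nn.Linear(input_dim, hidden_dim)]
        + [nn.Linear(hidden_dim, hidden_dim) for _ in range(n_layers - 1)]
        + [nn.Linear(hidden_dim, output_dim)]
    )

    x = torch.randn(8, input_dim)
    h = W[0](x)                       # h_0 = W_0 x + b_0
    for layer in W[1:]:
        h = layer(dropout(sigma(h)))  # h_l = W_l(dropout(sigma(h_{l-1}))) + b_l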

property input_dim: int#
Return type:

int

property output_dim: int#
Return type:

int

classmethod build(input_dim, output_dim, hidden_dim=300, n_layers=1, dropout=0.0, activation='relu')[source]#
Parameters:
  • input_dim (int)

  • output_dim (int)

  • hidden_dim (int)

  • n_layers (int)

  • dropout (float)

  • activation (str)
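
A minimal usage sketch of build, assuming only the signature documented above (the dimensions below are placeholders):

    import torch
    from chemprop.nn.ffn import MLP

    # Hypothetical dimensions: map 128-dimensional inputs to a scalar output.
    ffn = MLP.build(
        input_dim=128,
        output_dim=1,
        hidden_dim=300,
        n_layers=2,
        dropout=0.1,
        activation="relu",
    )

    X = torch.randn(32, 128)  # a batch of 32 feature vectors
    Y = ffn(X)                # Y.shape == (32, 1)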