chemprop.nn.ffn#

Module Contents#

Classes#

| `FFN` | An `FFN` is a differentiable function \(f_\theta : \mathbb R^i \mapsto \mathbb R^o\) |
| `MLP` | An `MLP` is an FFN that implements a stack of linear layers with activation and dropout |
- class chemprop.nn.ffn.FFN(*args, **kwargs)[source]#

  Bases: torch.nn.Module

  An FFN is a differentiable function \(f_\theta : \mathbb R^i \mapsto \mathbb R^o\).

  - input_dim: int#
  - output_dim: int#
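The `FFN` base class only fixes a contract: a mapping from `input_dim` features to `output_dim` features. As a rough sketch of that contract (the real class subclasses `torch.nn.Module`; this plain-Python stand-in and its `forward` signature are illustrative assumptions, not chemprop's code):

```python
from abc import ABC, abstractmethod

class FFN(ABC):
    """Sketch of the FFN interface: a differentiable map R^i -> R^o.

    Illustrative only -- chemprop's FFN subclasses torch.nn.Module; this
    stand-in just shows the contract (input_dim, output_dim, forward).
    """

    input_dim: int
    output_dim: int

    @abstractmethod
    def forward(self, x):
        """Map an input of size input_dim to an output of size output_dim."""

class Identity(FFN):
    """Trivial concrete subclass used to exercise the interface."""
    input_dim = 3
    output_dim = 3

    def forward(self, x):
        return x

f = Identity()
print(f.forward([1, 2, 3]))  # [1, 2, 3]
```

Concrete subclasses such as `MLP` (below) supply the actual parameterized computation.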
- class chemprop.nn.ffn.MLP(*args: torch.nn.modules.module.Module)[source]#
- class chemprop.nn.ffn.MLP(arg)

  Bases: torch.nn.Sequential, FFN

  An MLP is an FFN that implements the following function:

  \[\begin{split}\mathbf h_0 &= \mathbf W_0 \mathbf x + \mathbf b_0 \\ \mathbf h_l &= \mathbf W_l \left( \mathtt{dropout} \left( \sigma ( \mathbf h_{l-1} ) \right) \right) + \mathbf b_l\end{split}\]

  where \(\mathbf x\) is the input tensor, \(\mathbf W_l\) and \(\mathbf b_l\) are the learned weight matrix and bias, respectively, of the \(l\)-th layer, \(\mathbf h_l\) is the hidden representation after layer \(l\), and \(\sigma\) is the activation function.
  - property input_dim: int#

    Return type: int

  - property output_dim: int#

    Return type: int
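The equation above can be traced with a small NumPy sketch of the forward pass. This is not chemprop's implementation (which builds a `torch.nn.Sequential`); here \(\sigma\) is assumed to be ReLU, dropout is treated as the identity (evaluation-mode behavior), and the layer sizes are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    # Assumed activation sigma; chemprop lets this be configured.
    return np.maximum(x, 0.0)

def mlp_forward(x, weights, biases):
    """Forward pass per the MLP equation:
        h_0 = W_0 x + b_0
        h_l = W_l dropout(sigma(h_{l-1})) + b_l
    Dropout is omitted here (identity, as in eval mode) -- an assumption
    made to keep the sketch deterministic."""
    h = weights[0] @ x + biases[0]      # h_0: first affine layer, no activation
    for W, b in zip(weights[1:], biases[1:]):
        h = W @ relu(h) + b             # sigma applied before the next affine map
    return h

# Hypothetical dimensions: input 4 -> hidden 8 -> output 2
Ws = [rng.normal(size=(8, 4)), rng.normal(size=(2, 8))]
bs = [np.zeros(8), np.zeros(2)]
y = mlp_forward(rng.normal(size=4), Ws, bs)
print(y.shape)  # (2,)
```

Note that the activation (and dropout) sit *between* linear layers: the first layer applies no activation to the raw input, and the final layer's output is returned without an activation, matching the equation.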