# Activation

## GLUActivation

Bases: `Module`

Implements the Gated Linear Unit (GLU) activation function.
The GLU activation splits the input in half along the channel dimension. One half is passed through a nonlinear activation function (such as sigmoid or leaky ReLU), and the resulting gating signal is multiplied element-wise with the other half to control its amplitude.

The GLU activation lets the model dynamically choose which inputs to pass through and which information to suppress, which can help improve model performance on certain tasks.
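The split-and-gate idea can be sketched in a few lines of PyTorch. The snippet below is a minimal illustration only, not the repository's actual implementation; the class name `GLUSketch` is hypothetical, and the `slope` default of 0.3 is assumed from the documented default above.

```python
import torch
import torch.nn as nn


class GLUSketch(nn.Module):
    """Minimal sketch of a GLU-style gated activation (illustrative only)."""

    def __init__(self, slope: float = 0.3) -> None:
        super().__init__()
        # Leaky ReLU gate with the given negative slope (assumed default: 0.3).
        self.activation = nn.LeakyReLU(slope)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Split the input along the channel dimension:
        # (N, 2*C, L) -> two tensors of shape (N, C, L).
        out, gate = x.chunk(2, dim=1)
        # Pass one half through the nonlinearity and use it to gate the other.
        return out * self.activation(gate)
```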
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `slope` | `float` | Controls the slope of the leaky ReLU activation function. | `LEAKY_RELU_SLOPE` (0.3) |
Shape:

- Input: `(N, 2*C, L)` where `C` is the number of input channels.
- Output: `(N, C, L)`
Examples:

```python
m = GLUActivation(0.3)
input = torch.randn(16, 2*20, 44)
output = m(input)
```
Source code in models/tts/delightful_tts/conv_blocks/activation.py
### forward(x)
Defines the computation performed at every call.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `x` | `Tensor` | The input tensor of shape `(batch_size, 2*channels, signal_length)`. | *required* |
Returns:

| Name | Type | Description |
|---|---|---|
| `x` | `Tensor` | The output tensor of shape `(batch_size, channels, signal_length)`. |
Source code in models/tts/delightful_tts/conv_blocks/activation.py
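To make the shape contract above concrete, the following check uses the hypothetical `GLUSketch` class from the sketch earlier on this page (not the actual `GLUActivation` implementation) and verifies that the channel dimension is halved:

```python
import torch

x = torch.randn(16, 2 * 20, 44)  # (batch_size, 2*channels, signal_length)
y = GLUSketch(slope=0.3)(x)      # GLUSketch is the illustrative class defined above
assert y.shape == (16, 20, 44)   # channels halved: (batch_size, channels, signal_length)
```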