Acoustic
pitch_phoneme_averaging(durations, pitches, max_phoneme_len)
Function to compute the average pitch values over the duration of each phoneme.
Parameters:

Name | Type | Description | Default |
---|---|---|---|
durations | Tensor | Duration of each phoneme for each sample in a batch. Shape: (batch_size, n_phones) | required |
pitches | Tensor | Per-frame pitch values for each sample in a batch. Shape: (batch_size, n_mel_timesteps) | required |
max_phoneme_len | int | Maximum length of the phoneme sequence in a batch. | required |
Returns:

Name | Type | Description |
---|---|---|
pitches_averaged | Tensor | Tensor containing the averaged pitch values for each phoneme. Shape: (batch_size, max_phoneme_len) |
Source code in models/helpers/acoustic.py, lines 40-79.
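The implementation itself is not reproduced on this page (only its location in the source file), so the following is a minimal sketch of how per-phoneme pitch averaging is commonly done: for each sample, the per-frame pitch values covered by each phoneme's duration are averaged into a single value. The explicit Python loops are an assumption for readability, not the library's actual code.

```python
import torch

def pitch_phoneme_averaging(durations, pitches, max_phoneme_len):
    """Sketch: average per-frame pitch over each phoneme's duration."""
    batch_size = durations.shape[0]
    pitches_averaged = torch.zeros(
        (batch_size, max_phoneme_len), dtype=pitches.dtype, device=pitches.device
    )
    for b in range(batch_size):
        frame = 0
        for p in range(durations.shape[1]):
            dur = int(durations[b, p].item())
            if dur > 0:
                # Mean pitch over the frames this phoneme spans
                pitches_averaged[b, p] = pitches[b, frame:frame + dur].mean()
            frame += dur
    return pitches_averaged

# Example: one sample with three phoneme slots (the last one empty)
durations = torch.tensor([[3, 2, 0]])
pitches = torch.tensor([[100.0, 110.0, 120.0, 200.0, 220.0]])
print(pitch_phoneme_averaging(durations, pitches, max_phoneme_len=3))
# tensor([[110., 210.,   0.]])
```

Phonemes with zero duration are left at zero in this sketch; how the documented function handles them is not stated here.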
positional_encoding(d_model, length)
Function to calculate the positional encoding for a transformer model.
Parameters:

Name | Type | Description | Default |
---|---|---|---|
d_model | int | Dimension of the model (often corresponds to embedding size). | required |
length | int | Length of sequences. | required |
Returns:

Type | Description |
---|---|
Tensor | Tensor containing the positional encodings. |
Source code in models/helpers/acoustic.py, lines 6-37.
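Since the implementation is likewise not shown here, below is a minimal sketch of the standard sinusoidal positional encoding from "Attention Is All You Need". The assumptions that `d_model` is even and that a leading batch dimension of 1 is prepended to the result are mine, not necessarily what models/helpers/acoustic.py does.

```python
import math
import torch

def positional_encoding(d_model, length):
    """Sketch: sinusoidal positional encoding; assumes d_model is even."""
    position = torch.arange(length, dtype=torch.float).unsqueeze(1)  # (length, 1)
    div_term = torch.exp(
        torch.arange(0, d_model, 2, dtype=torch.float)
        * (-math.log(10000.0) / d_model)
    )                                                                # (d_model // 2,)
    pe = torch.zeros(length, d_model)
    pe[:, 0::2] = torch.sin(position * div_term)  # even dimensions use sine
    pe[:, 1::2] = torch.cos(position * div_term)  # odd dimensions use cosine
    return pe.unsqueeze(0)                        # assumed shape: (1, length, d_model)

# Example: encodings for a sequence of 5 positions with a 16-dimensional model
print(positional_encoding(16, 5).shape)  # torch.Size([1, 5, 16])
```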