
Initializer

get_test_configs(srink_factor=4)

Returns a tuple of configuration objects for testing purposes.

Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| srink_factor | int | The shrink factor to apply to the model configuration. | 4 |

Returns:

| Type | Description |
| --- | --- |
| Tuple[PreprocessingConfig, AcousticENModelConfig, AcousticPretrainingConfig] | A tuple of configuration objects for testing purposes. |

This function returns a tuple of configuration objects for testing purposes. The configuration objects are as follows:

- PreprocessingConfig: A configuration object for preprocessing.
- AcousticENModelConfig: A configuration object for the acoustic model.
- AcousticPretrainingConfig: A configuration object for acoustic pretraining.

The srink_factor parameter shrinks the dimensions of the model configuration to prevent out-of-memory issues during testing.
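
For example, a test can fetch the shrunken configurations like this (a minimal sketch; it assumes the helpers are importable as models.helpers.initializer, per the source location below):

# Hypothetical usage sketch; the import path is assumed from the source file location
from models.helpers.initializer import get_test_configs

preprocess_config, model_config, acoustic_pretraining_config = get_test_configs(srink_factor=4)

# Every hidden dimension was divided by srink_factor, so the model stays small
print(model_config.encoder.n_hidden)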

Source code in models/helpers/initializer.py
def get_test_configs(
    srink_factor: int = 4,
) -> Tuple[PreprocessingConfig, AcousticENModelConfig, AcousticPretrainingConfig]:
    r"""Returns a tuple of configuration objects for testing purposes.

    Args:
        srink_factor (int, optional): The shrink factor to apply to the model configuration. Defaults to 4.

    Returns:
        Tuple[PreprocessingConfig, AcousticENModelConfig, AcousticPretrainingConfig]: A tuple of configuration objects for testing purposes.

    This function returns a tuple of configuration objects for testing purposes. The configuration objects are as follows:
    - `PreprocessingConfig`: A configuration object for preprocessing.
    - `AcousticENModelConfig`: A configuration object for the acoustic model.
    - `AcousticPretrainingConfig`: A configuration object for acoustic pretraining.

    The `srink_factor` parameter is used to shrink the dimensions of the model configuration to prevent out of memory issues during testing.
    """
    preprocess_config = PreprocessingConfig("english_only")
    model_config = AcousticENModelConfig()

    model_config.speaker_embed_dim = model_config.speaker_embed_dim // srink_factor
    model_config.encoder.n_hidden = model_config.encoder.n_hidden // srink_factor
    model_config.decoder.n_hidden = model_config.decoder.n_hidden // srink_factor
    model_config.variance_adaptor.n_hidden = (
        model_config.variance_adaptor.n_hidden // srink_factor
    )

    acoustic_pretraining_config = AcousticPretrainingConfig()

    return (preprocess_config, model_config, acoustic_pretraining_config)

init_acoustic_model(preprocess_config, model_config, n_speakers=10)

Function to initialize an AcousticModel with given preprocessing and model configurations.

Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| preprocess_config | PreprocessingConfig | Configuration object for pre-processing. | required |
| model_config | AcousticENModelConfig | Configuration object for the English acoustic model. | required |
| n_speakers | int | Number of speakers. | 10 |

Returns:

| Type | Description |
| --- | --- |
| Tuple[AcousticModel, AcousticModelConfig] | The initialized acoustic model and its configuration. |

The function creates an AcousticModelConfig instance which is then used to initialize the AcousticModel. The AcousticModelConfig is configured as follows:

- preprocess_config: Pre-processing configuration.
- model_config: English acoustic model configuration.
- fine_tuning: Boolean flag set to True, indicating the model is for fine-tuning.
- n_speakers: Number of speakers.
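
A typical call chains this helper with get_test_configs (a minimal sketch; the import path models.helpers.initializer is assumed from the source location below):

# Hypothetical usage sketch; import path assumed from the source file location
from models.helpers.initializer import get_test_configs, init_acoustic_model

preprocess_config, model_config, _ = get_test_configs()
model, acoustic_model_config = init_acoustic_model(
    preprocess_config,
    model_config,
    n_speakers=10,
)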

Source code in models/helpers/initializer.py
def init_acoustic_model(
    preprocess_config: PreprocessingConfig,
    model_config: AcousticENModelConfig,
    n_speakers: int = 10,
) -> Tuple[AcousticModel, AcousticModelConfig]:
    r"""Function to initialize an `AcousticModel` with given preprocessing and model configurations.

    Args:
        preprocess_config (PreprocessingConfig): Configuration object for pre-processing.
        model_config (AcousticENModelConfig): Configuration object for English Acoustic model.
        n_speakers (int, optional): Number of speakers. Defaults to 10.

    Returns:
        Tuple[AcousticModel, AcousticModelConfig]: The initialized acoustic model and its configuration.

    The function creates an `AcousticModelConfig` instance which is then used to initialize the `AcousticModel`.
    The `AcousticModelConfig` is configured as follows:
    - preprocess_config: Pre-processing configuration.
    - model_config: English Acoustic model configuration.
    - fine_tuning: Boolean flag set to True indicating the model is for fine-tuning.
    - n_speakers: Number of speakers.

    """
    # Create an AcousticModelConfig instance
    acoustic_model_config = AcousticModelConfig(
        preprocess_config=preprocess_config,
        model_config=model_config,
        n_speakers=n_speakers,
    )

    model = AcousticModel(**vars(acoustic_model_config))

    return model, acoustic_model_config

init_conformer(model_config)

Function to initialize a Conformer with a given AcousticModelConfigType configuration.

Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| model_config | AcousticModelConfigType | The object that holds the configuration details. | required |

Returns:

| Type | Description |
| --- | --- |
| Tuple[Conformer, ConformerConfig] | The initialized Conformer and its configuration. |

The function sets the details of the Conformer object based on the model_config parameter. The Conformer configuration is set as follows:

- dim: The number of hidden units, taken from model_config.encoder.n_hidden.
- n_layers: The number of layers, taken from model_config.encoder.n_layers.
- n_heads: The number of attention heads, taken from model_config.encoder.n_heads.
- embedding_dim: The sum of the speaker and language embedding dimensions, model_config.speaker_embed_dim + model_config.lang_embed_dim.
- p_dropout: Dropout rate, taken from model_config.encoder.p_dropout; it adds regularization to prevent overfitting.
- kernel_size_conv_mod: The kernel size for the convolution module, taken from model_config.encoder.kernel_size_conv_mod.
- with_ff: A Boolean value denoting whether a feed-forward block is included, taken from model_config.encoder.with_ff.
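
For example (a minimal sketch, assuming the same models.helpers.initializer import path as the source below):

# Hypothetical usage sketch; import path assumed from the source file location
from models.helpers.initializer import get_test_configs, init_conformer

_, model_config, _ = get_test_configs()
conformer, conformer_config = init_conformer(model_config)

# The Conformer's width mirrors the (shrunken) encoder hidden size, by construction
assert conformer_config.dim == model_config.encoder.n_hidden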

Source code in models/helpers/initializer.py
def init_conformer(
    model_config: AcousticModelConfigType,
) -> Tuple[Conformer, ConformerConfig]:
    r"""Function to initialize a `Conformer` with a given `AcousticModelConfigType` configuration.

    Args:
        model_config (AcousticModelConfigType): The object that holds the configuration details.

    Returns:
        Tuple[Conformer, ConformerConfig]: The initialized Conformer and its configuration.

    The function sets the details of the `Conformer` object based on the `model_config` parameter.
    The `Conformer` configuration is set as follows:
    - dim: The number of hidden units, taken from `model_config.encoder.n_hidden`.
    - n_layers: The number of layers, taken from `model_config.encoder.n_layers`.
    - n_heads: The number of attention heads, taken from `model_config.encoder.n_heads`.
    - embedding_dim: The sum of the speaker and language embedding dimensions,
      `model_config.speaker_embed_dim + model_config.lang_embed_dim`.
    - p_dropout: Dropout rate, taken from `model_config.encoder.p_dropout`; it adds
      regularization to prevent overfitting.
    - kernel_size_conv_mod: The kernel size for the convolution module, taken from
      `model_config.encoder.kernel_size_conv_mod`.
    - with_ff: A Boolean value denoting whether a feed-forward block is included, taken from
      `model_config.encoder.with_ff`.

    """
    conformer_config = ConformerConfig(
        dim=model_config.encoder.n_hidden,
        n_layers=model_config.encoder.n_layers,
        n_heads=model_config.encoder.n_heads,
        embedding_dim=model_config.speaker_embed_dim
        + model_config.lang_embed_dim,  # speaker_embed_dim + lang_embed_dim = 385
        p_dropout=model_config.encoder.p_dropout,
        kernel_size_conv_mod=model_config.encoder.kernel_size_conv_mod,
        with_ff=model_config.encoder.with_ff,
    )

    model = Conformer(**vars(conformer_config))

    return model, conformer_config

init_forward_trains_params(model_config, acoustic_pretraining_config, preprocess_config, n_speakers=10)

Function to initialize the parameters for forward propagation during training.

Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| model_config | AcousticENModelConfig | Configuration object for the English acoustic model. | required |
| acoustic_pretraining_config | AcousticPretrainingConfig | Configuration object for acoustic pretraining. | required |
| preprocess_config | PreprocessingConfig | Configuration object for pre-processing. | required |
| n_speakers | int | Number of speakers. | 10 |

Returns:

| Type | Description |
| --- | --- |
| ForwardTrainParams | Initialized parameters for forward propagation during training. |

The function initializes the ForwardTrainParams object with the following parameters:

- x: Tensor containing the input sequences. Shape: [speaker_embed_dim, batch_size]
- speakers: Tensor containing the speaker indices. Shape: [speaker_embed_dim, batch_size]
- src_lens: Tensor containing the lengths of source sequences. Shape: [speaker_embed_dim]
- mels: Tensor containing the mel spectrogram. Shape: [speaker_embed_dim, stft.n_mel_channels, encoder.n_hidden]
- enc_len: Tensor containing the lengths of the encoder outputs. Shape: [speaker_embed_dim]
- mel_lens: Tensor containing the lengths of mel sequences. Shape: [speaker_embed_dim]
- pitches: Tensor containing the pitch values. Shape: [speaker_embed_dim, encoder.n_hidden]
- pitches_range: Tuple holding the minimum and maximum pitch values.
- energies: Tensor containing the energy values. Shape: [speaker_embed_dim, 1, encoder.n_hidden]
- langs: Tensor containing the language indices. Shape: [speaker_embed_dim, batch_size]
- attn_priors: Tensor containing the attention priors. Shape: [speaker_embed_dim, speaker_embed_dim, batch_size]
- use_ground_truth: Boolean flag indicating whether ground truth values should be used.

All of the tensors are initialized with random values.
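
A typical test builds the parameters from the shrunken test configs (a minimal sketch; the models.helpers.initializer import path is assumed from the source location below):

# Hypothetical usage sketch; import path assumed from the source file location
from models.helpers.initializer import get_test_configs, init_forward_trains_params

preprocess_config, model_config, acoustic_pretraining_config = get_test_configs()
params = init_forward_trains_params(
    model_config,
    acoustic_pretraining_config,
    preprocess_config,
    n_speakers=10,
)

# x is [speaker_embed_dim, batch_size], as documented above
print(params.x.shape)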

Source code in models/helpers/initializer.py
def init_forward_trains_params(
    model_config: AcousticENModelConfig,
    acoustic_pretraining_config: AcousticPretrainingConfig,
    preprocess_config: PreprocessingConfig,
    n_speakers: int = 10,
) -> ForwardTrainParams:
    r"""Function to initialize the parameters for forward propagation during training.

    Args:
        model_config (AcousticENModelConfig): Configuration object for English Acoustic model.
        acoustic_pretraining_config (AcousticPretrainingConfig): Configuration object for acoustic pretraining.
        preprocess_config (PreprocessingConfig): Configuration object for pre-processing.
        n_speakers (int, optional): Number of speakers. Defaults to 10.

    Returns:
        ForwardTrainParams: Initialized parameters for forward propagation during training.

    The function initializes the ForwardTrainParams object with the following parameters:
    - x: Tensor containing the input sequences. Shape: [speaker_embed_dim, batch_size]
    - speakers: Tensor containing the speaker indices. Shape: [speaker_embed_dim, batch_size]
    - src_lens: Tensor containing the lengths of source sequences. Shape: [speaker_embed_dim]
    - mels: Tensor containing the mel spectrogram. Shape: [speaker_embed_dim, stft.n_mel_channels, encoder.n_hidden]
    - enc_len: Tensor containing the lengths of the encoder outputs. Shape: [speaker_embed_dim]
    - mel_lens: Tensor containing the lengths of mel sequences. Shape: [speaker_embed_dim]
    - pitches: Tensor containing the pitch values. Shape: [speaker_embed_dim, encoder.n_hidden]
    - pitches_range: Tuple holding the minimum and maximum pitch values.
    - energies: Tensor containing the energy values. Shape: [speaker_embed_dim, 1, encoder.n_hidden]
    - langs: Tensor containing the language indices. Shape: [speaker_embed_dim, batch_size]
    - attn_priors: Tensor containing the attention priors. Shape: [speaker_embed_dim, speaker_embed_dim, batch_size]
    - use_ground_truth: Boolean flag indicating whether ground truth values should be used.

    All of the tensors are initialized with random values.
    """
    return ForwardTrainParams(
        # x: Tensor containing the input sequences. Shape: [speaker_embed_dim, batch_size]
        x=torch.randint(
            1,
            255,
            (
                model_config.speaker_embed_dim,
                acoustic_pretraining_config.batch_size,
            ),
        ),
        pitches_range=(0.0, 1.0),
        # speakers: Tensor containing the speaker indices. Shape: [speaker_embed_dim, batch_size]
        speakers=torch.randint(
            1,
            n_speakers - 1,
            (
                model_config.speaker_embed_dim,
                acoustic_pretraining_config.batch_size,
            ),
        ),
        # src_lens: Tensor containing the lengths of source sequences. Shape: [speaker_embed_dim]
        src_lens=torch.randint(
            1,
            acoustic_pretraining_config.batch_size + 1,
            (model_config.speaker_embed_dim,),
        ),
        # mels: Tensor containing the mel spectrogram. Shape: [speaker_embed_dim, stft.n_mel_channels, encoder.n_hidden]
        mels=torch.randn(
            model_config.speaker_embed_dim,
            preprocess_config.stft.n_mel_channels,
            model_config.encoder.n_hidden,
        ),
        # enc_len: Tensor containing the lengths of the encoder outputs. Shape: [speaker_embed_dim]
        enc_len=torch.cat(
            [
                torch.randint(
                    1,
                    model_config.speaker_embed_dim,
                    (model_config.speaker_embed_dim - 1,),
                ),
                torch.tensor([model_config.speaker_embed_dim]),
            ],
            dim=0,
        ),
        # mel_lens: Tensor containing the lengths of mel sequences. Shape: [batch_size]
        mel_lens=torch.cat(
            [
                torch.randint(
                    1,
                    model_config.speaker_embed_dim,
                    (model_config.speaker_embed_dim - 1,),
                ),
                torch.tensor([model_config.speaker_embed_dim]),
            ],
            dim=0,
        ),
        # pitches: Tensor containing the pitch values. Shape: [speaker_embed_dim, encoder.n_hidden]
        pitches=torch.randn(
            model_config.speaker_embed_dim,
            model_config.encoder.n_hidden,
        ),
        # energies: Tensor containing the energy values. Shape: [speaker_embed_dim, 1, encoder.n_hidden]
        energies=torch.randn(
            model_config.speaker_embed_dim,
            1,
            model_config.encoder.n_hidden,
        ),
        # langs: Tensor containing the language indices. Shape: [speaker_embed_dim, batch_size]
        langs=torch.randint(
            1,
            len(SUPPORTED_LANGUAGES) - 1,
            (
                model_config.speaker_embed_dim,
                acoustic_pretraining_config.batch_size,
            ),
        ),
        # attn_priors: Tensor containing the attention priors. Shape: [speaker_embed_dim, speaker_embed_dim, batch_size]
        attn_priors=torch.randn(
            model_config.speaker_embed_dim,
            model_config.speaker_embed_dim,
            acoustic_pretraining_config.batch_size,
        ),
        use_ground_truth=True,
    )

init_mask_input_embeddings_encoding_attn_mask(acoustic_model, forward_train_params, model_config)

Function to initialize masks for padding positions, input sequences, embeddings, positional encoding, and attention masks.

Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| acoustic_model | AcousticModel | Initialized acoustic model. | required |
| forward_train_params | ForwardTrainParams | Parameters for the forward training process. | required |
| model_config | AcousticENModelConfig | Configuration object for the English acoustic model. | required |

Returns:

| Type | Description |
| --- | --- |
| Tuple[Tensor, Tensor, Tensor, Tensor, Tensor] | A tuple containing the elements listed below. |

- src_mask: Tensor containing the masks for padding positions in the source sequences. Shape: [1, batch_size]
- x: Tensor containing the input sequences. Shape: [speaker_embed_dim, batch_size, speaker_embed_dim]
- embeddings: Tensor containing the embeddings. Shape: [speaker_embed_dim, batch_size, speaker_embed_dim + lang_embed_dim]
- encoding: Tensor containing the positional encoding. Shape: [lang_embed_dim, max(forward_train_params.mel_lens), model_config.encoder.n_hidden]
- attn_mask: Tensor containing the attention masks. Shape: [1, 1, 1, batch_size]

The function starts by generating masks for padding positions in the source and mel sequences. Then, it uses the acoustic model to get the input sequences and embeddings, computes the positional encoding, and finally reshapes the source mask into an attention mask.
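
Putting the helpers together, a test can obtain all five tensors in one call (a minimal sketch; the models.helpers.initializer import path is assumed from the source location below):

# Hypothetical usage sketch; import path assumed from the source file location
from models.helpers.initializer import (
    get_test_configs,
    init_acoustic_model,
    init_forward_trains_params,
    init_mask_input_embeddings_encoding_attn_mask,
)

preprocess_config, model_config, acoustic_pretraining_config = get_test_configs()
model, _ = init_acoustic_model(preprocess_config, model_config)
params = init_forward_trains_params(
    model_config, acoustic_pretraining_config, preprocess_config
)
src_mask, x, embeddings, encoding, attn_mask = init_mask_input_embeddings_encoding_attn_mask(
    acoustic_model=model,
    forward_train_params=params,
    model_config=model_config,
)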

Source code in models/helpers/initializer.py
def init_mask_input_embeddings_encoding_attn_mask(
    acoustic_model: AcousticModel,
    forward_train_params: ForwardTrainParams,
    model_config: AcousticENModelConfig,
) -> Tuple[torch.Tensor, torch.Tensor, torch.Tensor, torch.Tensor, torch.Tensor]:
    r"""Function to initialize masks for padding positions, input sequences, embeddings, positional encoding and attention masks.

    Args:
        acoustic_model (AcousticModel): Initialized Acoustic Model.
        forward_train_params (ForwardTrainParams): Parameters for the forward training process.
        model_config (AcousticENModelConfig): Configuration object for English Acoustic model.

    Returns:
        Tuple[torch.Tensor, torch.Tensor, torch.Tensor, torch.Tensor, torch.Tensor]: A tuple containing the following elements:
            - src_mask: Tensor containing the masks for padding positions in the source sequences. Shape: [1, batch_size]
            - x: Tensor containing the input sequences. Shape: [speaker_embed_dim, batch_size, speaker_embed_dim]
            - embeddings: Tensor containing the embeddings. Shape: [speaker_embed_dim, batch_size, speaker_embed_dim + lang_embed_dim]
            - encoding: Tensor containing the positional encoding. Shape: [lang_embed_dim, max(forward_train_params.mel_lens), model_config.encoder.n_hidden]
            - attn_mask: Tensor containing the attention masks. Shape: [1, 1, 1, batch_size]

    The function starts by generating masks for padding positions in the source and mel sequences.
    Then, it uses the acoustic model to get the input sequences and embeddings.
    Finally, it computes the positional encoding and reshapes the source mask into an attention mask.

    """
    # Generate masks for padding positions in the source sequences and mel sequences
    # src_mask: Tensor containing the masks for padding positions in the source sequences. Shape: [1, batch_size]
    src_mask = tools.get_mask_from_lengths(forward_train_params.src_lens)

    # x: Tensor containing the input sequences. Shape: [speaker_embed_dim, batch_size, speaker_embed_dim]
    # embeddings: Tensor containing the embeddings. Shape: [speaker_embed_dim, batch_size, speaker_embed_dim + lang_embed_dim]
    x, embeddings = acoustic_model.get_embeddings(
        token_idx=forward_train_params.x,
        speaker_idx=forward_train_params.speakers,
        src_mask=src_mask,
        lang_idx=forward_train_params.langs,
    )

    # encoding: Tensor containing the positional encoding
    # Shape: [lang_embed_dim, max(forward_train_params.mel_lens), encoder.n_hidden]
    encoding = positional_encoding(
        model_config.encoder.n_hidden,
        max(x.shape[1], int(forward_train_params.mel_lens.max().item())),
    )

    attn_mask = src_mask.view((src_mask.shape[0], 1, 1, src_mask.shape[1]))

    return src_mask, x, embeddings, encoding, attn_mask