Scheduled Optim Finetuning
ScheduledOptimFinetuning
Bases: Optimizer
DEPRECATED: moved to AcousticModule.
A custom optimizer that uses AdamW for optimization and ExponentialLR for learning-rate scheduling (see the sketch after the parameter table below).
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `train_config` | `AcousticTrainingConfig` | Training configuration with optimizer and scheduler parameters. | *required* |
| `parameters` | `Iterable` | Iterable of parameters to optimize. | *required* |
| `defaults` | `Dict[str, Any]` | Default optimization options. Defaults to an empty dictionary. | `{}` |
| `step` | `Optional[int]` | The current training step. Defaults to `None`. | `None` |
Source code in notebooks/experiments/optimizer/scheduled_optim_finetuning.py
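The repository's source is not reproduced here. Instead, the following is a minimal sketch of the same idea, assuming plain delegation to `AdamW` and `ExponentialLR` rather than subclassing `torch.optim.Optimizer`; the class name `ScheduledAdamW`, the `lr`/`gamma` keyword arguments, and their default values are illustrative stand-ins for what the real class reads from `train_config`.

```python
from typing import Any, Dict, Iterable, Optional

import torch
from torch.optim import AdamW
from torch.optim.lr_scheduler import ExponentialLR


class ScheduledAdamW:
    """Illustrative stand-in for ScheduledOptimFinetuning: AdamW + ExponentialLR."""

    def __init__(
        self,
        parameters: Iterable[torch.nn.Parameter],
        lr: float = 1e-4,        # assumed default; the real class reads this from train_config
        gamma: float = 0.999,    # assumed ExponentialLR decay factor
        defaults: Optional[Dict[str, Any]] = None,
        step: Optional[int] = None,
    ) -> None:
        self._optimizer = AdamW(parameters, lr=lr, **(defaults or {}))
        self._scheduler = ExponentialLR(self._optimizer, gamma=gamma)
        if step is not None:
            # Fast-forward the schedule when resuming at a known training step.
            for _ in range(step):
                self._scheduler.step()

    def step(self, closure=None):
        loss = self._optimizer.step(closure)   # AdamW parameter update
        self._scheduler.step()                 # decay the learning rate
        return loss

    def zero_grad(self) -> None:
        self._optimizer.zero_grad()

    def get_lr(self) -> float:
        return self._scheduler.get_last_lr()[0]

    def load_state_dict(self, state_dict: Dict[str, Any]) -> None:
        self._optimizer.load_state_dict(state_dict)
```

Stepping the scheduler once per optimizer update gives a per-step exponential decay; whether the real class decays per step or per epoch depends on the training configuration.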
get_lr()
Returns the current learning rate.
Source code in notebooks/experiments/optimizer/scheduled_optim_finetuning.py
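As a point of reference, here is a hedged sketch of how the current learning rate can be read from a plain `AdamW` + `ExponentialLR` pair (the value a `get_lr()`-style helper would report); the layer shape, learning rate, and `gamma` are illustrative.

```python
import torch
from torch.optim import AdamW
from torch.optim.lr_scheduler import ExponentialLR

optim = AdamW(torch.nn.Linear(4, 1).parameters(), lr=1e-3)
sched = ExponentialLR(optim, gamma=0.99)

for i in range(3):
    optim.step()                        # gradients omitted; this only shows the decay
    sched.step()
    print(i, sched.get_last_lr()[0])    # 9.9e-04, 9.801e-04, 9.70299e-04
```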
load_state_dict(state_dict)
Loads the optimizer state.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `state_dict` | `Dict[str, Any]` | A dictionary containing the whole state of the optimizer. | *required* |
Source code in notebooks/experiments/optimizer/scheduled_optim_finetuning.py
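A hedged example of the standard PyTorch state-dict round trip that `load_state_dict` supports; plain `AdamW` stands in for the wrapper, and the checkpoint path and model are illustrative.

```python
import torch
from torch.optim import AdamW

model = torch.nn.Linear(4, 2)
optim = AdamW(model.parameters(), lr=1e-4)

# Save the optimizer state as part of a checkpoint ...
torch.save({"optimizer": optim.state_dict()}, "ckpt.pt")

# ... and later restore it into a freshly built optimizer to resume fine-tuning
# with the same moment estimates and step counters.
optim = AdamW(model.parameters(), lr=1e-4)
optim.load_state_dict(torch.load("ckpt.pt")["optimizer"])
```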
step(closure)
Performs a single optimization step.
Source code in notebooks/experiments/optimizer/scheduled_optim_finetuning.py
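The `closure` argument follows the standard `torch.optim.Optimizer.step` protocol: a callable that re-evaluates the model and returns the loss. Below is a hedged sketch with plain `AdamW` standing in for the wrapper; whether `ScheduledOptimFinetuning` forwards the closure unchanged is an assumption.

```python
import torch
from torch.optim import AdamW

model = torch.nn.Linear(4, 1)
optim = AdamW(model.parameters(), lr=1e-3)
x, y = torch.randn(8, 4), torch.randn(8, 1)

def closure():
    # Standard PyTorch closure: clear grads, recompute the loss, backprop, return it.
    optim.zero_grad()
    loss = torch.nn.functional.mse_loss(model(x), y)
    loss.backward()
    return loss

loss = optim.step(closure)   # AdamW calls the closure and applies one update
```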
zero_grad()
Clears the gradients of all optimized parameters. This should be called before the backward pass in PyTorch.
Source code in notebooks/experiments/optimizer/scheduled_optim_finetuning.py
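For context, the canonical call order in a PyTorch training loop, with plain `AdamW` standing in for the wrapper: gradients are cleared before the backward pass so each step sees only the current batch's gradients. The model, data, and loss are illustrative.

```python
import torch
from torch.optim import AdamW

model = torch.nn.Linear(4, 1)
optim = AdamW(model.parameters(), lr=1e-3)

for _ in range(3):
    x, y = torch.randn(8, 4), torch.randn(8, 1)
    optim.zero_grad()                                  # drop gradients from the previous step
    loss = torch.nn.functional.mse_loss(model(x), y)
    loss.backward()                                    # accumulate fresh gradients
    optim.step()                                       # apply the AdamW update
```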