Optimizer.lr_scheduler (Learning Rate) Module
ppsci.optimizer.lr_scheduler
Linear
Bases: LRBase
Linear learning rate decay.
Parameters:
Name | Type | Description | Default |
---|---|---|---|
epochs | int | Total epoch(s). | required |
iters_per_epoch | int | Number of iterations within an epoch. | required |
learning_rate | float | Learning rate. | required |
end_lr | float | The minimum final learning rate. Defaults to 0.0. | 0.0 |
power | float | Power of polynomial. Defaults to 1.0. | 1.0 |
cycle | bool | Whether the learning rate rises again. If True, the learning rate will rise again after it decreases to end_lr. Defaults to False. | False |
warmup_epoch | int | Number of warmup epochs. | 0 |
warmup_start_lr | float | Start learning rate within warmup. | 0.0 |
last_epoch | int | Last epoch. | -1 |
by_epoch | bool | Learning rate decays by epoch when by_epoch is True, else by iter. | False |
Examples:
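A minimal usage sketch with illustrative values for epochs, iters_per_epoch and learning_rate (positional order taken from the parameter table above); the trailing () follows the calling pattern shown in the Piecewise example further down this page:

>>> import ppsci
>>> lr = ppsci.optimizer.lr_scheduler.Linear(10, 2, 1e-3)()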
Source code in ppsci/optimizer/lr_scheduler.py
Cosine
Bases: LRBase
Cosine learning rate decay.
`lr = 0.05 * (math.cos(epoch * (math.pi / epochs)) + 1)`
Parameters:
Name | Type | Description | Default |
---|---|---|---|
epochs | int | Total epoch(s). | required |
iters_per_epoch | int | Number of iterations within an epoch. | required |
learning_rate | float | Learning rate. | required |
eta_min | float | Minimum learning rate. Defaults to 0.0. | 0.0 |
warmup_epoch | int | The epoch numbers for LinearWarmup. Defaults to 0. | 0 |
warmup_start_lr | float | Start learning rate within warmup. Defaults to 0.0. | 0.0 |
last_epoch | int | Last epoch. Defaults to -1. | -1 |
by_epoch | bool | Learning rate decays by epoch when by_epoch is True, else by iter. Defaults to False. | False |
Examples:
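A minimal usage sketch with illustrative values (epochs=10, iters_per_epoch=2, learning_rate=1e-3), mirroring the calling pattern used elsewhere on this page:

>>> import ppsci
>>> lr = ppsci.optimizer.lr_scheduler.Cosine(10, 2, 1e-3)()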
Source code in ppsci/optimizer/lr_scheduler.py
Step
Bases: LRBase
Step learning rate decay.
Parameters:
Name | Type | Description | Default |
---|---|---|---|
epochs | int | Total epoch(s). | required |
iters_per_epoch | int | Number of iterations within an epoch. | required |
learning_rate | float | Learning rate. | required |
step_size | int | The interval to update. | required |
gamma | float | The ratio by which the learning rate will be reduced. | required |
warmup_epoch | int | The epoch numbers for LinearWarmup. Defaults to 0. | 0 |
warmup_start_lr | float | Start learning rate within warmup. Defaults to 0.0. | 0.0 |
last_epoch | int | Last epoch. Defaults to -1. | -1 |
by_epoch | bool | Learning rate decays by epoch when by_epoch is True, else by iter. Defaults to False. | False |
Examples:
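A minimal usage sketch; step_size and gamma are required per the table above, and the values here are illustrative only:

>>> import ppsci
>>> lr = ppsci.optimizer.lr_scheduler.Step(10, 1, 1e-3, step_size=2, gamma=0.1)()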
Source code in ppsci/optimizer/lr_scheduler.py
Piecewise
Bases: LRBase
Piecewise learning rate decay
Parameters:
Name | Type | Description | Default |
---|---|---|---|
epochs | int | Total epoch(s). | required |
iters_per_epoch | int | Number of iterations within an epoch. | required |
decay_epochs | Tuple[int, ...] | A list of step numbers. The type of element in the list is python int. | required |
values | Tuple[float, ...] | Tuple of learning rate values that will be picked during different epoch boundaries. | required |
warmup_epoch | int | The epoch numbers for LinearWarmup. Defaults to 0. | 0 |
warmup_start_lr | float | Start learning rate within warmup. Defaults to 0.0. | 0.0 |
last_epoch | int | Last epoch. Defaults to -1. | -1 |
by_epoch | bool | Learning rate decays by epoch when by_epoch is True, else by iter. Defaults to False. | False |
Examples:
>>> import ppsci
>>> lr = ppsci.optimizer.lr_scheduler.Piecewise(
... 10, 1, [2, 4], (1e-3, 1e-4, 1e-5)
... )()
Source code in ppsci/optimizer/lr_scheduler.py
MultiStepDecay
Bases: LRBase
MultiStepDecay learning rate decay
Parameters:
Name | Type | Description | Default |
---|---|---|---|
epochs | int | Total epoch(s). | required |
iters_per_epoch | int | Number of iterations within an epoch. | required |
learning_rate | float | Learning rate. | required |
milestones | Tuple[int, ...] | Tuple of boundaries, which should be increasing. | required |
gamma | float | The ratio by which the learning rate will be reduced. | 0.1 |
warmup_epoch | int | The epoch numbers for LinearWarmup. Defaults to 0. | 0 |
warmup_start_lr | float | Start learning rate within warmup. Defaults to 0.0. | 0.0 |
last_epoch | int | Last epoch. Defaults to -1. | -1 |
by_epoch | bool | Learning rate decays by epoch when by_epoch is True, else by iter. Defaults to False. | False |
Examples:
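A minimal usage sketch with illustrative milestones; keyword names follow the parameter table above:

>>> import ppsci
>>> lr = ppsci.optimizer.lr_scheduler.MultiStepDecay(10, 1, 1e-3, milestones=(4, 8))()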
Source code in ppsci/optimizer/lr_scheduler.py
ExponentialDecay
Bases: LRBase
ExponentialDecay learning rate decay.
Parameters:
Name | Type | Description | Default |
---|---|---|---|
epochs | int | Total epoch(s). | required |
iters_per_epoch | int | Number of iterations within an epoch. | required |
learning_rate | float | Learning rate. | required |
gamma | float | The decay rate. | required |
decay_steps | int | The number of steps to decay. | required |
warmup_epoch | int | Number of warmup epochs. | 0 |
warmup_start_lr | float | Start learning rate within warmup. | 0.0 |
last_epoch | int | Last epoch. | -1 |
by_epoch | bool | Learning rate decays by epoch when by_epoch is True, else by iter. | False |
Examples:
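A minimal usage sketch; gamma and decay_steps are required per the table above, and the values are illustrative:

>>> import ppsci
>>> lr = ppsci.optimizer.lr_scheduler.ExponentialDecay(10, 2, 1e-3, gamma=0.95, decay_steps=3)()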
Source code in ppsci/optimizer/lr_scheduler.py
CosineWarmRestarts
Bases: LRBase
Set the learning rate using a cosine annealing schedule with warm restarts.
Parameters:
Name | Type | Description | Default |
---|---|---|---|
epochs | int | Total epoch(s). | required |
iters_per_epoch | int | Number of iterations within an epoch. | required |
learning_rate | float | Learning rate. | required |
T_0 | int | Number of iterations for the first restart. | required |
T_mult | int | A factor by which T_i increases after a restart. | required |
eta_min | float | Minimum learning rate. Defaults to 0.0. | 0.0 |
warmup_epoch | int | The epoch numbers for LinearWarmup. Defaults to 0. | 0 |
warmup_start_lr | float | Start learning rate within warmup. Defaults to 0.0. | 0.0 |
last_epoch | int | Last epoch. Defaults to -1. | -1 |
by_epoch | bool | Learning rate decays by epoch when by_epoch is True, else by iter. Defaults to False. | False |
Examples:
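A minimal usage sketch; T_0 and T_mult are required per the table above, and the values here are illustrative:

>>> import ppsci
>>> lr = ppsci.optimizer.lr_scheduler.CosineWarmRestarts(20, 1, 1e-3, T_0=14, T_mult=2)()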
Source code in ppsci/optimizer/lr_scheduler.py
OneCycleLR
Bases: LRBase
Sets the learning rate according to the one cycle learning rate scheduler. The scheduler adjusts the learning rate from an initial learning rate to the maximum learning rate and then from that maximum learning rate to the minimum learning rate, which is much less than the initial learning rate.
It has been proposed in Super-Convergence: Very Fast Training of Neural Networks Using Large Learning Rates.
Please note that the default behavior of this scheduler follows the fastai implementation of one cycle, which claims that "unpublished work has shown even better results by using only two phases". If you want the behavior of this scheduler to be consistent with the paper, please set three_phase=True.
Parameters:
Name | Type | Description | Default |
---|---|---|---|
epochs | int | Total epoch(s). | required |
iters_per_epoch | int | Number of iterations within an epoch. | required |
max_learning_rate | float | The maximum learning rate. It is a python float number. Functionally, it defines the initial learning rate via divide_factor. | required |
divide_factor | float | Initial learning rate will be determined by initial_learning_rate = max_learning_rate / divide_factor. Defaults to 25.0. | 25.0 |
end_learning_rate | float | The minimum learning rate during training; it should be much less than the initial learning rate. Defaults to 0.0001. | 0.0001 |
phase_pct | float | The percentage of total steps used to increase the learning rate. Defaults to 0.3. | 0.3 |
anneal_strategy | str | Strategy of adjusting learning rate: "cos" for cosine annealing, "linear" for linear annealing. Defaults to "cos". | 'cos' |
three_phase | bool | Whether to use three phases. Defaults to False. | False |
warmup_epoch | int | The epoch numbers for LinearWarmup. Defaults to 0. | 0 |
warmup_start_lr | float | Start learning rate within warmup. Defaults to 0.0. | 0.0 |
last_epoch | int | Last epoch. Defaults to -1. | -1 |
by_epoch | bool | Learning rate decays by epoch when by_epoch is True, else by iter. Defaults to False. | False |
Examples:
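A minimal usage sketch with an illustrative max_learning_rate; the remaining arguments keep their documented defaults:

>>> import ppsci
>>> lr = ppsci.optimizer.lr_scheduler.OneCycleLR(100, 1, max_learning_rate=1e-3)()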
Source code in ppsci/optimizer/lr_scheduler.py