
RampUpEMA

class mmedit.models.base_models.RampUpEMA(model: torch.nn.modules.module.Module, interval: int = 1, ema_kimg: int = 10, ema_rampup: float = 0.05, batch_size: int = 32, eps: float = 1e-08, start_iter: int = 0, device: Optional[torch.device] = None, update_buffers: bool = False)[source]

Implements the exponential moving average (EMA) with ramp-up momentum.

Ref: https://github.com/NVlabs/stylegan3/blob/master/training/training_loop.py

Parameters
  • model (nn.Module) – The model to be averaged.

  • interval (int) – Interval between two updates. Defaults to 1.

  • ema_kimg (int, optional) – Half-life of the EMA, in thousands of images (kimg). Defaults to 10.

  • ema_rampup (float, optional) – Ramp up rate. Defaults to 0.05.

  • batch_size (int, optional) – Global batch size. Defaults to 32.

  • eps (float, optional) – Epsilon to avoid division by zero during ramp-up. Defaults to 1e-8.

  • start_iter (int, optional) – The iteration from which EMA updates start. Defaults to 0.

  • device (torch.device, optional) – If provided, the averaged model will be stored on the device. Defaults to None.

  • update_buffers (bool) – If True, compute running averages for both the parameters and the buffers of the model. Defaults to False.
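
Example (a minimal usage sketch, not from the official docs; it assumes RampUpEMA follows the mmengine averaged-model interface, i.e. exposes update_parameters() and stores the averaged copy in .module, and the model and training loop below are placeholders):

    import torch
    import torch.nn as nn
    from mmedit.models.base_models import RampUpEMA

    model = nn.Linear(4, 2)  # placeholder for a real generator
    ema = RampUpEMA(model, interval=1, ema_kimg=10, ema_rampup=0.05,
                    batch_size=32)

    optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
    for step in range(100):
        loss = model(torch.randn(32, 4)).pow(2).mean()  # dummy objective
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
        ema.update_parameters(model)  # EMA update every `interval` steps

    averaged = ema.module  # evaluate with the smoothed weights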

avg_func(averaged_param: torch.Tensor, source_param: torch.Tensor, steps: int) → None[source]

Update the averaged parameters using an exponential moving average.

Parameters
  • averaged_param (Tensor) – The averaged parameters.

  • source_param (Tensor) – The source parameters.

  • steps (int) – The number of times the parameters have been updated.
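
In effect, with momentum m produced by rampup(), each averaged parameter moves toward the source parameter as averaged = m * averaged + (1 - m) * source. A standalone sketch of that in-place update (the helper name and exact tensor ops are illustrative, not the library's code):

    import torch

    def ema_update_sketch(averaged_param: torch.Tensor,
                          source_param: torch.Tensor,
                          momentum: float) -> None:
        # in place: averaged <- momentum * averaged + (1 - momentum) * source
        averaged_param.mul_(momentum).add_(source_param, alpha=1 - momentum)

    avg = torch.zeros(3)
    src = torch.ones(3)
    ema_update_sketch(avg, src, momentum=0.9)  # avg is now 0.1 everywhere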

static rampup(steps, ema_kimg=10, ema_rampup=0.05, batch_size=4, eps=1e-08)[source]

Ramp up the EMA momentum.

Ref: https://github.com/NVlabs/stylegan3/blob/a5a69f58294509598714d1e88c9646c3d7c6ec94/training/training_loop.py#L300-L308

Parameters
  • steps (int) – The number of times the parameters have been updated.

  • ema_kimg (int, optional) – Half-life of the exponential moving average of generator weights. Defaults to 10.

  • ema_rampup (float, optional) – EMA ramp-up coefficient. If set to None, ramp-up is disabled. Defaults to 0.05.

  • batch_size (int, optional) – Total batch size for one training iteration. Defaults to 4.

  • eps (float, optional) – Epsilon to avoid division by zero. Defaults to 1e-8.

Returns

Updated momentum.

Return type

float
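
Following the referenced StyleGAN3 code, the momentum is derived from a half-life measured in images and capped early in training so the average warms up quickly. A sketch of that computation (illustrative; the exact step accounting in mmedit may differ, and the scalar momentum is returned directly):

    def rampup_sketch(steps, ema_kimg=10, ema_rampup=0.05,
                      batch_size=4, eps=1e-08):
        ema_nimg = ema_kimg * 1000     # target half-life, in images
        cur_nimg = steps * batch_size  # images seen so far
        if ema_rampup is not None:
            # cap the half-life early in training to ramp the EMA up quickly
            ema_nimg = min(ema_nimg, cur_nimg * ema_rampup)
        # momentum giving a half-life of ema_nimg images at this batch size
        return 0.5 ** (batch_size / max(ema_nimg, eps))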

sync_buffers(model: torch.nn.modules.module.Module) → None[source]

Copy buffers from the model to the averaged model.

Parameters

model (nn.Module) – The model whose buffers will be copied to the averaged model.
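
A sketch of what this amounts to (hypothetical helper, not the library's code; assumes both modules share an architecture so their buffers iterate in the same order):

    import torch.nn as nn

    def sync_buffers_sketch(averaged: nn.Module, model: nn.Module) -> None:
        # copy buffers (e.g. BatchNorm running stats) without touching
        # any parameters
        for avg_buf, src_buf in zip(averaged.buffers(), model.buffers()):
            avg_buf.data.copy_(src_buf.data)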

sync_parameters(model: torch.nn.modules.module.Module) → None[source]

Copy buffers and parameters from the model to the averaged model.

Parameters

model (nn.Module) – The model whose parameters will be averaged.
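
One way to get the same effect, e.g. when initializing the averaged copy at start_iter (a sketch under the same architecture assumption, not the library's implementation):

    import torch.nn as nn

    def sync_parameters_sketch(averaged: nn.Module, model: nn.Module) -> None:
        # hard-copy both parameters and buffers so the averaged model
        # starts from the exact current state of the source model
        averaged.load_state_dict(model.state_dict())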
