mmedit.models.editors.pggan.pggan_modules

Module Contents

Classes

EqualizedLR

Equalized Learning Rate.

PixelNorm

Pixel Normalization.

EqualizedLRConvModule

Equalized LR ConvModule.

EqualizedLRConvUpModule

Equalized LR (Upsample + Conv) Module.

EqualizedLRConvDownModule

Equalized LR (Conv + Downsample) Module.

EqualizedLRLinearModule

Equalized LR LinearModule.

PGGANNoiseTo2DFeat

Module that transforms a 1D noise vector into a 4x4 2D feature map.

PGGANDecisionHead

Decision head used at the end of the PGGAN discriminator.

MiniBatchStddevLayer

Minibatch standard deviation.

Functions

equalized_lr(module[, name, gain, mode, lr_mul])

Equalized Learning Rate.

pixel_norm(x[, eps])

Pixel Normalization.

class mmedit.models.editors.pggan.pggan_modules.EqualizedLR(name='weight', gain=2 ** 0.5, mode='fan_in', lr_mul=1.0)[source]

Equalized Learning Rate.

This trick is proposed in: Progressive Growing of GANs for Improved Quality, Stability, and Variation

The general idea is to dynamically rescale the weight during training instead of at initialization, so that the variance of the responses in each layer is kept at a controlled level.

Note that this function is always combined with a convolution module which is initialized with \(\mathcal{N}(0, 1)\).

Parameters
  • name (str, optional) – The name of weights. Defaults to ‘weight’.

  • mode (str, optional) – The mode of computing fan which is the same as kaiming_init in pytorch. You can choose one from [‘fan_in’, ‘fan_out’]. Defaults to ‘fan_in’.

compute_weight(module)[source]

Compute weight with equalized learning rate.

Parameters

module (nn.Module) – A module that is wrapped with equalized lr.

Returns

Updated weight.

Return type

torch.Tensor

__call__(module, inputs)[source]

Standard interface for forward pre hooks.

static apply(module, name, gain=2 ** 0.5, mode='fan_in', lr_mul=1.0)[source]

Apply function.

This function is to register an equalized learning rate hook in an nn.Module.

Parameters
  • module (nn.Module) – Module to be wrapped.

  • name (str, optional) – The name of weights. Defaults to ‘weight’.

  • mode (str, optional) – The mode of computing fan which is the same as kaiming_init in pytorch. You can choose one from [‘fan_in’, ‘fan_out’]. Defaults to ‘fan_in’.

Returns

Module that is registered with equalized lr hook.

Return type

nn.Module

mmedit.models.editors.pggan.pggan_modules.equalized_lr(module, name='weight', gain=2 ** 0.5, mode='fan_in', lr_mul=1.0)[source]

Equalized Learning Rate.

This trick is proposed in: Progressive Growing of GANs for Improved Quality, Stability, and Variation

The general idea is to dynamically rescale the weight during training instead of at initialization, so that the variance of the responses in each layer is kept at a controlled level.

Note that this function is always combined with a convolution module which is initialized with \(\mathcal{N}(0, 1)\).

Parameters
  • module (nn.Module) – Module to be wrapped.

  • name (str, optional) – The name of weights. Defaults to ‘weight’.

  • mode (str, optional) – The mode of computing fan which is the same as kaiming_init in pytorch. You can choose one from [‘fan_in’, ‘fan_out’]. Defaults to ‘fan_in’.

Returns

Module that is registered with equalized lr hook.

Return type

nn.Module
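
The following is a minimal usage sketch for equalized_lr, assuming a plain torch.nn.Conv2d as the wrapped module; the channel sizes and input shape are arbitrary illustration values, not defaults from this module:

import torch
import torch.nn as nn
from mmedit.models.editors.pggan.pggan_modules import equalized_lr

# Build a plain convolution and, following the PGGAN recipe, initialize its
# raw weight with N(0, 1); the registered hook rescales the weight on every
# forward pass according to the fan-in of the layer.
conv = nn.Conv2d(16, 32, kernel_size=3, padding=1)
nn.init.normal_(conv.weight, 0.0, 1.0)
conv = equalized_lr(conv, name='weight', gain=2 ** 0.5, mode='fan_in', lr_mul=1.0)

out = conv(torch.randn(4, 16, 8, 8))
print(out.shape)  # torch.Size([4, 32, 8, 8])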

mmedit.models.editors.pggan.pggan_modules.pixel_norm(x, eps=1e-06)[source]

Pixel Normalization.

This normalization is proposed in: Progressive Growing of GANs for Improved Quality, Stability, and Variation

Parameters
  • x (torch.Tensor) – Tensor to be normalized.

  • eps (float, optional) – Epsilon to avoid division by zero. Defaults to 1e-6.

Returns

Normalized tensor.

Return type

torch.Tensor
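
A short sketch of what pixel_norm computes. The reference line assumes the standard PGGAN formulation (eps added inside the square root of the per-pixel channel mean square), which is an assumption about the exact implementation:

import torch
from mmedit.models.editors.pggan.pggan_modules import pixel_norm

x = torch.randn(2, 8, 4, 4)
y = pixel_norm(x, eps=1e-6)

# Reference computation under the assumed formulation: each spatial location
# is divided by the RMS of its channel vector.
ref = x / torch.sqrt(x.pow(2).mean(dim=1, keepdim=True) + 1e-6)
print((y - ref).abs().max())  # expected to be close to 0 under this assumption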

class mmedit.models.editors.pggan.pggan_modules.PixelNorm(in_channels=None, eps=1e-06)[source]

Bases: torch.nn.Module

Pixel Normalization.

This module is proposed in: Progressive Growing of GANs for Improved Quality, Stability, and Variation

Parameters

eps (float, optional) – Epsilon value. Defaults to 1e-6.

_abbr_ = 'pn'[source]

forward(x)[source]

Forward function.

Parameters

x (torch.Tensor) – Tensor to be normalized.

Returns

Normalized tensor.

Return type

torch.Tensor

class mmedit.models.editors.pggan.pggan_modules.EqualizedLRConvModule(*args, equalized_lr_cfg=dict(mode='fan_in'), **kwargs)[source]

Bases: mmcv.cnn.bricks.ConvModule

Equalized LR ConvModule.

In this module, we inherit default mmcv.cnn.ConvModule and adopt equalized lr in convolution. The equalized learning rate is proposed in: Progressive Growing of GANs for Improved Quality, Stability, and Variation

Note that the initialization of self.conv will be overwritten with \(\mathcal{N}(0, 1)\).

Parameters

equalized_lr_cfg (dict | None, optional) – Config for EqualizedLR. If None, equalized learning rate is ignored. Defaults to dict(mode=’fan_in’).

_init_conv_weights()[source]

Initialize conv weights as described in PGGAN.
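
Since EqualizedLRConvModule inherits mmcv.cnn.ConvModule, it is constructed with the usual ConvModule arguments. A minimal sketch; the channel sizes and act_cfg below are arbitrary illustration values:

import torch
from mmedit.models.editors.pggan.pggan_modules import EqualizedLRConvModule

# 3x3 convolution whose weight is initialized with N(0, 1) and rescaled at
# runtime by the equalized LR hook.
conv = EqualizedLRConvModule(
    in_channels=3,
    out_channels=64,
    kernel_size=3,
    padding=1,
    act_cfg=dict(type='LeakyReLU', negative_slope=0.2))

out = conv(torch.randn(1, 3, 16, 16))
print(out.shape)  # torch.Size([1, 64, 16, 16])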

class mmedit.models.editors.pggan.pggan_modules.EqualizedLRConvUpModule(*args, upsample=dict(type='nearest', scale_factor=2), **kwargs)[source]

Bases: EqualizedLRConvModule

Equalized LR (Upsample + Conv) Module.

In this module, we inherit EqualizedLRConvModule and adopt upsampling before convolution. As for upsampling, in addition to the sampling layer in MMCV, we also offer the “fused_nn” type. “fused_nn” denotes fusing upsampling and convolution. The fusion is modified from the official Tensorflow implementation in: https://github.com/tkarras/progressive_growing_of_gans/blob/master/networks.py#L86

Parameters

upsample (dict | None, optional) – Config for upsampling operation. If None, upsampling is ignored. To use the same fused upsampling as the official PGGAN in Tensorflow, you should set it as dict(type='fused_nn'). Defaults to dict(type='nearest', scale_factor=2).

forward(x, **kwargs)[source]

Forward function.

Parameters

x (Tensor) – Input tensor with shape (n, c, h, w).

Returns

Forward results.

Return type

Tensor

static fused_nn_hook(module, inputs)[source]

Standard interface for forward pre hooks.
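
A minimal sketch using the default nearest-neighbour upsampling config; channel sizes are arbitrary illustration values:

import torch
from mmedit.models.editors.pggan.pggan_modules import EqualizedLRConvUpModule

# Upsample by 2x with nearest-neighbour interpolation, then apply an
# equalized LR 3x3 convolution.
up_conv = EqualizedLRConvUpModule(
    in_channels=64,
    out_channels=32,
    kernel_size=3,
    padding=1,
    upsample=dict(type='nearest', scale_factor=2))

out = up_conv(torch.randn(1, 64, 8, 8))
print(out.shape)  # torch.Size([1, 32, 16, 16])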

class mmedit.models.editors.pggan.pggan_modules.EqualizedLRConvDownModule(*args, downsample=dict(type='fused_pool'), **kwargs)[source]

Bases: EqualizedLRConvModule

Equalized LR (Conv + Downsample) Module.

In this module, we inherit EqualizedLRConvModule and adopt downsampling after convolution. As for downsampling, we provide two modes of “avgpool” and “fused_pool”. “avgpool” denotes the commonly used average pooling operation, while “fused_pool” represents fusing downsampling and convolution. The fusion is modified from the official Tensorflow implementation in: https://github.com/tkarras/progressive_growing_of_gans/blob/master/networks.py#L109

Parameters

downsample (dict | None, optional) – Config for downsampling operation. If None, downsampling is ignored. Currently, we support the types of [“avgpool”, “fused_pool”]. Defaults to dict(type=’fused_pool’).

forward(x, **kwargs)[source]

Forward function.

Parameters

x (Tensor) – Input tensor with shape (n, c, h, w).

Returns

Forward results.

Return type

Tensor

static fused_avgpool_hook(module, inputs)[source]

Standard interface for forward pre hooks.
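
A minimal sketch with the default "fused_pool" downsampling. The exact output resolution depends on how the convolution and downsampling are fused, so the sketch only prints the resulting shape; channel sizes are arbitrary illustration values:

import torch
from mmedit.models.editors.pggan.pggan_modules import EqualizedLRConvDownModule

# Equalized LR convolution fused with a 2x downsampling step.
down_conv = EqualizedLRConvDownModule(
    in_channels=32,
    out_channels=64,
    kernel_size=3,
    padding=1,
    downsample=dict(type='fused_pool'))

out = down_conv(torch.randn(1, 32, 16, 16))
print(out.shape)  # spatial size is expected to be halved, e.g. (1, 64, 8, 8)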

class mmedit.models.editors.pggan.pggan_modules.EqualizedLRLinearModule(*args, equalized_lr_cfg=dict(mode='fan_in'), **kwargs)[source]

Bases: torch.nn.Linear

Equalized LR LinearModule.

In this module, we adopt equalized lr in nn.Linear. The equalized learning rate is proposed in: Progressive Growing of GANs for Improved Quality, Stability, and Variation

Note that the initialization of self.weight will be overwritten with \(\mathcal{N}(0, 1)\).

Parameters

equalized_lr_cfg (dict | None, optional) – Config for EqualizedLR. If None, equalized learning rate is ignored. Defaults to dict(mode=’fan_in’).

_init_linear_weights()[source]

Initialize linear weights as described in PGGAN.
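
Because EqualizedLRLinearModule subclasses torch.nn.Linear, it takes the usual in_features/out_features arguments. A minimal sketch with arbitrary sizes:

import torch
from mmedit.models.editors.pggan.pggan_modules import EqualizedLRLinearModule

# Linear layer with N(0, 1) initialization and runtime weight rescaling.
fc = EqualizedLRLinearModule(512, 256, bias=True)

out = fc(torch.randn(4, 512))
print(out.shape)  # torch.Size([4, 256])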

class mmedit.models.editors.pggan.pggan_modules.PGGANNoiseTo2DFeat(noise_size, out_channels, act_cfg=dict(type='LeakyReLU', negative_slope=0.2), norm_cfg=dict(type='PixelNorm'), normalize_latent=True, order=('linear', 'act', 'norm'))[source]

Bases: torch.nn.Module

Module that transforms a 1D noise vector into a 4x4 2D feature map.

As the input block of the PGGAN generator, it projects a latent tensor of shape (n, noise_size) to a feature map of shape (n, out_channels, 4, 4), with the activation, normalization and latent normalization controlled by act_cfg, norm_cfg, normalize_latent and order.

forward(x)[source]

Forward function.

Parameters

x (Tensor) – Input noise tensor with shape (n, c).

Returns

Forward results with shape (n, c, 4, 4).

Return type

Tensor
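
A minimal sketch based only on the constructor signature and the forward docstring above; the noise_size and out_channels values are arbitrary illustration choices:

import torch
from mmedit.models.editors.pggan.pggan_modules import PGGANNoiseTo2DFeat

# Map a (n, noise_size) latent vector to a (n, out_channels, 4, 4) feature map.
noise_to_feat = PGGANNoiseTo2DFeat(noise_size=512, out_channels=512)

feat = noise_to_feat(torch.randn(8, 512))
print(feat.shape)  # torch.Size([8, 512, 4, 4]) per the forward docstring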

class mmedit.models.editors.pggan.pggan_modules.PGGANDecisionHead(in_channels, mid_channels, out_channels, bias=True, equalized_lr_cfg=dict(gain=1), act_cfg=dict(type='LeakyReLU', negative_slope=0.2), out_act=None)[source]

Bases: torch.nn.Module

Decision head used at the end of the PGGAN discriminator.

It maps the final discriminator feature map (with in_channels channels) through a hidden stage of mid_channels to an output with out_channels units, typically a single decision score, using the activation given by act_cfg and an optional output activation out_act.

forward(x)[source]

Forward function.

Parameters

x (Tensor) – Input tensor with shape (n, c, h, w).

Returns

Forward results.

Return type

Tensor
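
A minimal sketch based on the constructor signature. The channel sizes follow the typical PGGAN discriminator tail (512 feature channels plus 1 minibatch-stddev channel in, a single logit out), but they are assumptions rather than documented defaults, so only the resulting shape is printed:

import torch
from mmedit.models.editors.pggan.pggan_modules import PGGANDecisionHead

head = PGGANDecisionHead(in_channels=513, mid_channels=512, out_channels=1)

# 4x4 feature map from the last discriminator block (after MiniBatchStddevLayer).
score = head(torch.randn(8, 513, 4, 4))
print(score.shape)  # expected to be (8, 1): one decision score per sample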

class mmedit.models.editors.pggan.pggan_modules.MiniBatchStddevLayer(group_size=4, eps=1e-08, gather_all_batch=False)[source]

Bases: torch.nn.Module

Minibatch standard deviation.

Parameters
  • group_size (int, optional) – The size of groups in batch dimension. Defaults to 4.

  • eps (float, optional) – Epsilon value to avoid computation error. Defaults to 1e-8.

  • gather_all_batch (bool, optional) – Whether to gather the batch from all GPUs. Defaults to False.

forward(x)[source]

Forward function.

Parameters

x (Tensor) – Input tensor with shape (n, c, h, w).

Returns

Forward results.

Return type

Tensor
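
A minimal sketch. In the PGGAN design this layer appends one extra feature map carrying the per-group standard deviation statistic, so the channel count is expected to grow by one; the batch size here is chosen to be divisible by group_size:

import torch
from mmedit.models.editors.pggan.pggan_modules import MiniBatchStddevLayer

stddev_layer = MiniBatchStddevLayer(group_size=4)

x = torch.randn(8, 16, 4, 4)
out = stddev_layer(x)
print(out.shape)  # expected to be torch.Size([8, 17, 4, 4])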
