mmedit.models.editors.stylegan1.stylegan1_modules

Module Contents

Classes

EqualLinearActModule

Equalized LR Linear Module with Activation Layer.

NoiseInjection

Noise Injection Module.

ConstantInput

Constant Input.

Blur

Blur module.

AdaptiveInstanceNorm

Adaptive Instance Normalization Module.

StyleConv

Base class for all neural network modules.

Functions

make_kernel(k)

class mmedit.models.editors.stylegan1.stylegan1_modules.EqualLinearActModule(*args, equalized_lr_cfg=dict(gain=1.0, lr_mul=1.0), bias=True, bias_init=0.0, act_cfg=None, **kwargs)[source]

Bases: torch.nn.Module

Equalized LR Linear Module with Activation Layer.

This module is modified from EqualizedLRLinearModule defined in PGGAN. The major feature added in this module is support for the activation layers used in StyleGAN2.

Parameters
  • equalized_lr_cfg (dict | None, optional) – Config for equalized lr. Defaults to dict(gain=1., lr_mul=1.).

  • bias (bool, optional) – Whether to use a bias term. Defaults to True.

  • bias_init (float, optional) – The value for bias initialization. Defaults to 0.0.

  • act_cfg (dict | None, optional) – Config for activation layer. Defaults to None.

forward(x)[source]

Forward function.

Parameters

x (Tensor) – Input feature map with shape of (N, C, …).

Returns

Output feature map.

Return type

Tensor
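The equalized learning-rate trick that this module relies on can be sketched in plain PyTorch. The class below is an illustration of the technique, not the MMEditing implementation: weights are stored at unit variance and rescaled by the He-initialization constant (times `lr_mul`) in every forward pass, so all layers see a comparable effective learning rate.

```python
import math
import torch
import torch.nn as nn
import torch.nn.functional as F

class EqualizedLinearSketch(nn.Module):
    """Illustrative equalized-LR linear layer (not the MMEditing class).

    Weights are kept at N(0, 1) scale and multiplied by the He-init
    constant at runtime instead of being scaled once at initialization.
    """

    def __init__(self, in_features, out_features, gain=1.0, lr_mul=1.0):
        super().__init__()
        self.weight = nn.Parameter(torch.randn(out_features, in_features) / lr_mul)
        self.bias = nn.Parameter(torch.zeros(out_features))
        # Runtime scale: gain / sqrt(fan_in), times the lr multiplier.
        self.scale = gain / math.sqrt(in_features) * lr_mul
        self.lr_mul = lr_mul

    def forward(self, x):
        return F.linear(x, self.weight * self.scale, self.bias * self.lr_mul)

layer = EqualizedLinearSketch(512, 256)
out = layer(torch.randn(4, 512))
print(out.shape)  # torch.Size([4, 256])
```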

class mmedit.models.editors.stylegan1.stylegan1_modules.NoiseInjection(noise_weight_init=0.0)[source]

Bases: torch.nn.Module

Noise Injection Module.

In StyleGAN2, this module is adopted to inject a spatial random noise map into the generator.

Parameters

noise_weight_init (float, optional) – Initialization weight for noise injection. Defaults to 0.0.

forward(image, noise=None, return_noise=False)[source]

Forward Function.

Parameters
  • image (Tensor) – Spatial features with a shape of (N, C, H, W).

  • noise (Tensor, optional) – Noises from the outside. Defaults to None.

  • return_noise (bool, optional) – Whether to return noise tensor. Defaults to False.

Returns

Output features.

Return type

Tensor
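The operation itself is a one-liner: add a random noise map scaled by a learnable weight that starts at zero. The sketch below illustrates the idea (a single scalar weight and one noise map per sample are assumptions for brevity, not the MMEditing implementation):

```python
import torch
import torch.nn as nn

class NoiseInjectionSketch(nn.Module):
    """Illustrative noise injection (not the MMEditing class).

    Adds a single-channel random noise map, scaled by a learnable
    weight initialized to zero, to the incoming features.
    """

    def __init__(self, noise_weight_init=0.0):
        super().__init__()
        self.weight = nn.Parameter(torch.full((1,), noise_weight_init))

    def forward(self, image, noise=None):
        if noise is None:
            n, _, h, w = image.shape
            # One noise map per sample, broadcast across channels.
            noise = torch.randn(n, 1, h, w, device=image.device)
        return image + self.weight * noise

inject = NoiseInjectionSketch()
x = torch.randn(2, 8, 16, 16)
out = inject(x)
print(torch.equal(out, x))  # True: the weight starts at 0, so output == input
```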

class mmedit.models.editors.stylegan1.stylegan1_modules.ConstantInput(channel, size=4)[source]

Bases: torch.nn.Module

Constant Input.

In StyleGAN2, the original noise input at the head of the generator is replaced with this constant input module.

Parameters
  • channel (int) – Channels for the constant input tensor.

  • size (int, optional) – Spatial size for the constant input. Defaults to 4.

forward(x)[source]

Forward function.

Parameters

x (Tensor) – Input feature map with shape of (N, C, …).

Returns

Output feature map.

Return type

Tensor
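Conceptually, the module holds a single learned tensor of shape (1, channel, size, size) and repeats it along the batch dimension; the input tensor is consulted only for its batch size. A minimal sketch, assuming that behavior:

```python
import torch
import torch.nn as nn

class ConstantInputSketch(nn.Module):
    """Illustrative constant input (not the MMEditing class).

    A learned (1, channel, size, size) tensor is repeated along the
    batch dimension; the incoming tensor only supplies the batch size.
    """

    def __init__(self, channel, size=4):
        super().__init__()
        self.input = nn.Parameter(torch.randn(1, channel, size, size))

    def forward(self, x):
        return self.input.repeat(x.shape[0], 1, 1, 1)

const = ConstantInputSketch(512)
latent = torch.randn(3, 512)
out = const(latent)
print(out.shape)  # torch.Size([3, 512, 4, 4])
```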

mmedit.models.editors.stylegan1.stylegan1_modules.make_kernel(k)[source]
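The upstream docstring for make_kernel is empty; in the reference StyleGAN code this helper expands a 1-D filter to a 2-D kernel via an outer product and normalizes it to unit sum. The sketch below is an illustrative re-implementation under that assumption, not the MMEditing source:

```python
import torch

def make_kernel_sketch(k):
    """Illustrative re-implementation of the make_kernel helper.

    A 1-D filter is expanded to 2-D via an outer product, and the
    result is normalized so that its entries sum to 1.
    """
    k = torch.tensor(k, dtype=torch.float32)
    if k.ndim == 1:
        k = k[None, :] * k[:, None]  # outer product -> separable 2-D kernel
    return k / k.sum()

kernel = make_kernel_sketch([1, 2, 1])
print(kernel.shape)          # torch.Size([3, 3])
print(float(kernel[1, 1]))   # 0.25  (4 / 16)
```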
class mmedit.models.editors.stylegan1.stylegan1_modules.Blur(kernel, pad, upsample_factor=1)[source]

Bases: torch.nn.Module

Blur module.

This module is applied right after the upsampling operation in StyleGAN2.

Parameters
  • kernel (Array) – Blur kernel/filter used in UpFIRDn.

  • pad (list[int]) – Padding for features.

  • upsample_factor (int, optional) – Upsampling factor. Defaults to 1.

forward(x)[source]

Forward function.

Parameters

x (Tensor) – Input feature map with shape of (N, C, H, W).

Returns

Output feature map.

Return type

Tensor
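The effect of Blur can be approximated with a depthwise convolution: the same normalized 2-D kernel is applied to every channel independently. The MMEditing class uses the UpFIRDn primitive instead, and its pad argument follows UpFIRDn's convention; the sketch below uses F.pad's (left, right, top, bottom) tuple, which is an assumption made for simplicity.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class BlurSketch(nn.Module):
    """Illustrative blur (not the MMEditing class, which uses UpFIRDn).

    The same normalized 2-D kernel is applied to every channel through
    a depthwise (grouped) convolution.
    """

    def __init__(self, kernel, pad):
        super().__init__()
        k = torch.tensor(kernel, dtype=torch.float32)
        if k.ndim == 1:
            k = k[None, :] * k[:, None]  # separable 1-D filter -> 2-D kernel
        self.register_buffer('kernel', k / k.sum())
        self.pad = pad  # (left, right, top, bottom), as expected by F.pad

    def forward(self, x):
        c = x.shape[1]
        x = F.pad(x, self.pad)
        weight = self.kernel[None, None].repeat(c, 1, 1, 1)  # one filter per channel
        return F.conv2d(x, weight, groups=c)

blur = BlurSketch([1, 2, 1], pad=(1, 1, 1, 1))
out = blur(torch.randn(2, 8, 16, 16))
print(out.shape)  # torch.Size([2, 8, 16, 16])
```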

class mmedit.models.editors.stylegan1.stylegan1_modules.AdaptiveInstanceNorm(in_channel, style_dim)[source]

Bases: torch.nn.Module

Adaptive Instance Normalization Module.

Ref: https://github.com/rosinality/style-based-gan-pytorch/blob/master/model.py

Parameters
  • in_channel (int) – The number of input’s channel.

  • style_dim (int) – Style latent dimension.

forward(input, style)[source]

Forward function.

Parameters
  • input (Tensor) – Input tensor with shape (n, c, h, w).

  • style (Tensor) – Input style tensor with shape (n, c).

Returns

Forward results.

Return type

Tensor
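AdaIN instance-normalizes the feature map and then re-modulates it with a per-channel scale and shift predicted from the style vector. The sketch below illustrates the pattern (the affine layer and its bias initialization toward the identity are assumptions, not the MMEditing implementation):

```python
import torch
import torch.nn as nn

class AdaINSketch(nn.Module):
    """Illustrative AdaIN (not the MMEditing class).

    Instance-normalizes the input, then applies a per-channel scale and
    shift predicted from the style vector by a single affine layer.
    """

    def __init__(self, in_channel, style_dim):
        super().__init__()
        self.norm = nn.InstanceNorm2d(in_channel)
        self.affine = nn.Linear(style_dim, in_channel * 2)
        # Bias the predicted scale toward 1 so the module starts near identity.
        self.affine.bias.data[:in_channel] = 1.0
        self.affine.bias.data[in_channel:] = 0.0

    def forward(self, input, style):
        gamma, beta = self.affine(style).chunk(2, dim=1)
        gamma = gamma[:, :, None, None]
        beta = beta[:, :, None, None]
        return gamma * self.norm(input) + beta

adain = AdaINSketch(in_channel=64, style_dim=512)
out = adain(torch.randn(2, 64, 8, 8), torch.randn(2, 512))
print(out.shape)  # torch.Size([2, 64, 8, 8])
```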

class mmedit.models.editors.stylegan1.stylegan1_modules.StyleConv(in_channels, out_channels, kernel_size, style_channels, padding=1, initial=False, blur_kernel=[1, 2, 1], upsample=False, fused=False)[source]

Bases: torch.nn.Module

Base class for all neural network modules.

Your models should also subclass this class.

Modules can also contain other Modules, allowing them to be nested in a tree structure. You can assign the submodules as regular attributes:

import torch.nn as nn
import torch.nn.functional as F

class Model(nn.Module):
    def __init__(self):
        super().__init__()
        self.conv1 = nn.Conv2d(1, 20, 5)
        self.conv2 = nn.Conv2d(20, 20, 5)

    def forward(self, x):
        x = F.relu(self.conv1(x))
        return F.relu(self.conv2(x))

Submodules assigned in this way will be registered, and will have their parameters converted too when you call to(), etc.

Note

As per the example above, an __init__() call to the parent class must be made before assignment on the child.

Variables

training (bool) – Boolean representing whether this module is in training or evaluation mode.

forward(x, style1, style2, noise1=None, noise2=None, return_noise=False)[source]

Forward function.

Parameters
  • x (Tensor) – Input tensor.

  • style1 (Tensor) – Input style tensor with shape (n, c).

  • style2 (Tensor) – Input style tensor with shape (n, c).

  • noise1 (Tensor, optional) – Noise tensor with shape (n, c, h, w). Defaults to None.

  • noise2 (Tensor, optional) – Noise tensor with shape (n, c, h, w). Defaults to None.

  • return_noise (bool, optional) – If True, noise1 and noise2 will be returned with the output. Defaults to False.

Returns

Forward results.

Return type

Tensor | tuple[Tensor]
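The two-style forward signature above suggests the classic StyleGAN1 block layout: convolution, noise injection, activation, then AdaIN, applied twice with one style vector per half. The sketch below assembles that pattern from standard PyTorch pieces as an illustration only; the exact layer order, activation, and modulation details of the MMEditing class are not reproduced here.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class StyleConvSketch(nn.Module):
    """Illustrative StyleGAN1-style block (not the MMEditing class).

    Runs conv -> noise injection -> LeakyReLU -> AdaIN twice, consuming
    one style vector per half, as the two-style forward() suggests.
    """

    def __init__(self, in_channels, out_channels, kernel_size,
                 style_channels, padding=1):
        super().__init__()
        self.conv1 = nn.Conv2d(in_channels, out_channels, kernel_size, padding=padding)
        self.conv2 = nn.Conv2d(out_channels, out_channels, kernel_size, padding=padding)
        self.noise_weight1 = nn.Parameter(torch.zeros(1))
        self.noise_weight2 = nn.Parameter(torch.zeros(1))
        self.norm = nn.InstanceNorm2d(out_channels)
        self.affine1 = nn.Linear(style_channels, out_channels * 2)
        self.affine2 = nn.Linear(style_channels, out_channels * 2)

    def _adain(self, x, style, affine):
        gamma, beta = affine(style).chunk(2, dim=1)
        return (1 + gamma[:, :, None, None]) * self.norm(x) + beta[:, :, None, None]

    def _half(self, x, conv, noise_weight, style, affine):
        x = conv(x)
        n, _, h, w = x.shape
        x = x + noise_weight * torch.randn(n, 1, h, w, device=x.device)
        x = F.leaky_relu(x, 0.2)
        return self._adain(x, style, affine)

    def forward(self, x, style1, style2):
        x = self._half(x, self.conv1, self.noise_weight1, style1, self.affine1)
        return self._half(x, self.conv2, self.noise_weight2, style2, self.affine2)

block = StyleConvSketch(64, 128, 3, style_channels=512)
out = block(torch.randn(2, 64, 16, 16), torch.randn(2, 512), torch.randn(2, 512))
print(out.shape)  # torch.Size([2, 128, 16, 16])
```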
