mmedit.models.editors.stylegan3.stylegan3_ops.ops.bias_act

Custom PyTorch ops for efficient bias and activation.

Module Contents

Classes

EasyDict

Convenience class that behaves like a dict but allows access with the attribute syntax.

Functions

_init()

bias_act(x[, b, dim, act, alpha, gain, clamp, impl])

Fused bias and activation function.

_bias_act_ref(x[, b, dim, act, alpha, gain, clamp])

Slow reference implementation of bias_act() using standard PyTorch ops.

_bias_act_cuda([dim, act, alpha, gain, clamp])

Fast CUDA implementation of bias_act() using custom ops.

Attributes

activation_funcs

_plugin

_null_tensor

_bias_act_cuda_cache

class mmedit.models.editors.stylegan3.stylegan3_ops.ops.bias_act.EasyDict[source]

Bases: dict

Convenience class that behaves like a dict but allows access with the attribute syntax.

__getattr__(name: str) → Any[source]
__setattr__(name: str, value: Any) → None[source]

Implement setattr(self, name, value).

__delattr__(name: str) → None[source]

Implement delattr(self, name).
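
A minimal usage sketch (values chosen purely for illustration): entries remain ordinary dict items but can also be read, written, and deleted via attribute syntax:

    from mmedit.models.editors.stylegan3.stylegan3_ops.ops.bias_act import EasyDict

    spec = EasyDict(alpha=0.2, gain=1.0)
    spec.clamp = 256                  # equivalent to spec['clamp'] = 256
    print(spec.alpha, spec['gain'])   # 0.2 1.0
    del spec.clamp                    # equivalent to del spec['clamp']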

mmedit.models.editors.stylegan3.stylegan3_ops.ops.bias_act.activation_funcs[source]
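
A hedged peek at its structure: each entry appears to be an EasyDict describing one activation function. The field names below follow the upstream StyleGAN3 code and should be treated as assumptions:

    from mmedit.models.editors.stylegan3.stylegan3_ops.ops.bias_act import activation_funcs

    # Field names (def_alpha, def_gain) assumed from upstream StyleGAN3.
    spec = activation_funcs['lrelu']
    print(spec.def_alpha, spec.def_gain)  # default slope and output gain for 'lrelu'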
mmedit.models.editors.stylegan3.stylegan3_ops.ops.bias_act._plugin[source]
mmedit.models.editors.stylegan3.stylegan3_ops.ops.bias_act._null_tensor[source]
mmedit.models.editors.stylegan3.stylegan3_ops.ops.bias_act._init()[source]
mmedit.models.editors.stylegan3.stylegan3_ops.ops.bias_act.bias_act(x, b=None, dim=1, act='linear', alpha=None, gain=None, clamp=None, impl='cuda')[source]

Fused bias and activation function. Adds bias b to activation tensor x, evaluates activation function act, and scales the result by gain. Each of the steps is optional. In most cases, the fused op is considerably more efficient than performing the same calculation using standard PyTorch ops. It supports first and second order gradients, but not third order gradients.

Parameters
  • x – Input activation tensor. Can be of any shape.

  • b – Bias vector, or None to disable. Must be a 1D tensor of the same type as x. The shape must be known, and it must match the dimension of x corresponding to dim.

  • dim – The dimension in x corresponding to the elements of b. The value of dim is ignored if b is not specified.

  • act – Name of the activation function to evaluate, or “linear” to disable. Can be e.g. “relu”, “lrelu”, “tanh”, “sigmoid”, “swish”, etc. See activation_funcs for a full list. None is not allowed.

  • alpha – Shape parameter for the activation function, or None to use the default.

  • gain – Scaling factor for the output tensor, or None to use the default. See activation_funcs for the default scaling of each activation function. If unsure, consider specifying 1.

  • clamp – Clamp the output values to [-clamp, +clamp], or None to disable the clamping (default).

  • impl – Name of the implementation to use. Can be “ref” or “cuda” (default).

Returns

Tensor of the same shape and datatype as x.
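
A minimal usage sketch (shapes and parameter values chosen for illustration; impl='ref' keeps it runnable without the compiled CUDA extension):

    import torch
    from mmedit.models.editors.stylegan3.stylegan3_ops.ops.bias_act import bias_act

    x = torch.randn(4, 64, 32, 32)   # NCHW activation tensor
    b = torch.zeros(64)              # 1D bias, one value per channel (dim=1)
    y = bias_act(x, b, dim=1, act='lrelu', gain=1.414, clamp=256, impl='ref')
    assert y.shape == x.shape and y.dtype == x.dtype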

mmedit.models.editors.stylegan3.stylegan3_ops.ops.bias_act._bias_act_ref(x, b=None, dim=1, act='linear', alpha=None, gain=None, clamp=None)[source]

Slow reference implementation of bias_act() using standard PyTorch ops.
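
Conceptually, the reference path is equivalent to the following simplified sketch for the 'lrelu' case (the real function dispatches through activation_funcs and handles every registered activation):

    import torch.nn.functional as F

    def _bias_act_ref_sketch(x, b=None, dim=1, alpha=0.2, gain=1.0, clamp=None):
        # Broadcast the 1D bias across every dimension except `dim`.
        if b is not None:
            x = x + b.reshape([-1 if i == dim else 1 for i in range(x.ndim)])
        x = F.leaky_relu(x, negative_slope=alpha)  # 'lrelu' only, for brevity
        if gain != 1:
            x = x * gain                           # scale the output
        if clamp is not None:
            x = x.clamp(-clamp, clamp)             # optional symmetric clamping
        return x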

mmedit.models.editors.stylegan3.stylegan3_ops.ops.bias_act._bias_act_cuda_cache[source]
mmedit.models.editors.stylegan3.stylegan3_ops.ops.bias_act._bias_act_cuda(dim=1, act='linear', alpha=None, gain=None, clamp=None)[source]

Fast CUDA implementation of bias_act() using custom ops.
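
Callers normally reach this path indirectly through bias_act(..., impl='cuda') rather than invoking the factory themselves; judging by the _bias_act_cuda_cache attribute above, the compiled op appears to be cached per configuration. A guarded sketch (requires a CUDA device and the compiled extension):

    import torch
    from mmedit.models.editors.stylegan3.stylegan3_ops.ops.bias_act import bias_act

    if torch.cuda.is_available():
        x = torch.randn(4, 64, device='cuda')
        b = torch.zeros(64, device='cuda')
        y = bias_act(x, b, dim=1, act='lrelu', impl='cuda')  # dispatches to the CUDA op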
