mmedit.models.editors.disco_diffusion

Package Contents

Classes

ClipWrapper

Clip Models wrapper for disco-diffusion.

DiscoDiffusion

Disco Diffusion (DD) is a Google Colab Notebook which leverages an AI image-generating technique called CLIP-Guided Diffusion.

ImageTextGuider

Disco-Diffusion uses text and images to guide image generation.

SecondaryDiffusionImageNet2

A smaller secondary diffusion model trained by Katherine Crowson to remove noise from intermediate timesteps.

Functions

alpha_sigma_to_t(alpha, sigma)

Convert alpha and sigma to a timestep.

class mmedit.models.editors.disco_diffusion.ClipWrapper(clip_type, *args, **kwargs)[source]

Bases: torch.nn.Module

Clip Models wrapper for disco-diffusion.

We provide wrappers for the clip models of openai and mlfoundations: the user can specify clip_type as clip or open_clip, and then initialize a clip model using the same arguments as in the original codebase. The following clip model settings are provided in the official repo of disco diffusion:

| Setting | Source | Arguments |
|:-----------------------------:|:---------:|:-------------------------------------------------------------|
| ViTB32 | clip | name='ViT-B/32', jit=False |
| ViTB16 | clip | name='ViT-B/16', jit=False |
| ViTL14 | clip | name='ViT-L/14', jit=False |
| ViTL14_336px | clip | name='ViT-L/14@336px', jit=False |
| RN50 | clip | name='RN50', jit=False |
| RN50x4 | clip | name='RN50x4', jit=False |
| RN50x16 | clip | name='RN50x16', jit=False |
| RN50x64 | clip | name='RN50x64', jit=False |
| RN101 | clip | name='RN101', jit=False |
| ViTB32_laion2b_e16 | open_clip | model_name='ViT-B-32', pretrained='laion2b_e16' |
| ViTB32_laion400m_e31 | open_clip | model_name='ViT-B-32', pretrained='laion400m_e31' |
| ViTB32_laion400m_e32 | open_clip | model_name='ViT-B-32', pretrained='laion400m_e32' |
| ViTB32quickgelu_laion400m_e31 | open_clip | model_name='ViT-B-32-quickgelu', pretrained='laion400m_e31' |
| ViTB32quickgelu_laion400m_e32 | open_clip | model_name='ViT-B-32-quickgelu', pretrained='laion400m_e32' |
| ViTB16_laion400m_e31 | open_clip | model_name='ViT-B-16', pretrained='laion400m_e31' |
| ViTB16_laion400m_e32 | open_clip | model_name='ViT-B-16', pretrained='laion400m_e32' |
| RN50_yfcc15m | open_clip | model_name='RN50', pretrained='yfcc15m' |
| RN50_cc12m | open_clip | model_name='RN50', pretrained='cc12m' |
| RN50_quickgelu_yfcc15m | open_clip | model_name='RN50-quickgelu', pretrained='yfcc15m' |
| RN50_quickgelu_cc12m | open_clip | model_name='RN50-quickgelu', pretrained='cc12m' |
| RN101_yfcc15m | open_clip | model_name='RN101', pretrained='yfcc15m' |
| RN101_quickgelu_yfcc15m | open_clip | model_name='RN101-quickgelu', pretrained='yfcc15m' |

An example of a clip_models config is as follows:

```python
clip_models = [
    dict(type='ClipWrapper', clip_type='clip', name='ViT-B/32', jit=False),
    dict(type='ClipWrapper', clip_type='clip', name='ViT-B/16', jit=False),
    dict(type='ClipWrapper', clip_type='clip', name='RN50', jit=False),
]
```
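The wrapper's dispatch between the two backends can be sketched in plain Python. `build_clip_model` and the dummy `loaders` below are hypothetical stand-ins for the real `clip` / `open_clip` loading calls, shown only to illustrate how `clip_type` selects the backend and how the remaining keyword arguments pass through unchanged:

```python
# Minimal sketch of the ClipWrapper dispatch logic, with hypothetical
# loader callbacks in place of the real clip / open_clip libraries.
def build_clip_model(clip_type, loaders, **kwargs):
    """Dispatch model construction to the backend named by `clip_type`."""
    if clip_type not in ('clip', 'open_clip'):
        raise ValueError(f'Unsupported clip_type: {clip_type}')
    # 'clip' settings use e.g. name='ViT-B/32', jit=False, while
    # 'open_clip' settings use e.g. model_name='ViT-B-32', pretrained=...
    return loaders[clip_type](**kwargs)


# Dummy loaders standing in for the real library entry points.
loaders = {
    'clip': lambda name, jit=False: ('clip', name, jit),
    'open_clip': lambda model_name, pretrained: ('open_clip', model_name, pretrained),
}

model = build_clip_model('clip', loaders, name='ViT-B/32', jit=False)
```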

Parameters

clip_type (str) – The source of the clip model, either clip or open_clip.

forward(*args, **kwargs)

Forward function.

class mmedit.models.editors.disco_diffusion.DiscoDiffusion(unet, diffusion_scheduler, secondary_model=None, clip_models=[], use_fp16=False, pretrained_cfgs=None)[source]

Bases: torch.nn.Module

Disco Diffusion (DD) is a Google Colab Notebook which leverages an AI image-generating technique called CLIP-Guided Diffusion to create compelling and beautiful images from just text inputs. Created by Somnai, augmented by Gandamu, and building on the work of RiversHaveWings, nshepperd, and many others.

Ref:

Github Repo: https://github.com/alembics/disco-diffusion
Colab: https://colab.research.google.com/github/alembics/disco-diffusion/blob/main/Disco_Diffusion.ipynb

Parameters
  • unet (ModelType) – Config of denoising Unet.

  • diffusion_scheduler (ModelType) – Config of the diffusion scheduler.

  • secondary_model (ModelType) – A smaller secondary diffusion model trained by Katherine Crowson to remove noise from intermediate timesteps to prepare them for CLIP. Ref: https://twitter.com/rivershavewings/status/1462859669454536711 Defaults to None.

  • clip_models (list) – Config of clip models. Defaults to [].

  • use_fp16 (bool) – Whether to use fp16 for unet model. Defaults to False.

  • pretrained_cfgs (dict) – Path config for pretrained weights. Usually this is a dict mapping module names to the corresponding checkpoint paths. Defaults to None.

property device

Get current device of the model.

Returns

The current device of the model.

Return type

torch.device

load_pretrained_models(pretrained_cfgs)

Load pretrained weights into the model. pretrained_cfgs is a dict with module names as keys and checkpoint paths as values.

Parameters
  • pretrained_cfgs (dict) – Path config for pretrained weights. Usually this is a dict mapping module names to the corresponding ckpt paths. Defaults to None.

infer(scheduler_kwargs=None, height=None, width=None, init_image=None, batch_size=1, num_inference_steps=1000, skip_steps=0, show_progress=False, text_prompts=[], image_prompts=[], eta=0.8, clip_guidance_scale=5000, init_scale=1000, tv_scale=0.0, sat_scale=0.0, range_scale=150, cut_overview=[12] * 400 + [4] * 600, cut_innercut=[4] * 400 + [12] * 600, cut_ic_pow=[1] * 1000, cut_icgray_p=[0.2] * 400 + [0] * 600, cutn_batches=4, seed=None)

Inference API for disco diffusion.

Parameters
  • scheduler_kwargs (dict) – Args for infer time diffusion scheduler. Defaults to None.

  • height (int) – Height of output image. Defaults to None.

  • width (int) – Width of output image. Defaults to None.

  • init_image (str) – Initial image at the start point of denoising. Defaults to None.

  • batch_size (int) – Batch size. Defaults to 1.

  • num_inference_steps (int) – Number of inference steps. Defaults to 1000.

  • skip_steps (int) – Denoising steps to skip, usually set with init_image. Defaults to 0.

  • show_progress (bool) – Whether to show progress. Defaults to False.

  • text_prompts (list) – Text prompts. Defaults to [].

  • image_prompts (list) – Image prompts. This is not the same as init_image; image prompts work the same way as text_prompts. Defaults to [].

  • eta (float) – Eta for ddim sampling. Defaults to 0.8.

  • clip_guidance_scale (int) – The scale of influence of prompts on the output image. Defaults to 5000.

  • seed (int) – Sampling seed. Defaults to None.
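As a rough sketch of how skip_steps interacts with the schedule (an assumption about the usual DDIM-style sampling convention, not a transcription of the mmedit sampler): skipping the first, noisiest steps lets denoising start partway through the schedule, which is why skip_steps is usually set together with init_image.

```python
def effective_timesteps(num_inference_steps, skip_steps):
    """Return the timesteps actually visited when the first `skip_steps`
    (the noisiest ones) are skipped, e.g. because denoising starts from
    an init_image instead of pure noise."""
    all_steps = list(range(num_inference_steps - 1, -1, -1))  # T-1 .. 0
    return all_steps[skip_steps:]


steps = effective_timesteps(10, 3)  # visits timesteps 6 down to 0
```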

class mmedit.models.editors.disco_diffusion.ImageTextGuider(clip_models)[source]

Bases: torch.nn.Module

Disco-Diffusion uses text and images to guide image generation. We use the clip models to extract text and image features as prompts; then, during each iteration, features of the image patches are computed, and a similarity loss between the prompt features and the generated features is computed. Other losses include RGB range loss and total variation loss. Using these losses, we can guide image generation towards the desired target.

Parameters

clip_models (List[Dict]) – List of clip model settings.
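Of the auxiliary losses mentioned above, total variation loss is the simplest to illustrate. A minimal pure-Python sketch on a 2-D grid follows (the actual implementation operates on batched torch tensors):

```python
def tv_loss(img):
    """Total variation loss on a 2-D grid: sum of squared differences
    between horizontally and vertically adjacent pixels. Penalizing it
    encourages spatially smooth images."""
    h, w = len(img), len(img[0])
    loss = 0.0
    for y in range(h):
        for x in range(w):
            if x + 1 < w:
                loss += (img[y][x + 1] - img[y][x]) ** 2
            if y + 1 < h:
                loss += (img[y + 1][x] - img[y][x]) ** 2
    return loss
```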

property device

Get current device of the model.

Returns

The current device of the model.

Return type

torch.device

frame_prompt_from_text(text_prompts, frame_num=0)

Get current frame prompt.

compute_prompt_stats(text_prompts=[], image_prompt=None, fuzzy_prompt=False, rand_mag=0.05)

Compute prompts statistics.

Parameters
  • text_prompts (list) – Text prompts. Defaults to [].

  • image_prompt (list) – Image prompts. Defaults to None.

  • fuzzy_prompt (bool, optional) – Controls whether to add multiple noisy prompts to the prompt losses. If True, can increase variability of image output. Defaults to False.

  • rand_mag (float, optional) – Controls the magnitude of the random noise added by fuzzy_prompt. Defaults to 0.05.
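The fuzzy_prompt idea can be sketched as follows, using stdlib Gaussian noise in place of torch.randn (a hedged illustration of the technique, not the mmedit code): several noisy copies of each prompt embedding are generated, so the guidance target is slightly blurred and output variability increases.

```python
import random


def fuzzy_prompt_embeds(embed, rand_mag=0.05, n_copies=25, rng=None):
    """Return noisy copies of a prompt embedding (a plain list of floats).
    Each copy adds zero-mean Gaussian noise scaled by rand_mag."""
    rng = rng or random.Random(0)  # fixed seed here only for reproducibility
    return [[v + rand_mag * rng.gauss(0.0, 1.0) for v in embed]
            for _ in range(n_copies)]
```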

cond_fn(model, diffusion_scheduler, x, t, beta_prod_t, model_stats, secondary_model=None, init_image=None, clamp_grad=True, clamp_max=0.05, clip_guidance_scale=5000, init_scale=1000, tv_scale=0.0, sat_scale=0.0, range_scale=150, cut_overview=[12] * 400 + [4] * 600, cut_innercut=[4] * 400 + [12] * 600, cut_ic_pow=[1] * 1000, cut_icgray_p=[0.2] * 400 + [0] * 600, cutn_batches=4)

Clip guidance function.

Parameters
  • model (nn.Module) – The denoising UNet model.

  • diffusion_scheduler (object) – The diffusion scheduler used for sampling.

  • x (torch.Tensor) – The current noisy sample.

  • t (int) – The current timestep.

  • beta_prod_t (torch.Tensor) – The accumulated beta product at timestep t of the noise schedule.

  • model_stats (List[torch.Tensor]) – Prompt statistics computed from the clip models.

  • secondary_model (nn.Module) – A smaller secondary diffusion model trained by Katherine Crowson to remove noise from intermediate timesteps to prepare them for CLIP. Ref: https://twitter.com/rivershavewings/status/1462859669454536711 Defaults to None.

  • init_image (torch.Tensor) – Initial image for denoising. Defaults to None.

  • clamp_grad (bool, optional) – Whether clamp gradient. Defaults to True.

  • clamp_max (float, optional) – Clamp max values. Defaults to 0.05.

  • clip_guidance_scale (int, optional) – The scale of influence of clip guidance on image generation. Defaults to 5000.
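The clamp_grad / clamp_max pair rescales the guidance gradient when its magnitude is too large, which stabilizes sampling. A minimal sketch of that rescaling on a plain vector (an illustration of the idea, assuming an RMS-magnitude convention similar to the original notebook's tensor code):

```python
import math


def clamp_gradient(grad, clamp_max=0.05):
    """Rescale `grad` (a list of floats) so its RMS magnitude does not
    exceed clamp_max; gradients already below the cap are unchanged."""
    magnitude = math.sqrt(sum(g * g for g in grad) / len(grad))
    if magnitude <= clamp_max or magnitude == 0.0:
        return list(grad)
    scale = clamp_max / magnitude
    return [g * scale for g in grad]
```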

abstract forward(x)

Forward function.

class mmedit.models.editors.disco_diffusion.SecondaryDiffusionImageNet2[source]

Bases: torch.nn.Module

A smaller secondary diffusion model trained by Katherine Crowson to remove noise from intermediate timesteps to prepare them for CLIP.

Ref: https://twitter.com/rivershavewings/status/1462859669454536711

forward(input, t)

Forward function.

mmedit.models.editors.disco_diffusion.alpha_sigma_to_t(alpha, sigma)[source]

Convert alpha and sigma to a timestep.
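The conversion follows the v-diffusion parameterization used by Katherine Crowson's secondary model, where the timestep is the angle of the (alpha, sigma) pair scaled to [0, 1]. A minimal scalar sketch (the mmedit version operates on torch tensors):

```python
import math


def alpha_sigma_to_t(alpha, sigma):
    """Convert an (alpha, sigma) noise-schedule pair to a timestep in
    [0, 1]: pure signal (alpha=1, sigma=0) maps to 0 and pure noise
    (alpha=0, sigma=1) maps to 1, via t = atan2(sigma, alpha) / (pi/2)."""
    return math.atan2(sigma, alpha) / math.pi * 2
```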
