
mmedit.evaluation.metrics.equivariance

Module Contents

Classes

Equivariance

Metric for generative models. Except for the preparation phase (prepare()), generative metrics do not need extra real images.

eq_iterator

class mmedit.evaluation.metrics.equivariance.Equivariance(fake_nums: int, real_nums: int = 0, fake_key: Optional[str] = None, real_key: Optional[str] = 'img', need_cond_input: bool = False, sample_mode: str = 'ema', sample_kwargs: dict = dict(), collect_device: str = 'cpu', prefix: Optional[str] = None, eq_cfg=dict())[source]

Bases: mmedit.evaluation.metrics.base_gen_metric.GenerativeMetric

Metric for generative models. Except for the preparation phase (prepare()), generative metrics do not need extra real images.

Parameters
  • fake_nums (int) – Number of generated images needed for the metric.

  • real_nums (int) – Number of real images needed for the metric. If -1 is passed, all images from the dataset are used. Defaults to 0.

  • fake_key (Optional[str]) – Key used to get fake images from the output dict. Defaults to None.

  • real_key (Optional[str]) – Key used to get real images from the input dict. Defaults to 'img'.

  • need_cond_input (bool) – If True, the sampler will return conditional inputs randomly sampled from the original dataset. This requires the dataset to implement get_data_info, and the field gt_label must be contained in the return value of get_data_info. Note that for unconditional models, setting need_cond_input to True may influence the evaluation results, since the conditional inputs are then sampled from the dataset distribution instead of a uniform distribution. Defaults to False.

  • sample_mode (str) – Sampling mode for the generative model. Supports 'orig' and 'ema'. Defaults to 'ema'.

  • collect_device (str) – Device name used for collecting results from different ranks during distributed training. Must be 'cpu' or 'gpu'. Defaults to 'cpu'.

  • prefix (str, optional) – The prefix added to metric names to disambiguate homonymous metrics of different evaluators. If prefix is not provided in the argument, self.default_prefix will be used instead. Defaults to None.

  • sample_kwargs (dict) – Sampling arguments for the model test.
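As an illustration of the constructor arguments above, the metric could be declared as a config dict in the usual MMEditing style. This is a hypothetical sketch: the concrete values of `fake_nums`, `prefix`, and the contents of `eq_cfg` are assumptions for illustration, not defaults taken from the source.

```python
# Hypothetical config fragment for the Equivariance metric.
# Values below (fake_nums, prefix, eq_cfg contents) are illustrative
# assumptions; only the argument names come from the class signature.
equivariance_metric = dict(
    type='Equivariance',
    fake_nums=50000,      # number of generated images to evaluate
    sample_mode='ema',    # use the EMA weights of the generator
    collect_device='cpu', # gather results on CPU across ranks
    prefix='EQ',          # disambiguates metric names across evaluators
    eq_cfg=dict(),        # equivariance-specific options; see the source
)
```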

name = 'Equivariance'[source]
process(data_batch: dict, data_samples: Sequence[dict]) → None[source]

Process one batch of data samples and predictions. The processed results should be stored in self.fake_results, which will be used to compute the metrics when all batches have been processed.

Parameters
  • data_batch (dict) – A batch of data from the dataloader.

  • data_samples (Sequence[dict]) – A batch of outputs from the model.
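The contract described above follows the usual process/compute split of mmengine-style metrics: process() accumulates per-batch results into self.fake_results, and compute_metrics() is called once at the end. A minimal hypothetical sketch of that contract (an illustrative stand-in, not the actual Equivariance implementation):

```python
# Illustrative sketch of the process/compute_metrics contract used by
# generative metrics. This is NOT the actual Equivariance code; it only
# demonstrates how per-batch results flow into the final metric dict.
class SketchMetric:
    def __init__(self):
        self.fake_results = []

    def process(self, data_batch, data_samples):
        # Called once per batch: stash each model output for later.
        for sample in data_samples:
            self.fake_results.append(sample)

    def compute_metrics(self, results):
        # Called once after all batches; keys name the metrics.
        return dict(num_samples=len(results))
```

For example, after processing one batch of two samples, compute_metrics(self.fake_results) would report two accumulated samples.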

get_metric_sampler(model: torch.nn.Module, dataloader: torch.utils.data.dataloader.DataLoader, metrics: List[mmedit.evaluation.metrics.base_gen_metric.GenerativeMetric])[source]

Get the sampler for generative metrics. Returns a dummy iterator which, at each iteration, yields a dict containing the batch size and sample mode used to generate images.

Parameters
  • model (nn.Module) – Model to evaluate.

  • dataloader (DataLoader) – Dataloader for real images. Used to get the batch size when generating fake images.

  • metrics (List['GenerativeMetric']) – Metrics with the same sampler mode.

Returns

Sampler for generative metrics.

Return type

dummy_iterator

compute_metrics(results) → dict[source]

Compute the metrics from processed results.

Parameters

results (list) – The processed results of each batch.

Returns

The computed metrics. The keys are the names of the metrics, and the values are corresponding results.

Return type

dict

_collect_target_results(target: str) → Optional[list][source]

Collect function for the Eq metric. This function supports collecting results typed as Dict[List[Tensor]].

Parameters

target (str) – Target results to collect.

Returns

The collected results.

Return type

Optional[list]

class mmedit.evaluation.metrics.equivariance.eq_iterator(batch_size, max_length, sample_mode, eq_cfg, sample_kwargs)[source]
__iter__() → Iterator[source]
__len__() → int[source]
__next__() → dict[source]
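The dummy-iterator pattern that eq_iterator implements can be sketched in plain Python. This is an illustrative reimplementation written from the descriptions above, not the actual eq_iterator source; in particular, the structure and key names of the returned dict are assumptions.

```python
class DummyEqIterator:
    """Illustrative sketch of a dummy metric sampler: each iteration
    yields a dict describing how to generate one batch of fake images,
    instead of yielding real data. Key names are assumptions, not the
    actual mmedit API."""

    def __init__(self, batch_size, max_length, sample_mode, eq_cfg, sample_kwargs):
        self.batch_size = batch_size
        self.max_length = max_length
        self.sample_mode = sample_mode
        self.eq_cfg = eq_cfg
        self.sample_kwargs = sample_kwargs
        self._idx = 0

    def __iter__(self):
        self._idx = 0
        return self

    def __len__(self):
        # Number of batches needed to cover max_length samples.
        return (self.max_length + self.batch_size - 1) // self.batch_size

    def __next__(self):
        if self._idx >= len(self):
            raise StopIteration
        self._idx += 1
        # Describe the batch to generate rather than returning data.
        return dict(inputs=dict(
            num_batches=self.batch_size,
            sample_mode=self.sample_mode,
            eq_cfg=self.eq_cfg,
            sample_kwargs=self.sample_kwargs,
        ))
```

For example, with batch_size=4 and max_length=10, the iterator yields three batch-description dicts before raising StopIteration.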