mmedit.apis.inferencers.base_mmedit_inferencer

Module Contents

Classes

BaseMMEditInferencer

Base inferencer.

Attributes

InputType

InputsType

PredType

ImgType

ResType

mmedit.apis.inferencers.base_mmedit_inferencer.InputType[source]
mmedit.apis.inferencers.base_mmedit_inferencer.InputsType[source]
mmedit.apis.inferencers.base_mmedit_inferencer.PredType[source]
mmedit.apis.inferencers.base_mmedit_inferencer.ImgType[source]
mmedit.apis.inferencers.base_mmedit_inferencer.ResType[source]
class mmedit.apis.inferencers.base_mmedit_inferencer.BaseMMEditInferencer(config: Union[mmedit.utils.ConfigType, str], ckpt: Optional[str], device: Optional[str] = None, extra_parameters: Optional[Dict] = None, seed: int = 2022, **kwargs)[source]

Bases: mmengine.infer.BaseInferencer

Base inferencer.

Parameters
  • config (str or ConfigType) – Model config or the path to it.

  • ckpt (str, optional) – Path to the checkpoint.

  • device (str, optional) – Device to run inference on. If None, the best available device will be used automatically.

  • result_out_dir (str) – Output directory of images. Defaults to ‘’.
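The automatic device selection described for the device argument can be sketched in plain Python. resolve_device is a hypothetical helper written for illustration, not part of the mmedit API; it only mirrors the documented "if None, pick the best device" behaviour:

```python
def resolve_device(device=None):
    """Mimic the documented behaviour: if ``device`` is None,
    pick the best available device automatically."""
    if device is not None:
        return device
    try:
        import torch  # optional dependency in this sketch
        return "cuda" if torch.cuda.is_available() else "cpu"
    except ImportError:
        return "cpu"
```

Passing an explicit string such as "cpu" or "cuda:0" bypasses the automatic choice entirely.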

func_kwargs[source]
func_order[source]
extra_parameters[source]
_init_model(cfg: Union[mmedit.utils.ConfigType, str], ckpt: Optional[str], device: str) → None[source]

Initialize the model with the given config and checkpoint on the specific device.

_init_pipeline(cfg: mmedit.utils.ConfigType) → mmengine.dataset.Compose[source]

Initialize the test pipeline.

_init_extra_parameters(extra_parameters: Dict) → None[source]

Initialize extra_parameters of each kind of inferencer.

_update_extra_parameters(**kwargs) → None[source]

Update extra_parameters at runtime.

_dispatch_kwargs(**kwargs) → Tuple[Dict, Dict, Dict, Dict][source]

Dispatch kwargs to preprocess(), forward(), visualize() and postprocess() according to the actual demands.
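The dispatch step above can be sketched with signature inspection: each keyword argument is routed to whichever stage declares a parameter of that name. The stage functions below are hypothetical stand-ins with illustrative signatures, not the real mmedit methods:

```python
import inspect

def dispatch_kwargs(funcs, **kwargs):
    """Route each keyword argument to the function(s) whose
    signature accepts a parameter of that name."""
    out = []
    for func in funcs:
        accepted = set(inspect.signature(func).parameters)
        out.append({k: v for k, v in kwargs.items() if k in accepted})
    return tuple(out)

# Hypothetical stage signatures, for illustration only.
def preprocess(img): pass
def forward(batch_size=1): pass
def visualize(show=False): pass
def postprocess(get_datasample=False): pass

pre, fwd, vis, post = dispatch_kwargs(
    (preprocess, forward, visualize, postprocess),
    img="cat.png", show=True, get_datasample=False,
)
# pre routes to preprocess, vis to visualize, post to postprocess;
# forward receives nothing because no kwarg matches its parameters.
```

A kwarg whose name matches no stage is silently dropped in this sketch; the real implementation may instead raise an error, so treat that detail as an assumption.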

__call__(**kwargs) → Union[Dict, List[Dict]][source]

Call the inferencer.

Parameters

kwargs – Keyword arguments for the inferencer.

Returns

Results of the inference pipeline.

Return type

Union[Dict, List[Dict]]
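The call flow that __call__ drives can be sketched as a toy pipeline chaining preprocess, forward, visualize, and postprocess. Everything below (the ToyInferencer class, its string-based "model") is invented for illustration and does not reflect the real mmedit stages:

```python
class ToyInferencer:
    """Minimal mirror of the preprocess -> forward -> visualize ->
    postprocess call flow, using strings in place of images."""

    def preprocess(self, inputs):
        # Stand-in for the test pipeline: normalize the inputs.
        return [x.lower() for x in inputs]

    def forward(self, data):
        # Stand-in for model inference: one prediction dict per input.
        return [{"pred": s.upper()} for s in data]

    def visualize(self, inputs, preds, show=False):
        # No visualization in this sketch.
        return None

    def postprocess(self, preds, imgs=None):
        # Unwrap single-item results, matching Union[Dict, List[Dict]].
        return preds if len(preds) > 1 else preds[0]

    def __call__(self, inputs, **kwargs):
        data = self.preprocess(inputs)
        preds = self.forward(data)
        imgs = self.visualize(inputs, preds)
        return self.postprocess(preds, imgs)
```

With one input the result is a single dict; with several it is a list of dicts, matching the documented return type.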

get_extra_parameters() → List[str][source]

Each inferencer may have its own parameters. Call this function to get them.

Returns

List of unique parameters.

Return type

List[str]
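One plausible reading of get_extra_parameters and _update_extra_parameters, sketched on a plain dict. The parameter names ("num_inference_steps", "sample_model") and the only-update-known-keys rule are assumptions for illustration, not confirmed mmedit behaviour:

```python
def get_extra_parameters(extra_parameters):
    """Return the names of the inferencer-specific parameters."""
    return list(extra_parameters)

def update_extra_parameters(extra_parameters, **kwargs):
    """Overwrite only keys the inferencer already declares;
    unknown keyword arguments are ignored in this sketch."""
    for key, value in kwargs.items():
        if key in extra_parameters:
            extra_parameters[key] = value
    return extra_parameters

params = {"num_inference_steps": 50, "sample_model": "ema"}  # hypothetical
get_extra_parameters(params)
update_extra_parameters(params, num_inference_steps=20, unrelated=1)
```

Under these assumptions, unrelated is dropped and num_inference_steps becomes 20.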

postprocess(preds: PredType, imgs: Optional[List[numpy.ndarray]] = None, is_batch: bool = False, get_datasample: bool = False) → Union[ResType, Tuple[ResType, numpy.ndarray]][source]

Postprocess predictions.

Parameters
  • preds (List[Dict]) – Predictions of the model.

  • imgs (Optional[List[np.ndarray]]) – Visualized predictions.

  • is_batch (bool) – Whether the inputs are in a batch. Defaults to False.

  • get_datasample (bool) – Whether to use Datasample to store inference results. If False, dict will be used.

Returns

Inference results as a dict; imgs (torch.Tensor) is the image result of inference as a tensor or tensor list.

Return type

result (Dict)

_pred2dict(pred_tensor: torch.Tensor) → Dict[source]

Extract the elements necessary to represent a prediction into a dictionary. It should contain only basic data elements such as strings and numbers, so that the result is guaranteed to be json-serializable.

Parameters

pred_tensor (torch.Tensor) – The tensor to be converted.

Returns

The output dictionary.

Return type

dict
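The json-serializability requirement above can be sketched with a generic converter: anything tensor-like (exposing .tolist(), as torch.Tensor does) is turned into plain Python lists. FakeTensor and to_jsonable are illustrative stand-ins written so the sketch runs without torch installed:

```python
import json

def to_jsonable(value):
    """Recursively convert tensor-like objects (anything with a
    ``.tolist()`` method) into plain Python types, so the result
    can be passed to ``json.dumps``."""
    if hasattr(value, "tolist"):
        return value.tolist()
    if isinstance(value, dict):
        return {k: to_jsonable(v) for k, v in value.items()}
    if isinstance(value, (list, tuple)):
        return [to_jsonable(v) for v in value]
    return value

class FakeTensor:
    """Stand-in for torch.Tensor in this sketch."""
    def __init__(self, data):
        self._data = data
    def tolist(self):
        return self._data

pred = {"pred_img": FakeTensor([[0.1, 0.2]]), "score": 0.9}
serialized = json.dumps(to_jsonable(pred))
```

After conversion the dict contains only lists, numbers, and strings, which is exactly the property _pred2dict is documented to guarantee.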

visualize(inputs: list, preds: Any, show: bool = False, result_out_dir: str = '', **kwargs) → List[numpy.ndarray][source]

Visualize predictions.

Customize your visualization by overriding this method. visualize should return visualization results, which could be np.ndarray or any other object.

Parameters
  • inputs (list) – Inputs preprocessed by _inputs_to_list().

  • preds (Any) – Predictions of the model.

  • show (bool) – Whether to display the image in a popup window. Defaults to False.

  • result_out_dir (str) – Output directory of images. Defaults to ‘’.

Returns

Visualization results.

Return type

List[np.ndarray]
