vai_q_pytorch displays an error message when an issue makes the quantization result incorrect or incomplete (refer to the message text for details). The format of this kind of message is "[VAIQ_ERROR][MESSAGE_ID]: message text".
The following table lists the important error messages:
|The bias correction file in the quantization result directory does not match the current model.
|A node name mismatch was found when loading the quantization steps of tensors. Ensure that the vai_q_pytorch and PyTorch versions used in test mode are the same as those used in calibration (or QAT training) mode.
|The quantized module, which is based on a PyTorch traced model, cannot be exported to ONNX due to a PyTorch internal failure. The failure reason is listed in the message text. You might need to adjust the float model code.
|Failed to convert the graph to XMODEL. Check the reason in the message text.
|The fast fine-tuned parameter file does not exist. Call load_ft_param in the model code to load it.
|An illegal data type or value was found in the arguments of a quantization OP when exporting the ONNX format model.
|The tensor quantization configuration is illegal. It must be an integer within the range given in the message text.
|Failed to import the vai_q_pytorch library. Check whether the PyTorch version matches the vai_q_pytorch version (pytorch_nndct.__version__).
|The quantization result file does not exist. Check whether calibration has been performed.
|Quantization calibration was not performed completely. Check whether the module forward function is called: the forward function of torch_quantizer.quant_model must be explicitly called in the user code. Refer to the example code at https://github.com/Xilinx/Vitis-AI/blob/master/src/Vitis-AI-Quantizer/vai_q_PyTorch/example/resnet18_quant.py.
|The torch_quantizer.quant_model forward function must be called before exporting the quantization result. Refer to the example code at https://github.com/Xilinx/Vitis-AI/blob/master/src/Vitis-AI-Quantizer/vai_q_PyTorch/example/resnet18_quant.py.
|An OP type cannot be registered multiple times.
|Failed to get the PyTorch traced graph from the model and input arguments. The PyTorch internal failure reason is reported in the message text. You might need to adjust the float model code.
|Quantization configuration items are illegal. Refer to the message text.
|Tensor shapes are mismatched. Refer to the message text.
|The PyTorch version is not supported for the function or does not match the vai_q_pytorch version (pytorch_nndct.__version__). Refer to the message text.
|The batch size must be 1 when exporting XMODEL.
|The Inspector only supports dumping in SVG or PNG format.
|Inspector no longer supports fingerprint. Please provide the architecture name instead.
|Quantization of the OP is not supported.
|The model produced by 'torch.jit.script' is not supported in vai_q_pytorch.
|vai_q_pytorch did not find any script model.
|The quantized module has been called multiple times in the forward pass. To share quantized parameters across multiple calls, call trainable_model with "allow_reused_module=True".
|A torch.nn.DataParallel object is not allowed.
|Input is not quantized. Please use QuantStub/DeQuantStub to define the quantization scope.
|A quantized operation must be an instance of "torch.nn.Module". Replace the function with a "torch.nn.Module" object. The original source range is indicated in the message text.
|"trainable_model" must be called before getting the deployable model.
|The given trained model has BN fused into CONV and cannot be converted to a deployable model. Make sure model.fuse_conv_bn() is not called.
|XMODEL can only be exported in CPU mode. Use deployable_model(src_dir, used_for_xmodel=True) to get a CPU model.
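Because every error follows the fixed "[VAIQ_ERROR][MESSAGE_ID]: message text" format described above, these messages are easy to detect and extract when scanning quantizer logs. The following is a minimal sketch; the helper name `parse_vaiq_error` is illustrative and not part of the vai_q_pytorch API:

```python
import re

# Matches the documented format: "[VAIQ_ERROR][MESSAGE_ID]: message text"
_VAIQ_ERROR_RE = re.compile(r"^\[VAIQ_ERROR\]\[(?P<msg_id>[^\]]+)\]:\s*(?P<text>.*)$")

def parse_vaiq_error(line):
    """Return (message_id, message_text) for a VAIQ_ERROR line, else None.

    Lines with other severities (or unrelated log output) do not match.
    """
    m = _VAIQ_ERROR_RE.match(line.strip())
    if m is None:
        return None
    return m.group("msg_id"), m.group("text")
```

For example, `parse_vaiq_error("[VAIQ_ERROR][EXAMPLE_ID]: message text")` (with a hypothetical message ID) returns the tuple `("EXAMPLE_ID", "message text")`, while non-error lines return `None`, so the helper can be used to filter a full log for errors only.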