VAIQ_ERROR - 3.5 English

Vitis AI User Guide (UG1414)

Document ID: UG1414
Release Date: 2023-09-28
Version: 3.5 English

vai_q_pytorch displays an error message when an issue makes the quantization result incorrect or incomplete (refer to the message text for details). Messages of this kind have the format "[VAIQ_ERROR][MESSAGE_ID]: message text".

The following table lists the important error messages:

Table 1. vai_q_pytorch error messages
Message ID: Description
QUANTIZER_TORCH_BIAS_CORRECTION: The bias correction file in the quantization result directory does not match the current model.
QUANTIZER_TORCH_CALIB_RESULT_MISMATCH: A node name mismatch was found when loading the quantization steps of tensors. Ensure that the vai_q_pytorch and PyTorch versions used in test mode are the same as those used in calibration (or QAT training) mode.
QUANTIZER_TORCH_EXPORT_ONNX: The quantized module, which is based on the PyTorch traced model, cannot be exported to ONNX due to a PyTorch internal failure. The failure reason is listed in the message text. You might need to adjust the float model code.
QUANTIZER_TORCH_EXPORT_XMODEL: Failed to convert the graph to XMODEL. Check the reason given in the message text.
QUANTIZER_TORCH_FAST_FINETINE: The fast fine-tuned parameter file does not exist. Run fast fine-tuning first, then call load_ft_param in the model code to load the fine-tuned parameters.
QUANTIZER_TORCH_FIX_INPUT_TYPE: A data type or value in the arguments of a quantization OP is illegal when exporting the ONNX format model.
QUANTIZER_TORCH_ILLEGAL_BITWIDTH: The tensor quantization bit width configuration is illegal. It must be an integer within the range given in the message text.
QUANTIZER_TORCH_IMPORT_KERNEL: An error occurred while importing the vai_q_pytorch library file. Check whether the PyTorch version matches the vai_q_pytorch version (pytorch_nndct.__version__).
QUANTIZER_TORCH_NO_CALIB_RESULT: The quantization result file does not exist. Check whether calibration has been completed.
QUANTIZER_TORCH_NO_CALIBRATION: Quantization calibration was not performed completely. Check whether the module's forward function is called: the forward function of torch_quantizer.quant_model must be called explicitly in the user code, as shown in the calibration sketch after this table. Refer to the example code at https://github.com/Xilinx/Vitis-AI/blob/master/src/Vitis-AI-Quantizer/vai_q_PyTorch/example/resnet18_quant.py.
QUANTIZER_TORCH_NO_FORWARD: The torch_quantizer.quant_model forward function must be called before exporting the quantization result (see the calibration sketch after this table). Refer to the example code at https://github.com/Xilinx/Vitis-AI/blob/master/src/Vitis-AI-Quantizer/vai_q_PyTorch/example/resnet18_quant.py.
QUANTIZER_TORCH_OP_REGIST: An OP type cannot be registered multiple times.
QUANTIZER_TORCH_PYTORCH_TRACE: Failed to get a PyTorch traced graph from the model and input arguments. The PyTorch internal failure reason is reported in the message text. You might need to adjust the float model code.
QUANTIZER_TORCH_QUANT_CONFIG: One or more quantization configuration items are illegal. Refer to the message text.
QUANTIZER_TORCH_SHAPE_MISMATCH: Tensor shapes are mismatched. Refer to the message text.
QUANTIZER_TORCH_TORCH_VERSION: The PyTorch version is not supported for this function or does not match the vai_q_pytorch version (pytorch_nndct.__version__). Refer to the message text.
QUANTIZER_TORCH_XMODEL_BATCHSIZE: The batch size must be 1 when exporting an XMODEL.
QUANTIZER_TORCH_INSPECTOR_OUTPUT_FORMAT: The inspector only supports dumping in SVG or PNG format.
QUANTIZER_TORCH_INSPECTOR_INPUT_FORMAT: The inspector no longer supports fingerprints. Provide the architecture name instead.
QUANTIZER_TORCH_UNSUPPORTED_OPS: Quantization of this OP is not supported.
QUANTIZER_TORCH_TRACED_NOT_SUPPORT: A model produced by torch.jit.script is not supported in vai_q_pytorch.
QUANTIZER_TORCH_NO_SCRIPT_MODEL: vai_q_pytorch cannot find any script model.
QUANTIZER_TORCH_REUSED_MODULE: The quantized module has been called multiple times in the forward pass. To share quantized parameters across multiple calls, call trainable_model with allow_reused_module=True (see the QAT sketch after this table).
QUANTIZER_TORCH_DATA_PARALLEL_NOT_ALLOWED: A torch.nn.DataParallel object is not allowed.
QUANTIZER_TORCH_INPUT_NOT_QUANTIZED: The input is not quantized. Use QuantStub/DeQuantStub to define the quantization scope (see the quantization-scope sketch after this table).
QUANTIZER_TORCH_NOT_A_MODULE: A quantized operation must be an instance of torch.nn.Module. Replace the function with a torch.nn.Module object. The original source range is indicated in the message text.
QUANTIZER_TORCH_QAT_PROCESS_ERROR: trainable_model must be called before getting the deployable model (see the QAT sketch after this table).
QUANTIZER_TORCH_QAT_DEPLOYABLE_MODEL_ERROR: The given trained model has BN fused into CONV and cannot be converted to a deployable model. Make sure model.fuse_conv_bn() is not called.
QUANTIZER_TORCH_XMODEL_DEVICE: An XMODEL can only be exported in CPU mode. Use deployable_model(src_dir, used_for_xmodel=True) to get a CPU model (see the QAT sketch after this table).
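
Calibration sketch: a minimal calibration flow that avoids QUANTIZER_TORCH_NO_CALIBRATION and QUANTIZER_TORCH_NO_FORWARD, condensed from the resnet18_quant.py example linked above. MyFloatModel and calib_loader are placeholders for your own model and calibration data.

    import torch
    from pytorch_nndct.apis import torch_quantizer

    model = MyFloatModel().eval()              # placeholder float model
    dummy_input = torch.randn(1, 3, 224, 224)  # input shape assumed for illustration

    # 'calib' mode inserts quantizers and collects calibration statistics.
    quantizer = torch_quantizer('calib', model, (dummy_input,))
    quant_model = quantizer.quant_model

    # The forward function of quant_model must be called explicitly;
    # otherwise exporting the quantization result fails.
    with torch.no_grad():
        for images in calib_loader:            # placeholder calibration data loader
            quant_model(images)

    quantizer.export_quant_config()            # writes the calibration result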
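
Quantization-scope sketch: QuantStub and DeQuantStub mark where tensors enter and leave the quantized scope, which addresses QUANTIZER_TORCH_INPUT_NOT_QUANTIZED. The pytorch_nndct.nn import path follows the Vitis AI QAT examples; confirm it against your release. MyModel is a placeholder.

    import torch
    import pytorch_nndct.nn as nndct_nn

    class MyModel(torch.nn.Module):            # placeholder model
        def __init__(self):
            super().__init__()
            self.quant_stub = nndct_nn.QuantStub()    # quantization scope begins here
            self.conv = torch.nn.Conv2d(3, 8, 3)
            self.relu = torch.nn.ReLU(inplace=True)
            self.dequant_stub = nndct_nn.DeQuantStub()  # quantization scope ends here

        def forward(self, x):
            x = self.quant_stub(x)             # input enters the quantized scope
            x = self.relu(self.conv(x))
            return self.dequant_stub(x)        # output leaves the quantized scope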
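
QAT sketch: a condensed flow covering QUANTIZER_TORCH_QAT_PROCESS_ERROR, QUANTIZER_TORCH_REUSED_MODULE, and QUANTIZER_TORCH_XMODEL_DEVICE, following the QatProcessor flow in this guide. MyFloatModel, train, and the qat_result directory are placeholders; confirm the call signatures against your release.

    import torch
    from pytorch_nndct import QatProcessor

    model = MyFloatModel()                     # placeholder float model
    dummy_input = torch.randn(1, 3, 224, 224)

    qat_processor = QatProcessor(model, (dummy_input,), bitwidth=8)

    # trainable_model must be called first; pass allow_reused_module=True only
    # if a module is intentionally called more than once in the forward pass.
    quantized_model = qat_processor.trainable_model(allow_reused_module=True)
    train(quantized_model)                     # placeholder training loop

    # Get the deployable model in CPU mode for XMODEL export.
    deployable_model = qat_processor.deployable_model('qat_result',
                                                      used_for_xmodel=True)
    with torch.no_grad():
        deployable_model(dummy_input)          # run forward once before export
    qat_processor.export_xmodel('qat_result')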