When vai_q_pytorch hits an issue that prevents the process from continuing, it prints an error message on screen; check and solve the problem according to the message text. The format of this kind of message is "[VAIQ_ERROR][MESSAGE_ID]: message text".
The important error messages are listed in the following table:
Message ID | Description |
---|---|
QUANTIZER_TORCH_BIAS_CORRECTION | The bias correction file in the quantization result directory does not match the current model. |
QUANTIZER_TORCH_CALIB_RESULT_MISMATCH | A node name mismatch was found when loading the quantization steps of tensors. Make sure the vai_q_pytorch and PyTorch versions used in test mode are the same as those used in calibration (or QAT training) mode. |
QUANTIZER_TORCH_EXPORT_ONNX | The quantized module, which is based on the PyTorch traced model, cannot be exported to ONNX because of a PyTorch internal failure. The failure reason is listed in the message text; the float model code may need to be adjusted. |
QUANTIZER_TORCH_EXPORT_XMODEL | Failed to convert the graph to an xmodel. Check the reasons in the message text. |
QUANTIZER_TORCH_FAST_FINETINE | The fast fine-tuned parameter file does not exist. Call load_ft_param in the model code to load the parameters. |
QUANTIZER_TORCH_FIX_INPUT_TYPE | A data type or value in the arguments of a quantization OP is illegal when exporting an ONNX format model. |
QUANTIZER_TORCH_ILLEGAL_BITWIDTH | The bit width configured for tensor quantization is illegal. It must be an integer within the range given in the message text. |
QUANTIZER_TORCH_IMPORT_KERNEL | Failed to import the vai_q_pytorch library file. Check whether the PyTorch version matches the vai_q_pytorch version (pytorch_nndct.__version__). |
QUANTIZER_TORCH_NO_CALIB_RESULT | The quantization result file does not exist. Check whether calibration has been done. |
QUANTIZER_TORCH_NO_CALIBRATION | Quantization calibration was not performed completely; check whether the forward function of the module is called. The forward function of torch_quantizer.quant_model must be called explicitly in user code (see the calibration sketch after this table). Refer to the example code at https://github.com/Xilinx/Vitis-AI/blob/master/src/Vitis-AI-Quantizer/vai_q_pytorch/example/resnet18_quant.py. |
QUANTIZER_TORCH_NO_FORWARD | The forward function of torch_quantizer.quant_model must be called before exporting the quantization result. Refer to the example code at https://github.com/Xilinx/Vitis-AI/blob/master/src/Vitis-AI-Quantizer/vai_q_pytorch/example/resnet18_quant.py. |
QUANTIZER_TORCH_OP_REGIST | An OP type cannot be registered multiple times. |
QUANTIZER_TORCH_PYTORCH_TRACE | Failed to get the PyTorch traced graph from the model and input arguments. The PyTorch internal failure reason is reported in the message text; the float model code may need to be adjusted. |
QUANTIZER_TORCH_QUANT_CONFIG | Quantization configuration items are illegal. Refer to the message text. |
QUANTIZER_TORCH_SHAPE_MISMATCH | Tensor shapes are mismatched. Refer to the message text. |
QUANTIZER_TORCH_TORCH_VERSION | The PyTorch version is not supported for this function or does not match the vai_q_pytorch version (pytorch_nndct.__version__). Refer to the message text. |
QUANTIZER_TORCH_XMODEL_BATCHSIZE | The batch size must be 1 when exporting an xmodel. |
QUANTIZER_TORCH_INSPECTOR_OUTPUT_FORMAT | The inspector only supports dumping SVG or PNG format. |
QUANTIZER_TORCH_INSPECTOR_INPUT_FORMAT | The inspector no longer supports fingerprints. Provide the architecture name instead. |
QUANTIZER_TORCH_UNSUPPORTED_OPS | The quantization of the op is not supported. |
QUANTIZER_TORCH_TRACED_NOT_SUPPORT | The model produced by 'torch.jit.script' is not supported in vai_q_pytorch. |
QUANTIZER_TORCH_NO_SCRIPT_MODEL | vai_q_pytorch cannot find any script model. |
QUANTIZER_TORCH_REUSED_MODULE | The quantized module is called multiple times in the forward pass. To share quantized parameters across multiple calls, call trainable_model with "allow_reused_module=True" (see the QAT sketch after this table). |
QUANTIZER_TORCH_DATA_PARALLEL_NOT_ALLOWED | A torch.nn.DataParallel object is not allowed. |
QUANTIZER_TORCH_INPUT_NOT_QUANTIZED | The input is not quantized. Use QuantStub/DeQuantStub to define the quantization scope (see the QuantStub sketch after this table). |
QUANTIZER_TORCH_NOT_A_MODULE | A quantized operation must be an instance of "torch.nn.Module"; replace the function with a "torch.nn.Module" object. The original source range is indicated in the message text. |
QUANTIZER_TORCH_QAT_PROCESS_ERROR | "trainable_model" must be called before getting the deployable model. |
QUANTIZER_TORCH_QAT_DEPLOYABLE_MODEL_ERROR | The given trained model has BN fused into CONV and cannot be converted to a deployable model. Make sure model.fuse_conv_bn() is not called. |
QUANTIZER_TORCH_XMODEL_DEVICE | An xmodel can only be exported in CPU mode; use deployable_model(src_dir, used_for_xmodel=True) to get a CPU model. |
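
Several of the messages above (QUANTIZER_TORCH_NO_CALIBRATION, QUANTIZER_TORCH_NO_FORWARD, QUANTIZER_TORCH_NO_CALIB_RESULT, QUANTIZER_TORCH_XMODEL_BATCHSIZE) come from skipping a step in the basic calibration/test flow. The following is a minimal sketch of that flow, modeled on the resnet18_quant.py example linked in the table; the toy model and random input are placeholders for a real model and calibration data.

```python
# Minimal sketch of the calibration/test flow; the toy model and
# random input are placeholders for a real model and dataset.
import torch
from pytorch_nndct.apis import torch_quantizer

float_model = torch.nn.Sequential(torch.nn.Conv2d(3, 8, 3), torch.nn.ReLU())
dummy_input = torch.randn(1, 3, 224, 224)

# Calibration mode: the forward function of quant_model must be called
# explicitly, otherwise QUANTIZER_TORCH_NO_CALIBRATION is raised.
quantizer = torch_quantizer('calib', float_model, (dummy_input,))
quant_model = quantizer.quant_model
quant_model(dummy_input)           # run calibration data through the model
quantizer.export_quant_config()    # write the quantization result file

# Test mode: a forward pass is again required before export
# (QUANTIZER_TORCH_NO_FORWARD), and the batch size must be 1 when
# exporting an xmodel (QUANTIZER_TORCH_XMODEL_BATCHSIZE).
quantizer = torch_quantizer('test', float_model, (dummy_input,))
quantizer.quant_model(dummy_input)
quantizer.export_xmodel()
```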
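
For QUANTIZER_TORCH_INPUT_NOT_QUANTIZED, the quantization scope is marked with stub modules in the model definition. The sketch below assumes the QuantStub/DeQuantStub classes exported by pytorch_nndct.nn.

```python
# Sketch of defining the quantization scope with QuantStub/DeQuantStub.
import torch
from pytorch_nndct.nn import QuantStub, DeQuantStub  # assumed import path

class QuantizedNet(torch.nn.Module):
    def __init__(self):
        super().__init__()
        self.quant = QuantStub()      # marks where quantization begins
        self.conv = torch.nn.Conv2d(3, 8, 3)
        self.relu = torch.nn.ReLU()
        self.dequant = DeQuantStub()  # marks where quantization ends

    def forward(self, x):
        x = self.quant(x)             # input enters the quantized scope here
        x = self.relu(self.conv(x))
        return self.dequant(x)        # output leaves the quantized scope here
```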
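
The QAT-related messages (QUANTIZER_TORCH_REUSED_MODULE, QUANTIZER_TORCH_QAT_PROCESS_ERROR, QUANTIZER_TORCH_XMODEL_DEVICE) all concern the order of calls in the quantization-aware training flow. Below is a sketch assuming the QatProcessor API of pytorch_nndct: trainable_model and deployable_model are named in the table, while the constructor arguments and the to_deployable conversion step are assumptions that may differ between vai_q_pytorch releases.

```python
import torch
from pytorch_nndct import QatProcessor  # assumed import path

model = torch.nn.Sequential(torch.nn.Conv2d(3, 8, 3), torch.nn.ReLU())
dummy_input = torch.randn(1, 3, 224, 224)

qat_processor = QatProcessor(model, (dummy_input,), bitwidth=8)

# allow_reused_module=True is required when a quantized module is called
# more than once in the forward pass (QUANTIZER_TORCH_REUSED_MODULE).
quantized_model = qat_processor.trainable_model(allow_reused_module=True)

# ... ordinary training loop on quantized_model goes here ...

# trainable_model must have been called before a deployable model can be
# produced (QUANTIZER_TORCH_QAT_PROCESS_ERROR). The conversion step that
# writes the result into the output directory is assumed here to be
# to_deployable; the name may vary by release.
output_dir = 'qat_result'
deployable = qat_processor.to_deployable(quantized_model, output_dir)

# An xmodel can only be exported from a CPU-mode model
# (QUANTIZER_TORCH_XMODEL_DEVICE).
cpu_model = qat_processor.deployable_model(output_dir, used_for_xmodel=True)
```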