VAIQ_WARN

Vitis AI User Guide (UG1414)

vai_q_pytorch prints a warning message on the screen when it detects an issue that may cause the quantization result to be incorrect or incomplete (check the message text for details), but the process can still run to completion. The format of this kind of message is "[VAIQ_WARN][MESSAGE_ID]: message text".
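For example, using one of the message IDs listed below, a warning of this kind appears on the console in the following form (the message text shown here is illustrative; the exact wording varies):

    [VAIQ_WARN][QUANTIZER_TORCH_REPLACE_RELU6]: ReLU6 has been replaced by ReLU.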

Important warning messages are listed in the following table:

Table 1. Vai_q_pytorch warning message table
Message ID Description
QUANTIZER_TORCH_BATCHNORM_AFFINE BatchNorm OP attribute affine=False has been replaced by affine=True when parsing the model.
QUANTIZER_TORCH_BITWIDTH_MISMATCH The bit width setting in the configuration file conflicts with the one from the torch_quantizer API; the setting in the configuration file is used.
QUANTIZER_TORCH_CONVERT_XMODEL Converting to xmodel failed. Check the message text to locate the reason.
QUANTIZER_TORCH_CUDA_UNAVAILABLE CUDA (HIP) is not available; the device is changed to CPU.
QUANTIZER_TORCH_DATA_PARALLEL Data parallel is not supported. The wrapper 'torch.nn.DataParallel' has been removed in vai_q_pytorch.
QUANTIZER_TORCH_DEPLOY_MODEL Only the quantization aware training process produces a deployable model.
QUANTIZER_TORCH_DEVICE_MISMATCH The device of the input arguments does not match the quantizer device type.
QUANTIZER_TORCH_EXPORT_XMODEL Failed to generate the xmodel. Refer to the message text for the reason.
QUANTIZER_TORCH_FINETUNE_IGNORED Fast fine-tune function will be ignored in test mode!
QUANTIZER_TORCH_FLOAT_OP vai_q_pytorch recognizes the listed OP as a float operator by default.
QUANTIZER_TORCH_INSPECTOR_PATTERN The OP may be fused by the compiler and will be assigned to the DPU.
QUANTIZER_TORCH_LEAKYRELU Force to change negative_slope of LeakyReLU to 0.1015625 because DPU only supports this value. It is recommended to change all negative_slope of LeakyReLU to 0.1015625 and re-train the float model for better deployed model accuracy.
QUANTIZER_TORCH_MATPLOTLIB matplotlib is needed for visualization but was not found. It needs to be installed.
QUANTIZER_TORCH_MEMORY_SHORTAGE There is not enough memory for fast fine-tuning, and this process is ignored. Try using a smaller calibration dataset.
QUANTIZER_TORCH_NO_XIR The XIR package cannot be found in the environment. It needs to be installed.
QUANTIZER_TORCH_REPLACE_RELU6 ReLU6 has been replaced by ReLU.
QUANTIZER_TORCH_REPLACE_SIGMOID Sigmoid has been replaced by Hardsigmoid.
QUANTIZER_TORCH_REPLACE_SILU SiLU has been replaced by Hardswish.
QUANTIZER_TORCH_SHIFT_CHECK Quantization scale is too large or too small.
QUANTIZER_TORCH_TENSOR_NOT_QUANTIZED Some tensors are not quantized; check what is particular about them.
QUANTIZER_TORCH_TENSOR_TYPE_NOT_QUANTIZABLE The tensor type of the node cannot be quantized. Only float32/double/float16 quantization is supported.
QUANTIZER_TORCH_TENSOR_VALUE_INVALID The tensor has "inf" or "nan" value. Quantization for this tensor is ignored.
QUANTIZER_TORCH_TORCH_VERSION Exporting TorchScript is only supported with PyTorch 1.10 and later versions.
QUANTIZER_TORCH_XIR_MISMATCH XIR version does not match current vai_q_pytorch.
QUANTIZER_TORCH_XMODEL_DEVICE Dumping an xmodel is not supported when the target device is not DPU.
QUANTIZER_TORCH_REUSED_MODULE A reused module may lead to low QAT accuracy; make sure this is what you expect. Refer to the message text to locate the module with the issue.
QUANTIZER_TORCH_DEPRECATED_ARGUMENT The argument "device" is no longer needed. Device information is obtained from the model directly.
QUANTIZER_TORCH_SCALE_VALUE Exported scale values are not trained.
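
These warnings are printed to the console during the normal quantization flow. The following is a minimal sketch of a calibration and export run with the torch_quantizer API during which such messages may appear; the model definition, input shape, and arguments used here are placeholders, not part of the message reference above.

    import torch
    from pytorch_nndct.apis import torch_quantizer

    # Placeholder float model and dummy input; replace with your own.
    float_model = torch.nn.Sequential(
        torch.nn.Conv2d(3, 8, 3),
        torch.nn.ReLU6(),   # may trigger QUANTIZER_TORCH_REPLACE_RELU6
    )
    dummy_input = torch.randn(1, 3, 224, 224)

    # Calibration pass: [VAIQ_WARN][...] messages are printed on screen here.
    quantizer = torch_quantizer("calib", float_model, (dummy_input,))
    quant_model = quantizer.quant_model
    quant_model(dummy_input)            # run calibration data through the model
    quantizer.export_quant_config()     # write the quantization configuration

    # Test pass and xmodel export; QUANTIZER_TORCH_CONVERT_XMODEL and
    # QUANTIZER_TORCH_EXPORT_XMODEL relate to this step.
    quantizer = torch_quantizer("test", float_model, (dummy_input,))
    quantizer.quant_model(dummy_input)
    quantizer.export_xmodel(deploy_check=False)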