Please refer to the Tutorial of Baseline Training and Inference on the V2X-Real dataset and the Tutorial of Codebook Learning on the V2X-Real dataset before reading this documentation. Post-training quantization (PTQ) involves the curation of a calibration dataset and the calibration process described in the paper.
```
python opencood/tools/inference_mc_quant.py ${CHECKPOINT_FOLDER} [--fusion_method intermediate] --num_cali_batches 16 --n_bits_w 8 --n_bits_a 8 --iters_w 5000
```
- `num_cali_batches` refers to the size of the calibration dataset.
- `n_bits_w` refers to the bitwidth for weight quantization.
- `n_bits_a` refers to the bitwidth for activation quantization.
- `iters_w` refers to the number of calibration steps.
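To give intuition for what the bitwidth parameters control, below is a minimal sketch of uniform symmetric quantization, the basic operation underlying weight/activation quantization. This is an illustrative example only, not the repository's actual quantizer; the function name `uniform_quantize` and the max-based scale choice are assumptions for demonstration.

```python
import numpy as np

def uniform_quantize(w: np.ndarray, n_bits: int) -> np.ndarray:
    """Illustrative uniform symmetric quantizer: map floats to n_bits
    signed integer levels, then dequantize back to float."""
    qmax = 2 ** (n_bits - 1) - 1           # e.g. 127 for 8-bit signed
    scale = np.abs(w).max() / qmax         # one scale per tensor (sketch)
    codes = np.clip(np.round(w / scale), -qmax - 1, qmax)  # integer codes
    return codes * scale                   # dequantized values for inference

rng = np.random.default_rng(0)
weights = rng.standard_normal(8).astype(np.float32)
print(uniform_quantize(weights, 8))        # close to the originals
print(uniform_quantize(weights, 2))        # coarse: only 4 levels remain
```

With `n_bits_w 8` the weights keep 256 representable levels, so the quantized tensor stays close to full precision; lowering the bitwidth trades accuracy for a smaller model, which is why the calibration steps (`iters_w`) are needed to reduce the resulting error.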
- You could refer to `/scripts/inference_mc/inference_mc_quant.sh` for example running scripts. `mc` stands for `multi-class`, which differentiates itself from `single-class` training and inference. `quant` stands for the PTQ process, as opposed to `fp`, which stands for full-precision inference.