Since V2X-Real uses multi-class predictions, the exact commands differ slightly from those for OPV2V and DAIR-V2X. These training and testing instructions apply to all end-to-end training methods. Note that we adopt HEAL as the codebase structure, and currently we only feature collaboration base training.
We use YAML files to configure all training parameters. To train your own model from scratch or resume from a checkpoint, run the following command:
```bash
python opencood/tools/train.py -y ${CONFIG_FILE} [--model_dir ${CHECKPOINT_FOLDER}]
```
Arguments Explanation:
- `-y` or `hypes_yaml`: the path of the training configuration file, e.g. `opencood/hypes_yaml/opv2v/LiDAROnly/lidar_fcooper.yaml`, meaning you want to train a FCooper model. We elaborate each entry of the YAML in the exemplar config file `opencood/hypes_yaml/exemplar.yaml`.
- `model_dir` (optional): the path of the checkpoints. This is used to fine-tune or continue training. When `model_dir` is given, the trainer will discard `hypes_yaml` and load the `config.yaml` in the checkpoint folder. In this case, `${CONFIG_FILE}` can be `None`.
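The precedence rule above (a given `model_dir` overrides `-y`) can be sketched as follows; `resolve_config_path` is a hypothetical helper, not part of the codebase, and the real trainer additionally parses the YAML and restores optimizer state:

```python
from pathlib import Path
from typing import Optional

def resolve_config_path(hypes_yaml: Optional[str], model_dir: Optional[str]) -> Path:
    """Pick the training config: model_dir/config.yaml wins over hypes_yaml."""
    if model_dir is not None:
        # Resuming or fine-tuning: discard hypes_yaml, use the saved config.
        return Path(model_dir) / "config.yaml"
    if hypes_yaml is None:
        raise ValueError("Either -y/--hypes_yaml or --model_dir must be given")
    # Training from scratch: use the user-supplied configuration file.
    return Path(hypes_yaml)

# -y given, no checkpoint folder: train from scratch with that YAML.
print(resolve_config_path("opencood/hypes_yaml/opv2v/LiDAROnly/lidar_fcooper.yaml", None))
# --model_dir given: hypes_yaml (even None) is ignored.
print(resolve_config_path(None, "checkpoints/fcooper_run1"))
```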
```bash
CUDA_VISIBLE_DEVICES=0,1 python -m torch.distributed.launch --nproc_per_node=2 --use_env opencood/tools/train_ddp.py -y ${CONFIG_FILE} [--model_dir ${CHECKPOINT_FOLDER}]
```
`--nproc_per_node` indicates the number of GPUs you will use.
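As a sanity check, `--nproc_per_node` should match the number of devices exposed via `CUDA_VISIBLE_DEVICES`. A stdlib-only sketch of that check (`visible_gpu_count` is a hypothetical helper, not part of the repo):

```python
def visible_gpu_count(env: dict) -> int:
    """Count GPUs exposed by CUDA_VISIBLE_DEVICES (0 if unset or empty)."""
    value = env.get("CUDA_VISIBLE_DEVICES", "").strip()
    if not value:
        return 0
    return len([d for d in value.split(",") if d.strip()])

# For the command above, CUDA_VISIBLE_DEVICES=0,1 matches --nproc_per_node=2.
print(visible_gpu_count({"CUDA_VISIBLE_DEVICES": "0,1"}))  # 2
```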
```bash
python opencood/tools/inference_mc.py --model_dir ${CHECKPOINT_FOLDER} [--fusion_method intermediate]
```
`inference_mc.py` has more optional arguments; you can inspect the file for details. `[--fusion_method intermediate]`: the default fusion method is intermediate fusion. Depending on your fusion strategy during training, the available `fusion_method` options are:
- `single`: only the ego agent's detections, only the ego's GT boxes. [only for late fusion dataset]
- `no`: only the ego agent's detections, all agents' fused GT boxes. [only for late fusion dataset]
- `late`: late fusion detections from all agents, all agents' fused GT boxes. [only for late fusion dataset]
- `early`: early fusion detections from all agents, all agents' fused GT boxes. [only for early fusion dataset]
- `intermediate`: intermediate fusion detections from all agents, all agents' fused GT boxes. [only for intermediate fusion dataset]
- You could refer to the `/scripts` folder for example running scripts. `mc` stands for multi-class, which differentiates it from single-class training and inference. `/scripts/inference_mc/inference_mc_fp.sh` refers to full-precision inference, as opposed to `/scripts/inference_mc/inference_mc_quant.sh`, which involves a post-training quantization (PTQ) stage.
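To illustrate what a PTQ stage conceptually does, here is a minimal, hypothetical sketch of symmetric per-tensor int8 quantization of a weight list; the actual pipeline in `inference_mc_quant.sh` is more involved (e.g. activation calibration), and these helper names are illustrative only:

```python
def quantize_int8(weights):
    """Symmetric per-tensor int8 PTQ: scale by max |w|, round, clamp."""
    max_abs = max(abs(w) for w in weights)
    scale = max_abs / 127.0 if max_abs > 0 else 1.0
    q = [max(-127, min(127, round(w / scale))) for w in weights]
    return q, scale

def dequantize(q, scale):
    """Recover approximate float weights from int8 values."""
    return [v * scale for v in q]

w = [0.52, -1.27, 0.003, 0.9]
q, s = quantize_int8(w)
w_hat = dequantize(q, s)
# Round-trip error is bounded by half a quantization step (scale / 2).
print(max(abs(a - b) for a, b in zip(w, w_hat)) <= s / 2)  # True
```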
Early fusion fuses raw LiDAR point cloud data from neighboring agents to create a more holistic view of the environment, leading to better predictions. Late fusion receives independent 3D detections (bounding boxes) from neighboring agents and merges them to produce consistent and more accurate predictions.
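A toy sketch of the two strategies, with hypothetical helpers in 2D for brevity (the real pipeline operates on full LiDAR sweeps and 3D boxes): early fusion pools neighbors' raw points after transforming them into the ego frame, while late fusion merges per-agent boxes, here with simple score-based de-duplication standing in for NMS:

```python
import math

def to_ego_frame(points, yaw, tx, ty):
    """Rigid 2D transform of a neighbor agent's points into the ego frame."""
    c, s = math.cos(yaw), math.sin(yaw)
    return [(c * x - s * y + tx, s * x + c * y + ty) for x, y in points]

def early_fusion(ego_points, neighbor_points, neighbor_pose):
    """Early fusion: pool raw points from all agents in the ego frame."""
    yaw, tx, ty = neighbor_pose
    return ego_points + to_ego_frame(neighbor_points, yaw, tx, ty)

def late_fusion(detections, dist_thresh=1.0):
    """Late fusion: keep the highest-scoring box among near-duplicates.

    Each detection is ((x, y), score); center distance stands in for box IoU.
    """
    kept = []
    for center, score in sorted(detections, key=lambda d: -d[1]):
        if all(math.dist(center, c) > dist_thresh for c, _ in kept):
            kept.append((center, score))
    return kept

# Two agents detect the same object near (5, 5); late fusion keeps one box.
dets = [((5.0, 5.0), 0.9), ((5.2, 4.9), 0.7), ((20.0, 1.0), 0.8)]
print(late_fusion(dets))  # the duplicate at (5.2, 4.9) is suppressed
```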
We use YAML files to configure all training parameters. To train an early or late fusion model from scratch or resume from a checkpoint, run the following commands:
```bash
python ./opencood/tools/train.py -y ./opencood/hypes_yaml/v2x_real/LiDAROnly/lidar_[early/late]_mc_fusion.yaml
```
```bash
python opencood/tools/inference_mc.py --model_dir ${CHECKPOINT_FOLDER} [--fusion_method early/late]
```
- You could also run single-class early/late fusion with the YAML files `lidar_early_fusion.yaml` and `lidar_late_fusion.yaml` in the `./opencood/hypes_yaml/dairv2x/LiDAROnly` folder. However, you will need to test those models with `inference.py` instead of `inference_mc.py`.