We evaluate on the KITTI RAW sequences. To produce poses for every sequence, please refer to the naive method described in the kitti_visualize repo.
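For reference, here is a minimal sketch of loading such poses, assuming they are stored in the KITTI odometry convention (one line per frame, twelve floats forming a flattened 3x4 [R|t] matrix); the exact layout produced by kitti_visualize may differ:

```python
import numpy as np

def load_poses(pose_file):
    """Load per-frame poses, assuming KITTI odometry format:
    each line holds 12 floats, a flattened 3x4 [R|t] matrix."""
    poses = []
    with open(pose_file) as f:
        for line in f:
            if not line.strip():
                continue
            mat = np.array(line.split(), dtype=np.float64).reshape(3, 4)
            pose = np.eye(4)        # promote to a homogeneous 4x4 pose
            pose[:3, :4] = mat
            poses.append(pose)
    return poses
```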
Baseline:

```bash
## copy example config
cd config
cp kitti_wpose_example kitti_wpose.py

## Modify config path
nano kitti_wpose.py
cd ..

## Train
./launcher/train.sh config/kitti_wpose.py 0 $experiment_name

## Evaluation
python3 scripts/test.py config/kitti_wpose.py 0 $CHECKPOINT_PATH
```
Using the baseline model alone is fine for most projects. After training the baseline, you can further re-train it with self-distillation:
```bash
## export checkpoint
python3 monodepth/transform_teacher.py $Pretrained_checkpoint $output_compressed_checkpoint

## copy example config
cd config
cp distill_kitti_example distill_kitti.py

## Modify config path and checkpoint path based on $output_compressed_checkpoint
nano distill_kitti.py
cd ..

## Train
./launcher/train.sh config/distill_kitti.py 0 $experiment_name
```
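The transform_teacher.py step packages the trained baseline as a teacher checkpoint for distillation. As a rough sketch of what such an export typically does (the actual script may behave differently), assuming a standard PyTorch checkpoint layout:

```python
import sys
import torch

def export_teacher(src_path, dst_path):
    # Assumption: the checkpoint is a dict that may bundle optimizer
    # state alongside the weights; keep only the model weights so the
    # distillation config can load a small, inference-only teacher.
    ckpt = torch.load(src_path, map_location='cpu')
    state_dict = ckpt.get('model_state_dict', ckpt)  # key name is a guess
    torch.save({'model_state_dict': state_dict}, dst_path)

if __name__ == '__main__':
    export_teacher(sys.argv[1], sys.argv[2])
```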
Check demos/demo.ipynb for dataset visualization and simple inference demos.
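For a quick look at a predicted depth map outside the notebook, a minimal matplotlib sketch; the .npy file below is a placeholder for whatever the model actually returns:

```python
import matplotlib.pyplot as plt
import numpy as np

# Placeholder input: substitute the network's actual H x W depth output.
depth = np.load('example_depth.npy')

plt.imshow(depth, cmap='magma')        # perceptually uniform colormap
plt.colorbar(label='depth [m]')
plt.axis('off')
plt.savefig('depth_vis.png', bbox_inches='tight')
```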
We support exporting a pretrained model to an ONNX model; you need to install onnx and onnxruntime first.
```bash
python3 scripts/onnx_export.py $CONFIG_FILE $CHECKPOINT_PATH $ONNX_PATH
```
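A minimal onnxruntime sanity check for the exported model; the input shape below is an assumption, so query the session for the real input name and shape rather than hard-coding them:

```python
import numpy as np
import onnxruntime as ort

session = ort.InferenceSession('model.onnx', providers=['CPUExecutionProvider'])

# Inspect the exported graph instead of guessing names and shapes.
inp = session.get_inputs()[0]
print(inp.name, inp.shape)

# Dummy image batch; 1 x 3 x 384 x 1280 is an assumption, follow inp.shape.
dummy = np.random.rand(1, 3, 384, 1280).astype(np.float32)
outputs = session.run(None, {inp.name: dummy})
print([o.shape for o in outputs])
```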
For ROS deployment:

- Launch kitti_visualize to stream image data topics and visualize them in RViz.
- Launch monodepth_ros to run inference on the camera topics.
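For orientation, a bare-bones subscriber of the kind monodepth_ros implements, assuming ROS1/rospy; the topic name and the commented model call are placeholders, not the node's actual interface:

```python
import rospy
from cv_bridge import CvBridge
from sensor_msgs.msg import Image

bridge = CvBridge()

def on_image(msg):
    # Convert the ROS image message to a numpy array for the network.
    img = bridge.imgmsg_to_cv2(msg, desired_encoding='rgb8')
    # depth = predict_depth(img)  # placeholder for the actual model call
    rospy.loginfo('received %dx%d frame', msg.width, msg.height)

rospy.init_node('monodepth_listener')
# Topic name is an assumption; match it to what kitti_visualize publishes.
rospy.Subscriber('/kitti/camera_color/image_raw', Image, on_image)
rospy.spin()
```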