Errors in pointcloud-to-image reprojection #54
It could be the motion compensation. The way I remove motion distortion in the tutorial is a bit "lazy". The ground truth provides poses like T_enu_applanix at around 200 Hz. If you want the best motion compensation for the lidar pointcloud, you need to interpolate the ground truth for each lidar point timestamp t and do p_enu = T_enu_applanix(t) * T_applanix_lidar * p_lidar(t). You can use SLERP interpolation for the rotation and linear interpolation on position for T_enu_applanix.
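For concreteness, here is a minimal sketch of that per-point compensation, assuming the ground truth is available as sorted timestamps, a scipy `Rotation` stack, and a position array. The function names (`interpolate_poses`, `motion_compensate`) and argument layout are illustrative assumptions, not the tutorial's API:

```python
import numpy as np
from scipy.spatial.transform import Rotation, Slerp

def interpolate_poses(pose_times, rotations, positions, query_times):
    """Interpolate T_enu_applanix at per-point timestamps.

    pose_times:  (N,) sorted ground-truth timestamps (~200 Hz)
    rotations:   scipy Rotation holding the N ground-truth orientations
    positions:   (N, 3) ground-truth translations
    query_times: (M,) lidar point timestamps
    """
    R_interp = Slerp(pose_times, rotations)(query_times).as_matrix()   # SLERP on rotation
    p_interp = np.stack([np.interp(query_times, pose_times, positions[:, i])
                         for i in range(3)], axis=-1)                  # linear interp on position
    T = np.tile(np.eye(4), (len(query_times), 1, 1))
    T[:, :3, :3] = R_interp
    T[:, :3, 3] = p_interp
    return T  # (M, 4, 4): one T_enu_applanix(t) per point

def motion_compensate(p_lidar, t_lidar, pose_times, rotations, positions,
                      T_applanix_lidar):
    """p_enu = T_enu_applanix(t) @ T_applanix_lidar @ p_lidar(t)."""
    T_enu_applanix = interpolate_poses(pose_times, rotations, positions, t_lidar)
    p_h = np.hstack([p_lidar, np.ones((len(p_lidar), 1))])   # homogeneous coords
    p_applanix = (T_applanix_lidar @ p_h.T).T                # lidar frame -> applanix frame
    return np.einsum('nij,nj->ni', T_enu_applanix, p_applanix)[:, :3]
```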
@keenan-burnett Hello, I've done exactly what you suggested, and the problem still exists. These are the functions involved (bodies omitted here):

```python
def get_gt_data_for_traversal(root): ...
def is_sorted_along_axis(arr, axis): ...
def so3_to_quaternion(so3): ...
def my_interpolate_poses(pose_timestamps, requested_timestamps, abs_quaternions, abs_positions): ...
def process_sequence(seq_list): ...
```

Would you mind checking the code above for me?
Your code looks fine. Can you post some examples of what the projection looks like using SLERP / linear interp?
Below are several example pairs of projections. In each pair, the first image uses the motion compensation you provided and the second uses my version. The red circles mark where I think there is a slight difference between the two images, which shows that the projections don't follow exactly the same procedure. As you can see, the two kinds of projection are almost the same, even where the projection errors lie.
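Rather than judging the two versions by eye, a quick quantitative check could help here. A minimal sketch, assuming both pipelines keep per-point correspondence (the helper name is hypothetical):

```python
import numpy as np

def mean_pixel_displacement(uv_a, uv_b):
    """Mean per-point displacement (in pixels) between two projections of the
    same lidar points, e.g. tutorial compensation vs. the SLERP/lerp version."""
    return np.linalg.norm(uv_a - uv_b, axis=1).mean()
```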
There's not much I can do to fix this at this point. As I mentioned above, you can try adjusting the camera-lidar temporal synchronization to get better poses for the camera, or you can try adjusting the camera's intrinsics.
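As a hedged illustration of what adjusting the temporal synchronization could look like, reusing the hypothetical `interpolate_poses` sketch above (`dt_cam` and `T_applanix_camera` are assumptions, not dataset fields):

```python
import numpy as np

def camera_pose_with_offset(t_cam, dt_cam, pose_times, rotations, positions,
                            T_applanix_camera):
    """Query T_enu_camera at a shifted timestamp t_cam + dt_cam; sweeping
    dt_cam over a small range and inspecting the reprojection is one way to
    tune the camera-lidar synchronization."""
    T_enu_applanix = interpolate_poses(pose_times, rotations, positions,
                                       np.array([t_cam + dt_cam]))[0]
    return T_enu_applanix @ T_applanix_camera
```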
These projections look nearly identical; you would have to pick an example where the car is driving faster to make the difference more noticeable.
Hi, thanks for creating this dataset, it seems very interesting! 😄
I've been looking at projecting the lidar points onto the images to generate depth maps, but have been seeing some errors while following the tutorial.
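For reference, a minimal sketch of the depth-map step being described, assuming the points are already in the camera frame and the intrinsics K are known (a hypothetical helper, not the tutorial's code):

```python
import numpy as np

def points_to_depth_map(p_cam, K, height, width):
    """Project camera-frame lidar points into a sparse depth map.

    p_cam: (N, 3) points already transformed into the camera frame
    K:     (3, 3) camera intrinsic matrix
    """
    p = p_cam[p_cam[:, 2] > 0]                     # keep points in front of the camera
    uv = (K @ p.T).T
    uv = uv[:, :2] / uv[:, 2:3]                    # pinhole projection
    u, v, z = uv[:, 0].astype(int), uv[:, 1].astype(int), p[:, 2]
    ok = (u >= 0) & (u < width) & (v >= 0) & (v < height)
    depth = np.zeros((height, width))
    order = np.argsort(-z[ok])                     # write far-to-near so the nearest return wins
    depth[v[ok][order], u[ok][order]] = z[ok][order]
    return depth
```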
When projecting points in frames where the vehicle is static, the quality of the reprojection is really high, which shows that the lidar-image calibration is very accurate.
However, when the car is in motion, I have noticed that the reprojections are incorrect (particularly on the left image border). Below I've attached a couple of examples. This seems to happen even when the vehicle is travelling forward in an open street (i.e. no urban canyon).
Do you think this is due to incorrect pose estimates? Or could it be related to the pointcloud motion compensation?
As you mention in the paper, using the relative pose instead of the global pose would likely be more accurate for short timescales. Are those poses available in the dataset? As far as I can tell, there is only the raw IMU data.
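For what it's worth, a relative pose can be composed from two global poses; a sketch, assuming 4x4 homogeneous matrices:

```python
import numpy as np

def relative_pose(T_enu_a, T_enu_b):
    """T_a_b: pose of frame b expressed in frame a. Errors common to both
    global poses (e.g. a slowly varying GPS bias) cancel in the composition."""
    return np.linalg.inv(T_enu_a) @ T_enu_b
```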
Thank you!