There are three documentation files:

  • BACKGROUND.md: Gives a high-level overview of how the robot-camera calibration is performed, along with tips for data collection to get accurate results.
  • MULTI_CAM_CALIB.md: Documentation for performing multi-camera extrinsic calibration using Kalibr, a calibration tool for multi-camera and IMU systems.
  • ROBOT_CALIB_USAGE.md: Documentation for using the single camera-robot calibration.

Description of scripts:

This repo also contains a set of "test" scripts that exercise the functionality of the code base and give the user an idea of how to use it. A description of each script is given below.

  • Operation:

    • An arucoboard with its parameters specified by an ArucoBoardData object is detected, and its pose is estimated on a test image stored locally in ../tests/data (see the sketch after this list).
  • Expected output:

    • The mean and variance of the reprojection error, computed over each detected corner, are printed.
    • The detected markers and the estimated arucoboard frame are drawn over the input image and displayed in a new window.
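The detect-estimate-reproject flow can be reproduced with plain OpenCV. This is a minimal sketch, assuming the pre-4.7 opencv-contrib aruco interface; the board geometry, dictionary, intrinsics, and image path are placeholders, not values from the repo.

    # Minimal sketch (not the repo's exact API): arucoboard pose estimation with
    # the classic opencv-contrib aruco module (pre-4.7 interface).
    import cv2
    import numpy as np

    image = cv2.imread("../tests/data/aruco_board.png")          # placeholder path
    camera_matrix = np.array([[615.0, 0, 320.0],
                              [0, 615.0, 240.0],
                              [0, 0, 1.0]])                       # placeholder intrinsics
    dist_coeffs = np.zeros(5)

    dictionary = cv2.aruco.Dictionary_get(cv2.aruco.DICT_5X5_50)  # placeholder dictionary
    board = cv2.aruco.GridBoard_create(
        markersX=4, markersY=5, markerLength=0.03, markerSeparation=0.006,
        dictionary=dictionary)                                    # placeholder geometry

    corners, ids, _ = cv2.aruco.detectMarkers(image, dictionary)
    n_used, rvec, tvec = cv2.aruco.estimatePoseBoard(
        corners, ids, board, camera_matrix, dist_coeffs, None, None)

    # Reprojection error: project each detected marker's board corners with the
    # estimated pose and compare against the detected image corners. Assumes the
    # board uses marker ids 0..N-1, as GridBoard_create does by default.
    errors = []
    for marker_corners, marker_id in zip(corners, ids.flatten()):
        obj_pts = board.objPoints[marker_id]                      # 4x3 corners in the board frame
        img_pts, _ = cv2.projectPoints(obj_pts, rvec, tvec, camera_matrix, dist_coeffs)
        errors.append(np.linalg.norm(img_pts.reshape(-1, 2) - marker_corners.reshape(-1, 2), axis=1))
    errors = np.concatenate(errors)
    print("reprojection error mean/var:", errors.mean(), errors.var())

    cv2.aruco.drawDetectedMarkers(image, corners, ids)
    cv2.drawFrameAxes(image, camera_matrix, dist_coeffs, rvec, tvec, 0.05)
    cv2.imshow("arucoboard pose", image)
    cv2.waitKey(0)
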
  • Operation:
    • An arucotag with its parameters specified by an ArucoTag object is detected, and its pose is estimated on a test image stored locally in ../tests/data (see the sketch after this list).
  • Expected output:
    • The mean and variance of the reprojection error, computed over each detected corner, are printed.
    • The detected marker and the estimated arucotag frame are drawn over the input image and displayed in a new window.
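For a single tag the flow is shorter; this sketch again assumes the pre-4.7 opencv-contrib aruco interface, with the marker length, dictionary, intrinsics, and image path as placeholders.

    # Minimal sketch: single arucotag detection and pose estimation.
    import cv2
    import numpy as np

    image = cv2.imread("../tests/data/aruco_tag.png")             # placeholder path
    camera_matrix = np.array([[615.0, 0, 320.0], [0, 615.0, 240.0], [0, 0, 1.0]])
    dist_coeffs = np.zeros(5)

    dictionary = cv2.aruco.Dictionary_get(cv2.aruco.DICT_4X4_50)  # placeholder dictionary
    corners, ids, _ = cv2.aruco.detectMarkers(image, dictionary)

    # One rvec/tvec pair per detected marker; 0.05 is the tag side length in metres.
    rvecs, tvecs, _ = cv2.aruco.estimatePoseSingleMarkers(
        corners, 0.05, camera_matrix, dist_coeffs)

    cv2.aruco.drawDetectedMarkers(image, corners, ids)
    cv2.drawFrameAxes(image, camera_matrix, dist_coeffs, rvecs[0], tvecs[0], 0.03)
    cv2.imshow("arucotag pose", image)
    cv2.waitKey(0)
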
  • Operation:
    • Creates an image (.jpeg or .png) of an arucoboard with the parameters passed as arguments (see the sketch after this list).
  • Expected output:
    • An image of an arucoboard with the parameters specified in the script, stored here.
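Rendering a board image can be done directly with OpenCV; a minimal sketch assuming the pre-4.7 opencv-contrib aruco API, with the board geometry and output path as placeholders.

    # Minimal sketch: render an arucoboard to an image file.
    import cv2

    dictionary = cv2.aruco.Dictionary_get(cv2.aruco.DICT_5X5_50)  # placeholder dictionary
    board = cv2.aruco.GridBoard_create(
        markersX=4, markersY=5, markerLength=0.03, markerSeparation=0.006,
        dictionary=dictionary)                                    # placeholder geometry

    # Output size is in pixels; marginSize adds a white border around the board.
    board_image = cv2.aruco.drawPlanarBoard(board, (800, 1000), marginSize=20, borderBits=1)
    cv2.imwrite("arucoboard.png", board_image)                    # placeholder output path
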
  • Operation:

    • Gets a live image from a RealSense RGBD camera.
    • Opens a window where you can click the top-left and bottom-right vertices of the 2D bounding box; press Escape after recording these two clicks (see the sketch after this list).
  • Expected output:

    • Shows the image with the bounding box drawn.
    • Shows only the region inside the bounding box.
    • Shows a segmentation map of the bounding box.
    • Shows the segmentation obtained by filtering out pixels with depth greater than the average depth inside the box.
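A minimal sketch of the click-to-box and depth-threshold segmentation steps described above, assuming OpenCV for the GUI and an already-captured, aligned depth image; the file names are placeholders standing in for the live camera frames.

    # Minimal sketch: select a bbox with two clicks, then segment by depth.
    import cv2
    import numpy as np

    clicks = []

    def on_mouse(event, x, y, flags, param):
        # Record the top-left and bottom-right corners on left-button clicks.
        if event == cv2.EVENT_LBUTTONDOWN and len(clicks) < 2:
            clicks.append((x, y))

    color = cv2.imread("color.png")                    # placeholder: live RGB frame
    depth = np.load("depth.npy")                       # placeholder: aligned depth frame

    cv2.namedWindow("select bbox")
    cv2.setMouseCallback("select bbox", on_mouse)
    while True:
        cv2.imshow("select bbox", color)
        if cv2.waitKey(30) == 27 and len(clicks) == 2:  # Escape after two clicks
            break

    (x0, y0), (x1, y1) = clicks
    roi_depth = depth[y0:y1, x0:x1]

    # Keep pixels closer than the average valid depth inside the box; farther
    # pixels are treated as background.
    mask = (roi_depth > 0) & (roi_depth < roi_depth[roi_depth > 0].mean())

    cv2.rectangle(color, (x0, y0), (x1, y1), (0, 255, 0), 2)
    cv2.imshow("bbox", color)
    cv2.imshow("segmentation", (mask * 255).astype(np.uint8))
    cv2.waitKey(0)
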

Note: make sure you have activated the right conda environment

  • Operation:
    • Loads the weights specified in the test_config dictionary.
    • Gets a live RGBD image from the camera.
    • Gets an input bounding box from the user.
    • Gets a segmentation map from the bounding box, depending on a boolean flag in the test_config dictionary.
  • Expected output:
    • Asks the user to click the top-left and bottom-right corners of the bounding box of the object we wish to grasp.
    • Runs ContactGraspNet inference and shows the gripper pose frames in the point cloud in an Open3D window.

Note: make sure you have activated the right conda environment

  • Operation:
    • Gets an image using the camera driver and displays it in a GUI.
    • Loads the DOPE model based on a .yaml file that specifies the weights for the soup-can detector.
    • Performs pose estimation on the input image to detect the pose of a YCB soup can (see the sketch after this list).
    • Make sure there is a YCB soup can in the field of view of the camera.
  • Expected output:
    • An image named test.jpg with the 3D bounding box overlaid, stored here.
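Once a pose estimate is available, the 3D bounding-box overlay amounts to projecting the cuboid corners into the image. A minimal sketch follows; the cuboid dimensions, pose, intrinsics, and file names are placeholders, not DOPE's own values.

    # Minimal sketch: overlay a 3D bounding box given an estimated object pose.
    import cv2
    import numpy as np

    image = cv2.imread("test_input.jpg")               # placeholder input image
    camera_matrix = np.array([[615.0, 0, 320.0], [0, 615.0, 240.0], [0, 0, 1.0]])
    dist_coeffs = np.zeros(5)
    rvec = np.zeros(3)                                 # placeholder rotation (from the estimator)
    tvec = np.array([0.0, 0.0, 0.5])                   # placeholder translation in metres

    # Cuboid corners in the object frame (rough soup-can extents, placeholder).
    w, h, d = 0.066, 0.101, 0.066
    corners = np.array([[sx * w / 2, sy * h / 2, sz * d / 2]
                        for sx in (-1, 1) for sy in (-1, 1) for sz in (-1, 1)])

    img_pts, _ = cv2.projectPoints(corners, rvec, tvec, camera_matrix, dist_coeffs)
    img_pts = img_pts.reshape(-1, 2).astype(int)

    # Draw the 12 cuboid edges between projected corners.
    edges = [(0, 1), (0, 2), (1, 3), (2, 3), (4, 5), (4, 6), (5, 7), (6, 7),
             (0, 4), (1, 5), (2, 6), (3, 7)]
    for i, j in edges:
        cv2.line(image, tuple(img_pts[i]), tuple(img_pts[j]), (0, 255, 0), 2)
    cv2.imwrite("test.jpg", image)
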
  • Operation:
    • Loads the mesh for the Franka gripper (see the sketch after this list).
  • Expected output:
    • Shows an Open3D window visualizing the Franka gripper.
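Loading and viewing a mesh with Open3D takes only a few lines; the mesh path below is a placeholder for the Franka gripper mesh used by the repo.

    # Minimal sketch: load a gripper mesh and visualize it with Open3D.
    import open3d as o3d

    mesh = o3d.io.read_triangle_mesh("franka_gripper.obj")   # placeholder path
    mesh.compute_vertex_normals()                            # needed for shaded rendering
    o3d.visualization.draw_geometries([mesh])
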
  • Pre-requisites:
    • This script shows how desired grasps can be recorded for an arbitrary object, using the default Franka arm and Franka gripper.
    • This script also assumes only one RealSense camera is connected to the workstation computer.
    • The object can carry any one of the available AR markers, or a pose estimator can be used instead.
    • This script assumes the robot and camera are already extrinsically calibrated; it works for both the camera-in-hand and camera-in-environment cases.
    • The robot needs to be in "white" status-LED mode.
    • On the realtime computer (docker), ensure the "read_states" server is running by executing the following:
    cd <path_to_franka_control_suite>/franka_control_suite/build
    ./read_states <robot_ip> <realtime_pc_ip> <zmq_port_number>
    
    • Edit lines 8-18 of test_grasp_recorder_aruco.py to match your test case.
    • Ensure the object is not moved for as long as this script is running.
  • Operation:
    • The object's pose is first estimated using the AR-tag pose estimator (this can be replaced by any pose estimator for the object).
    • The gripper can then be moved by hand (in gravity-compensation mode) to the desired grasp poses. Note: do not close the gripper; each pose is recorded with respect to the object's frame of reference (see the sketch after this list).
    • Collect as many grasp poses as you want.
  • Expected Output:
    • A YAML file at the location specified in the test_config dictionary, containing 4x4 arrays representing the grasp poses.
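A minimal sketch of the math behind recording a grasp, under these assumptions: T_base_cam comes from the extrinsic calibration, T_cam_obj from the AR-tag pose estimator, and T_base_ee from the robot's forward kinematics. The variable names and output path are placeholders, not the repo's API.

    # Minimal sketch: express the current end-effector pose in the object frame
    # and append it to a YAML file of recorded grasps.
    import numpy as np
    import yaml

    T_base_cam = np.eye(4)    # placeholder: camera pose in the robot base frame
    T_cam_obj = np.eye(4)     # placeholder: object pose in the camera frame
    T_base_ee = np.eye(4)     # placeholder: current end-effector pose in the base frame

    # Object pose in the base frame, then the end-effector pose expressed in the
    # object frame -- storing it this way lets the grasp move with the object.
    T_base_obj = T_base_cam @ T_cam_obj
    T_obj_grasp = np.linalg.inv(T_base_obj) @ T_base_ee

    with open("recorded_grasps.yaml", "a") as f:   # placeholder output path
        yaml.safe_dump([T_obj_grasp.tolist()], f)
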
  • Pre-requisites:
    • This is an object-centric pick-and-place script. The object can either be defined using an arucoboard or be detected by MegaPose; the choice is passed to the test_config dictionary via the "use_aruco_board_pose_estimation" key.
    • This script performs pick-and-place operations on objects for which desired grasps have already been recorded using test_grasp_recorder_megapose.py or test_grasp_recorder_aruco.py, and it is written for the default Franka arm and Franka gripper.
    • This script assumes the robot and camera are already extrinsically calibrated; it works for both the camera-in-hand and camera-in-environment cases.
    • This script also assumes only one RealSense camera is connected to the workstation computer.
    • Frankapy's server should already have been started and the status LED should be blue.
    • Edit lines 12-20 of test_pick_n_place.py to match your test case.
  • Operation:
    • The object to be grasped is first localized via pose estimation with AR markers (this can be replaced by any pose estimator for the object).
    • A grasp is selected from the grasps already recorded using test_grasp_recorder.py.
    • This grasp is used to pick the object, which is then placed at a position offset in the +X direction of the robot base frame (see the sketch after this list).
  • Expected output:
    • The end-effector first moves to the pregrasp pose and then to the grasp pose.
    • Picks up the object.
    • Then places the object at a position offset in the +X direction of the robot base frame.
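A minimal sketch of the pick-side math, assuming a recorded grasp T_obj_grasp (from the grasp recorder), the extrinsics T_base_cam, and a fresh estimate T_cam_obj of the object's pose; the approach axis and offsets are placeholders, not the repo's values.

    # Minimal sketch: compose the recorded grasp into the robot base frame and
    # derive pregrasp and place poses.
    import numpy as np

    T_base_cam = np.eye(4)                 # placeholder extrinsics
    T_cam_obj = np.eye(4)                  # placeholder object pose estimate
    T_obj_grasp = np.eye(4)                # placeholder recorded grasp (object frame)

    # Grasp pose in the robot base frame.
    T_base_grasp = T_base_cam @ T_cam_obj @ T_obj_grasp

    # Pregrasp: back off along the gripper's approach axis (assumed here to be
    # the gripper z-axis) by 10 cm before moving in for the grasp.
    T_grasp_pregrasp = np.eye(4)
    T_grasp_pregrasp[2, 3] = -0.10
    T_base_pregrasp = T_base_grasp @ T_grasp_pregrasp

    # Place: offset the pick pose along +X of the base frame.
    T_base_place = T_base_grasp.copy()
    T_base_place[0, 3] += 0.20             # placeholder 20 cm offset
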
  • Operation:

    • This script tests the Camera class implemented for the RealSense camera using the pyrealsense2 SDK.
    • The script reads the depth and RGB images and visualizes the data (see the sketch after this list).
  • Expected output:

    • The depth and RGB images read by the camera object are displayed in a window.
    • The depth and RGB images are also used to create a colored point cloud, which is displayed in a window.
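A minimal sketch of grabbing aligned RGB-D frames with pyrealsense2 and turning them into a colored Open3D point cloud; the stream settings are placeholders and the repo's Camera class may differ.

    # Minimal sketch: RealSense capture, display, and point-cloud construction.
    import numpy as np
    import open3d as o3d
    import pyrealsense2 as rs
    import cv2

    pipeline = rs.pipeline()
    config = rs.config()
    config.enable_stream(rs.stream.depth, 640, 480, rs.format.z16, 30)
    config.enable_stream(rs.stream.color, 640, 480, rs.format.bgr8, 30)
    pipeline.start(config)

    # Align depth to the color stream so pixels correspond.
    frames = rs.align(rs.stream.color).process(pipeline.wait_for_frames())
    depth = np.asanyarray(frames.get_depth_frame().get_data())
    color = np.asanyarray(frames.get_color_frame().get_data())
    intr = frames.get_color_frame().profile.as_video_stream_profile().intrinsics
    pipeline.stop()

    cv2.imshow("color", color)
    cv2.imshow("depth", cv2.convertScaleAbs(depth, alpha=0.03))
    cv2.waitKey(1)

    # Build the colored point cloud from the aligned pair.
    rgbd = o3d.geometry.RGBDImage.create_from_color_and_depth(
        o3d.geometry.Image(cv2.cvtColor(color, cv2.COLOR_BGR2RGB)),
        o3d.geometry.Image(depth),
        depth_scale=1000.0, convert_rgb_to_intensity=False)
    pinhole = o3d.camera.PinholeCameraIntrinsic(
        intr.width, intr.height, intr.fx, intr.fy, intr.ppx, intr.ppy)
    cloud = o3d.geometry.PointCloud.create_from_rgbd_image(rgbd, pinhole)
    o3d.visualization.draw_geometries([cloud])
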
  • Operation:
    • This tests arucoboard detection on images, with both the images and the camera intrinsics subscribed from a ROS node publisher (see the sketch after this list).
    • Check out the description of the arucoboard used in this test.
    • The topic names in this script are set for the RealSense camera ROS nodes, so run the command below in a new terminal before running this script.
    roslaunch realsense2_camera rs_camera.launch
    
  • Expected output:
    • Display of an image with the detected markers and the estimated arucoboard frame drawn.
    • The mean and variance of the reprojection errors, computed over each detected corner point, are printed.
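A minimal sketch of subscribing to the image and intrinsics with rospy and cv_bridge; the topic names below are the realsense2_camera defaults and may need to be adapted to your launch file, and the detection itself would run as in the earlier aruco sketches.

    # Minimal sketch: ROS subscribers for the color image and camera intrinsics.
    import numpy as np
    import rospy
    from cv_bridge import CvBridge
    from sensor_msgs.msg import CameraInfo, Image

    bridge = CvBridge()

    def on_image(msg):
        # Convert the ROS image into an OpenCV BGR array; marker detection and
        # pose estimation would run on this frame.
        frame = bridge.imgmsg_to_cv2(msg, desired_encoding="bgr8")
        rospy.loginfo_once("received image of shape %s", str(frame.shape))

    def on_camera_info(msg):
        camera_matrix = np.array(msg.K).reshape(3, 3)
        dist_coeffs = np.array(msg.D)
        rospy.loginfo_once("received intrinsics:\n%s", str(camera_matrix))

    rospy.init_node("aruco_board_ros_test")
    rospy.Subscriber("/camera/color/image_raw", Image, on_image)
    rospy.Subscriber("/camera/color/camera_info", CameraInfo, on_camera_info)
    rospy.spin()
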
  • Operation:
    • This tests arucotag detection on images, with both the images and the camera intrinsics subscribed from a ROS node publisher.
    • Check out the description of the arucotag used in this test.
    • The topic names in this script are set for the RealSense camera ROS nodes, so run the command below in a new terminal before running this script.
    roslaunch realsense2_camera rs_camera.launch
    
  • Expected output:
    • Display of an image with the detected marker and the estimated arucotag frame drawn.
    • The mean and variance of the reprojection errors, computed over each detected corner point, are printed.
  • Operation:
    • This tests the pose estimation of objects that have an arucoboard or arucotag attached to them.
    • The pose of the object is estimated by first estimating the pose of the attached arucoboard/arucotag and then applying the rigid-body transform between the object's frame and the arucoboard/tag frame (see the sketch after this list).
  • Expected output:
    • Display of an image with the detected markers and the estimated object frame drawn.
    • The mean and variance of the reprojection errors, computed over each detected corner point of the arucoboard/arucotag, are printed.
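A minimal sketch of the transform composition described above: the marker pose in the camera frame composed with a fixed, pre-measured marker-to-object transform gives the object pose. The matrices here are placeholders.

    # Minimal sketch: object pose from a marker pose and a fixed offset.
    import numpy as np

    T_cam_marker = np.eye(4)   # placeholder: arucoboard/arucotag pose from the detector
    T_marker_obj = np.eye(4)   # placeholder: fixed rigid-body transform, measured once

    # Object pose in the camera frame.
    T_cam_obj = T_cam_marker @ T_marker_obj
    print(T_cam_obj)
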