Self-adaptive whole-body manipulation for correcting failed robot grasps

About 30% of the time, the mobile manipulator robot misses its target when trying to grasp an object. One potential cause is that the object is simply out of reach of the robot's arm; another is inaccuracy in the arm's calibration. To solve this problem, the robot must first recognize that the grasp attempt failed, and then adjust its position to perform a successful grasp. This entire process should run autonomously. I use RASCAPP, my lab's mobile manipulator robot, to implement my algorithm.

A high-level sketch of the algorithm is as follows:

  1. Robot autonomously recognizes a failed grasp.
  2. Robot quantifies the failure, i.e. determines how far the end effector was from the target pose after completing the failed grasp trajectory (the delta failure).
  3. Using the delta failure, the robot repositions its body to minimize the chance of another failed grasp.
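The loop above can be sketched in Python. Everything here is an illustrative assumption rather than RASCAPP's actual API: the robot is simulated with a constant arm calibration bias, and `execute_grasp`, `grasp_succeeded`, the tolerance values, and the base-correction scheme are all hypothetical stand-ins.

```python
import numpy as np

# Simulated calibration error of the arm (meters) and success threshold.
# Both values are illustrative, not measured on the real robot.
ARM_BIAS = np.array([0.05, -0.02, 0.0])
GRASP_TOLERANCE = 0.005  # "within a few millimeters"

base_offset = np.zeros(3)  # accumulated whole-body correction


def execute_grasp(target_pose):
    """Simulate the arm trajectory: the end effector lands at the target
    plus the calibration bias, minus any base correction applied so far."""
    return target_pose + ARM_BIAS - base_offset


def grasp_succeeded(target_pose, end_effector_pose):
    return np.linalg.norm(target_pose - end_effector_pose) < GRASP_TOLERANCE


def adaptive_grasp(target_pose, max_attempts=3):
    global base_offset
    for attempt in range(max_attempts):
        ee_pose = execute_grasp(target_pose)
        if grasp_succeeded(target_pose, ee_pose):          # step 1
            return True, attempt + 1
        delta_failure = ee_pose - target_pose              # step 2
        base_offset = base_offset + delta_failure          # step 3
    return False, max_attempts


ok, attempts = adaptive_grasp(np.array([0.6, 0.1, 0.8]))
# With a constant bias, the first attempt fails, the base correction
# cancels the bias exactly, and the second attempt succeeds.
```

In this toy model one correction suffices because the bias is constant; on the real robot the delta failure would be noisy, so the correction would likely need damping or averaging over attempts.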


Recognizing failed grasps

A failed grasp occurs when the robot fails to enclose an object in its grippers after executing the motion plan. How does a human recognize a failed grasp? If my objective is to pick up a bottle from a table, how would I know whether I succeeded after executing my arm motion trajectory? One way would be to look at the bottle's initial position. If the bottle is still in its initial position, my grasp attempt failed. If it isn't, I could either have the bottle in my hand, or I may have knocked it to a different position instead of grasping it. Looking at the initial position alone is therefore not conclusive. Another way to tell whether the grasp succeeded is to feel for the bottle in my hand. If my hand was empty before the grasp episode and I sense that it is still empty afterwards, I can trivially conclude that the grasp attempt failed. If, on the other hand, I feel a bottle in my hand, there are two possibilities: either I successfully picked up the bottle, or I picked up some other object. If both objects have a similar texture and feel, touch alone cannot determine whether the grasp was successful.

A reasonable technique is to merge both approaches. It works as follows:

  1. Robot attempts the grasp.
  2. If the grippers close all the way after the close-gripper command, the gripper holds no object; the grasp failed.
  3. If the grippers do not close all the way, the gripper holds an object, but not necessarily the target object. The gripper, with the object held in it, is brought closer to the robot's face (its 3D cameras).
  4. If the perceived position of the target object is within a few millimeters of the gripper's position, the target object is in the robot's grippers and the grasp attempt was successful.
  5. If not, the grasp was unsuccessful.
  6. If unsuccessful and the target object is still within the camera's field of view, the grasp is re-attempted. If the object is not in the field of view, it must have been pushed away during the grasp attempt. [Work on getting the robot to search for occluded objects will be pursued in the future.]
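The merged touch-plus-vision check above can be condensed into a single classifier. This is a sketch under my own assumptions: the function name, the gripper-gap threshold, and the position tolerance are hypothetical, and the perceived positions are taken as plain 3-D points in the camera frame.

```python
import numpy as np

GRIPPER_CLOSED_GAP = 0.003   # gap (m) below which the gripper counts as fully closed
POSITION_TOLERANCE = 0.005   # "within a few millimeters"


def classify_grasp(gripper_gap, target_in_camera, gripper_in_camera,
                   target_visible):
    """Return 'success', 'retry', or 'lost' using the merged check:
    gripper closure (touch) first, then target-vs-gripper position (vision)."""
    # Step 2: fully closed grippers mean nothing was enclosed.
    if gripper_gap < GRIPPER_CLOSED_GAP:
        return 'retry' if target_visible else 'lost'
    # Steps 3-4: something is held; compare the perceived target position
    # with the gripper position in the camera frame.
    if target_in_camera is not None and \
       np.linalg.norm(np.asarray(target_in_camera) -
                      np.asarray(gripper_in_camera)) < POSITION_TOLERANCE:
        return 'success'
    # Steps 5-6: wrong object held, or the target was pushed out of view.
    return 'retry' if target_visible else 'lost'
```

Usage: `classify_grasp(0.001, None, [0.1, 0.0, 0.0], True)` returns `'retry'`, since the fully closed gripper means the grasp failed but the target is still in view.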

Quantifying the failure


[TO BE CONTINUED]
