
Gripper Status decoded

[Image]
Gripper used: ROBOTIQ 2F-140

According to the Instruction Manual of the gripper (pages 38-70), I created a Gripper Status class (DTO) with all the properties that can be read from the RS485 communication between the computer and the gripper driver. I then read every byte (and each of its bits) to translate the meaning of every status message; a sketch of this decoding is shown at the end of this post. The translation was built mainly upon pages 44, 45, and 46 of the Gripper Instruction Manual. The image above shows almost every field of this Gripper Status message; only the Object Detected boolean property is not printed in it.

The gripper has 3 main statuses:
- activated
- reset
- automatic release

Automatic Release is a specific command that should be sent to the gripper whenever the robot detects some kind of fault/error. This command slowly opens (or closes; the user can choose) the fingers until the position limit is reached. Every time Automatic Release is commanded, the gripper needs to be re-activated before it works properly again.
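As a rough illustration, here is a minimal Python sketch of such a status DTO. The field names and bit offsets below follow the register layout commonly documented for Robotiq 2F grippers (gACT, gGTO, gSTA, gOBJ, gFLT, gPO, gCU); treat the exact offsets as assumptions to be checked against the manual pages cited above.

```python
from dataclasses import dataclass

@dataclass
class GripperStatus:
    """Decoded Robotiq 2F-140 status (bit layout assumed from the manual)."""
    activated: bool        # gACT: gripper activation bit
    go_to_requested: bool  # gGTO: a go-to-position request is active
    status: int            # gSTA: 0 = reset, 1 = activating, 3 = activated
    object_detected: int   # gOBJ: 0 = moving, 1/2 = object detected, 3 = at position
    fault: int             # gFLT: fault code (0 = no fault)
    position: int          # gPO: actual finger position (0 = open, 255 = closed)
    current: int           # gCU: motor current, the raw 'force' reading

    @classmethod
    def from_bytes(cls, raw: bytes) -> "GripperStatus":
        # raw is assumed to be the 6-byte status block read over RS485
        b0 = raw[0]
        return cls(
            activated=bool(b0 & 0x01),
            go_to_requested=bool((b0 >> 3) & 0x01),
            status=(b0 >> 4) & 0x03,
            object_detected=(b0 >> 6) & 0x03,
            fault=raw[2] & 0x0F,
            position=raw[4],
            current=raw[5],
        )
```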

RobotIQ gripper force test experiment #1

[Image]
For a better understanding of how the force value works and what it actually represents, I performed a simple test: I requested the gripper to close completely, at the lowest possible speed, and at the same time I requested the instant status (position and force) from the gripper. This status request was made at 10 Hz (10 times per second). I did this test twice:
- without any object (the two gripper fingers touching each other);
- with a soft red ball that can be squeezed (to check for variance in the force values).

The result showed a high force value right before the fingers reached their final position, both when there was no object and when the robot grabbed the soft red ball. After this force peak, the gripper became stable, no longer moving its fingers and holding the same force value for the rest of the time. We can see the results in this small video:
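For reference, the 10 Hz polling can be done with a loop like the following Python sketch. The read_status() helper (returning an object with .position and .current fields, e.g. the GripperStatus class sketched in the previous post) is hypothetical; only the rate and the logged fields come from the experiment described above.

```python
import time

def poll_gripper(read_status, rate_hz=10.0, duration_s=10.0):
    """Poll the gripper status at a fixed rate, logging (time, position, force)."""
    period = 1.0 / rate_hz
    samples = []
    start = time.monotonic()
    while time.monotonic() - start < duration_s:
        status = read_status()  # hypothetical helper reading the status block
        samples.append((time.monotonic() - start, status.position, status.current))
        time.sleep(period)
    return samples
```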

RobotIQ Gripper Force values

[Images: gripper status readings for a closed gripper, for the gripper holding a wood block in three poses (#1, #2, #3), and for the gripper holding a tennis ball.]

My conclusion:
- This 'force' value has nothing to do with the shape, weight and/or pose of the object being held.
- Furthermore, this 'force' value does not respond to any perturbations while the object is being held (I tried it myself).

UR10e Force and Torque values

[Image]
After launching everything (arm & gripper) as explained in some previous posts, I discovered the following regarding the UR10e force/torque values.

```
rostopic echo /joint_states

header:
  seq: 482730
  stamp:
    secs: 1651760184
    nsecs: 917628129
  frame_id: ''
name:
  - elbow_joint
  - shoulder_lift_joint
  - shoulder_pan_joint
  - wrist_1_joint
  - wrist_2_joint
  - wrist_3_joint
position: [1.2160757223712366, -1.154092625980713, -0.11001712480653936, -1.7416936359801234, -1.702087704335348, -0.9789927641498011]
velocity: [-0.0, -0.0, 0.0, 0.0, 0.0, 0.0]
effort: [-4.225266456604004, -6.093687534332275, 0.31768253445625305, -0.5379326343536377, -0.15823310613632202, 0.06207980215549469]
```

We can see the following in the documentation of the sensor_msgs/JointState message:

```
# This is a message that holds data to describe the state of a set of torque controlled joints.
#
# The state of each joint (revolute or prismatic) is defined by:
#  * the position of the joint (rad or m),
#  * the velocity of the joint (rad/s or m/s) and
#  * the effort that is applied in the joint (Nm or N).
```

Since every UR10e joint is revolute, the effort field reports the torque applied at each joint, in Nm.
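As a quick way to read these values programmatically instead of echoing the topic, a minimal rospy sketch (the node name is mine; the topic and message type are the standard ones):

```python
#!/usr/bin/env python
import rospy
from sensor_msgs.msg import JointState

def on_joint_states(msg):
    # effort comes in the same order as msg.name; for the UR10e's
    # revolute joints these values are torques in Nm
    for name, effort in zip(msg.name, msg.effort):
        rospy.loginfo("%s: %.3f Nm", name, effort)

if __name__ == "__main__":
    rospy.init_node("joint_effort_logger")
    rospy.Subscriber("/joint_states", JointState, on_joint_states)
    rospy.spin()
```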

RGB-D tracking + UR10e following & picking/placing

[Image]
Following the first use case described in this last post, the result of the experiment can be seen in this video: There are still several improvements to be made to this job1 task:
- The job should wait for the response of each controller before sending the next request (see the sketch after this list).
- The robot xacro should be extended to include every object that composes the real scenario.
- The procedure should have an interactive way of telling the robot that it is time to pick up the object.
- The use case should ensure that the object pick was successful (or not).

These issues can be followed here.
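For the first item, a common ROS pattern is to use an action client and block on the result before issuing the next request. A minimal sketch, assuming the arm controller exposes a FollowJointTrajectory action server (the action name below is the one typically used by the UR ROS driver, but it should be checked against the actual setup):

```python
import rospy
import actionlib
from control_msgs.msg import FollowJointTrajectoryAction, FollowJointTrajectoryGoal

def send_and_wait(client, goal, timeout_s=30.0):
    """Send a goal and block until the controller reports a result."""
    client.send_goal(goal)
    if not client.wait_for_result(rospy.Duration(timeout_s)):
        rospy.logwarn("controller did not respond in time")
        return None
    return client.get_result()

rospy.init_node("job1")
arm = actionlib.SimpleActionClient(
    "scaled_pos_joint_traj_controller/follow_joint_trajectory",
    FollowJointTrajectoryAction)
arm.wait_for_server()
# an empty goal here, just to show the call pattern
result = send_and_wait(arm, FollowJointTrajectoryGoal())
```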

UR10e control architecture

[Image]
Now that I am able to control both the robot arm and the gripper, it is time to create an interface that can be used to control the two of them. This interface should be completely agnostic to any use case being developed. Since gripper control and arm control are two independent procedures that can run simultaneously, this interface should put the two controllers at the same level. Following this philosophy, any job that I develop in the future will be able to call either controller through the same method!

Here we can see the structure of the first complete job. In this case, an object that is continuously tracked through an RGB-D camera is followed by the robot arm and then picked up and stored in a box. As we can see, both controllers (arm and gripper) are at the same level, and the job1 ROS node communicates with both of them simultaneously. The gripper controller is described in this previous post; the arm controller was covered in the earlier UR10e posts.
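To make "same level, same method" concrete, here is a minimal sketch of the kind of common interface I mean. All class and method names are hypothetical illustrations, not the actual implementation:

```python
from abc import ABC, abstractmethod

class Controller(ABC):
    """Common interface: a job sends a request to any controller the same way."""

    @abstractmethod
    def send_request(self, request) -> bool:
        """Forward a request to the underlying driver; return success."""

class ArmController(Controller):
    def send_request(self, request) -> bool:
        # forward the target pose to the arm driver (details omitted)
        return True

class GripperController(Controller):
    def send_request(self, request) -> bool:
        # forward the open/close command to the gripper driver (details omitted)
        return True

def job1(arm: Controller, gripper: Controller, target_pose, grasp_command):
    # the job talks to both controllers through the very same method
    arm.send_request(target_pose)
    gripper.send_request(grasp_command)
```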

Real-time UR10e following a tracked object

[Image]
For a first trial, I developed this demonstration of the robot following, in real time, an object that is being tracked using one RGB-D camera (Intel RealSense D435). As seen in this previous post, ViSP is used to process the images acquired by the RealSense RGB-D camera and to continuously track the object. A TCP/IP socket connection is established (inside the same computer) between ViSP and ROS. This socket communication is responsible for bringing the geometric transformation between the camera and the object into the ROS environment. This previous post describes this connection in more detail.

Besides the transformation between the camera and the object, it is also required to know the transformation between the robot and the camera. This transformation is crucial for the robot to understand the position and orientation of the object with respect to the robot itself. To obtain it, I performed a manual calibration, as described in this previous post. Finally, composing these two transformations gives the pose of the object in the robot's own frame, which is the pose the arm is commanded to follow.
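The composition itself is a single product of 4x4 homogeneous transforms. A minimal numpy sketch (the variable names are mine, not from the original code):

```python
import numpy as np

def compose(robot_T_camera: np.ndarray, camera_T_object: np.ndarray) -> np.ndarray:
    """Chain homogeneous transforms: robot_T_object = robot_T_camera @ camera_T_object."""
    return robot_T_camera @ camera_T_object

# toy example: camera mounted 1 m in front of the robot base and an
# object seen 0.5 m along the camera's optical (z) axis, no rotations
robot_T_camera = np.eye(4)
robot_T_camera[:3, 3] = [1.0, 0.0, 0.0]
camera_T_object = np.eye(4)
camera_T_object[2, 3] = 0.5

robot_T_object = compose(robot_T_camera, camera_T_object)
print(robot_T_object[:3, 3])  # object position in the robot frame: [1.0, 0.0, 0.5]
```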