Object Tracking using ViSP model-based tracker

Following the study presented in one of my last posts, my goal is now to track the 3D pose of an object from a single color image, in a continuous and reliable way.

For that, I will use a standard green Lego piece tracked against a plain white background. The idea of this experiment is not to recreate a real-life situation, but to understand the possibilities that the ViSP model-based tracker can provide.

 

Starting from the tutorial-mb-generic-tracker-live example, I created a dataset of 44 images in which the tracker correctly detected the object's position and orientation.

While collecting these 44 images, it was sometimes necessary to indicate the object corners by hand, to ensure that every image in the dataset is trustworthy (sketched after the list below). Besides the images, the dataset also includes a .bin file, where the detection information for each image is stored.

dataset: 
  • 44 images (.jpg)
  • .bin file 
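For the manual corner indication I relied on the tutorial's click-based initialization, where the corners to click are listed in a .init file. A minimal sketch, with lego.init as a placeholder file name and I as the current image:

```cpp
// Ask the user to click the object corners listed in lego.init,
// then read back the resulting pose so it can be stored in the dataset.
tracker.initClick(I, "lego.init", true);

vpHomogeneousMatrix cMo;
tracker.getPose(cMo); // presumably the pose that goes into the .bin file
```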
After creating this dataset, it's time to build my own tracking process.
 
I chose to use only the ViSP Moving Edges tracker and to ignore the KLT tracker, since the object faces contain no relevant keypoints.
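In ViSP this amounts to enabling only the edge features of the generic tracker. A minimal sketch, where lego.xml and lego.cao are placeholder names for my configuration and CAD model files:

```cpp
#include <visp3/mbt/vpMbGenericTracker.h>

int main()
{
  // Edge features only: the KLT keypoint tracker is not enabled
  vpMbGenericTracker tracker;
  tracker.setTrackerType(vpMbGenericTracker::EDGE_TRACKER);

  tracker.loadConfigFile("lego.xml"); // placeholder: camera + moving-edges settings
  tracker.loadModel("lego.cao");      // placeholder: CAD model of the Lego piece
  tracker.setDisplayFeatures(true);
}
```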
 
With the Moving Edges tracker running in real time, and with the previously created dataset at hand, it was pretty simple to automatically re-initialize the tracking (based on the dataset) every time the previous loop lost the object.
 
In this case, "losing the object" can mean one of two things:
  • The object is being wrongly tracked
  • The object is not being recognized at all
In either of these cases, the tracker automatically re-initializes from the dataset, as sketched below.
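Here is a minimal sketch of such a loop, assuming two hypothetical helpers that are not part of ViSP: findClosestDatasetPose(), which looks up the pose stored in the dataset for the image that best matches the current frame, and isBadDetection(), the quality check described further below:

```cpp
#include <visp3/core/vpException.h>
#include <visp3/core/vpHomogeneousMatrix.h>
#include <visp3/core/vpImage.h>
#include <visp3/mbt/vpMbGenericTracker.h>

// Hypothetical helpers (not part of ViSP): look up the stored pose of the
// dataset image that best matches I, and the quality check detailed below.
vpHomogeneousMatrix findClosestDatasetPose(const vpImage<unsigned char> &I);
bool isBadDetection(const vpMbGenericTracker &tracker);

void trackingLoop(vpMbGenericTracker &tracker, vpImage<unsigned char> &I)
{
  vpHomogeneousMatrix cMo;
  while (true) {
    // ... acquire the next frame into I from the camera grabber ...
    try {
      tracker.track(I);
      tracker.getPose(cMo);
      if (isBadDetection(tracker)) {
        // A bad detection is handled like any other tracking failure
        throw vpException(vpException::fatalError, "bad detection");
      }
    }
    catch (const vpException &) {
      // Lost the object: re-initialize from the dataset
      tracker.initFromPose(I, findClosestDatasetPose(I));
    }
  }
}
```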
 
This was the result:


Moving Edges tracker configuration:
  • mask size = 5
  • mask number = 180
  • range = 8 
  • threshold = 10000
  • mu1 = 0.5
  • mu2 = 0.5
  • sample step = 4
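These values map directly onto ViSP's vpMe settings, configured the same way as in the tutorial (tracker being the one created earlier):

```cpp
#include <visp3/me/vpMe.h>

vpMe me;
me.setMaskSize(5);
me.setMaskNumber(180);
me.setRange(8);
me.setThreshold(10000);
me.setMu1(0.5);
me.setMu2(0.5);
me.setSampleStep(4);
tracker.setMovingEdge(me);
```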
Threshold values for identifying a bad detection:
  • projection error < 28
  • 120 < number of feature edges < 300
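In code, this check can look roughly like the sketch below, assuming projection-error computation was enabled on the tracker beforehand and a ViSP version that provides getNbFeaturesEdge():

```cpp
#include <visp3/mbt/vpMbGenericTracker.h>

// Assumes tracker.setProjectionErrorComputation(true) was called once at
// setup, so getProjectionError() is updated on every track() call.
bool isBadDetection(const vpMbGenericTracker &tracker)
{
  double projection_error = tracker.getProjectionError(); // in degrees
  unsigned int nb_edges = tracker.getNbFeaturesEdge();    // moving-edges count

  // Bad detection: projection error too high, or too few/too many edges
  return projection_error >= 28.0 || nb_edges <= 120 || nb_edges >= 300;
}
```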
As you could see, this experiment went better than the first one with the board, in the sense that the running tracker can automatically restart whenever a wrong object detection occurs. Furthermore, the tracking itself is, in general, of higher quality, mainly because of the controlled environment and the easy distinction between the object and the background.
 
Nevertheless, the result is still not good enough for the main purpose of this project. One major improvement could come from also using depth data to track the object pose.
Creating an RGB-D tracking process will probably be the very next step.
 