Synthetic Dataset Generation (real texture)

Training YOLOv7 on a custom object has proven very effective when a sufficiently large dataset is available. To obtain such a massive number of images from a single source (a homogeneous dataset), a synthetic dataset was generated, following these steps:

  1. Collect the real wood block's measurements and texture.
  2. Using Blender, create the object's CAD model and apply its real texture.
  3. Calibrate the camera's intrinsic parameters using a known pattern (I used a large chessboard).
  4. Collect several background (material) scenes from online sources.
  5. With the wood block CAD model, the camera intrinsic parameters, and the background materials, generate the dataset with BlenderProc, choosing the number of different scenes, target objects, and distractors.
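The camera model behind steps 3 and 5 is the standard pinhole model: calibration yields a 3×3 intrinsic matrix, which BlenderProc can take via its `set_intrinsics_from_K_matrix` camera setter so renders match the real camera's geometry. A minimal sketch of that projection (all numeric values here are illustrative placeholders, not the calibration results from this project):

```python
# Pinhole projection with calibrated intrinsics (illustrative values only).
# fx, fy: focal lengths in pixels; cx, cy: principal point in pixels.
fx, fy = 1400.0, 1400.0
cx, cy = 960.0, 540.0  # principal point of a hypothetical 1920x1080 camera

def project(X, Y, Z):
    """Project a 3D point in the camera frame (metres) to pixel coordinates."""
    u = fx * X / Z + cx
    v = fy * Y / Z + cy
    return u, v

# A hypothetical wood-block corner 0.5 m in front of the camera,
# 10 cm to the right and 5 cm above the optical axis:
u, v = project(0.10, -0.05, 0.50)
print(u, v)  # -> 1240.0 400.0
```

Because the synthetic renderer uses this same matrix, objects in the generated images appear at the same scale and position as they would through the real lens.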

This dataset is the closest to reality of all the synthetic datasets mentioned in this blog, since it uses the real camera intrinsic parameters and applies the real texture to the object's CAD model. The result can be summarized as follows:

210 000 images:

  • Synthetic images
  • Wood block CAD model with real texture
  • 4200 different scenes
  • 50 images per scene
  • 1 image per scene without a wood block (no target)
  • Target objects per image: 1 (min.) to 5 (max.)
  • Distractors per image: 2 (min.) to 30 (max.)
  • Images generated with the real camera intrinsic parameters
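The counts above are consistent with each other; a quick sketch collecting the generation parameters as reported and checking the arithmetic:

```python
# Dataset generation parameters, as reported above.
params = {
    "scenes": 4200,
    "images_per_scene": 50,
    "targets_per_image": (1, 5),       # min, max wood blocks per image
    "distractors_per_image": (2, 30),  # min, max distractors per image
}

# 4200 scenes x 50 images per scene = 210 000 images in total.
total_images = params["scenes"] * params["images_per_scene"]
print(total_images)  # -> 210000
```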


Here are some examples of these synthetic images:

