Monday, January 31, 2011

cv::DescriptorMatcher

There are currently two implementations of DescriptorMatcher: 'brute-force' and 'flann'.

Earlier posts on FLANN here and here.

As its name suggests, DescriptorMatcher focuses on matching the descriptors provided by the caller. The train() member is only relevant for the FLANN matcher; that is when the index trees are built.
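
A minimal sketch of the add()/train() semantics, assuming the OpenCV 2.x C++ API (the helper name is hypothetical; the descriptor Mats are whatever the caller has computed):

#include <opencv2/features2d/features2d.hpp>
#include <vector>

// Hypothetical helper: build the search structure ahead of matching.
void prepareMatcher(const std::vector<cv::Mat>& trainDescriptors)
{
    // "FlannBased": train() builds the FLANN index trees over all
    // descriptors added so far. "BruteForce": train() is effectively
    // a no-op, since brute-force search needs no index.
    cv::Ptr<cv::DescriptorMatcher> matcher =
        cv::DescriptorMatcher::create("FlannBased");
    matcher->add(trainDescriptors);
    matcher->train();   // index construction happens here for FLANN
}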

One-to-One image registration Sample Program (matcher_simple):

SurfDescriptorExtractor is used to build the SURF descriptors from the input key-points. Interestingly, the default parameters defining the scaling properties differ between the SURF descriptor-extractor and the SURF feature-detector. I had always thought that octaves were used for building the filter pyramids (approximating LoG) during key-point detection. Do they actually get used in building descriptors?!

Using the provided image pair: box.png and box_in_scene.png
  • Turns out that filtering the correspondences is necessary: by default it generates tons of matches. One can focus on strong matches by first doing a 2-NN search (K=2), then eliminating those matches whose two nearest neighbors are too similar in distance (the ratio test). This idea is implemented in the find_obj C sample.
  • Use the overloaded drawMatches() function that supports K matches for each feature point via vector<vector<DMatch>>; DescriptorMatcher::knnMatch() provides exactly that result type. Draw lines only between 'strong correspondences' by passing a mask-of-matches to drawMatches(); see the sketch after this list.
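
A minimal sketch of the 2-NN ratio filtering plus the mask-of-matches, assuming OpenCV 2.4.x (where SURF lives in the nonfree module); the 0.6 ratio threshold and the Hessian threshold of 400 are illustrative, not taken from the sample:

#include <opencv2/core/core.hpp>
#include <opencv2/highgui/highgui.hpp>
#include <opencv2/features2d/features2d.hpp>
#include <opencv2/nonfree/features2d.hpp>  // SURF (nonfree in 2.4.x)
#include <vector>

int main()
{
    cv::Mat img1 = cv::imread("box.png", 0);
    cv::Mat img2 = cv::imread("box_in_scene.png", 0);

    // Detect key-points and compute SURF descriptors for both images.
    cv::SurfFeatureDetector detector(400);   // Hessian threshold
    std::vector<cv::KeyPoint> kp1, kp2;
    detector.detect(img1, kp1);
    detector.detect(img2, kp2);

    cv::SurfDescriptorExtractor extractor;
    cv::Mat desc1, desc2;
    extractor.compute(img1, kp1, desc1);
    extractor.compute(img2, kp2, desc2);

    // 2-NN search: each query descriptor gets its two best train matches.
    cv::Ptr<cv::DescriptorMatcher> matcher =
        cv::DescriptorMatcher::create("BruteForce");
    std::vector<std::vector<cv::DMatch> > knn;
    matcher->knnMatch(desc1, desc2, knn, 2);

    // Keep a match only when the best neighbor is clearly closer than
    // the second best (ratio test); the mask tells drawMatches which
    // correspondences get a line.
    std::vector<std::vector<char> > mask(knn.size());
    for (size_t i = 0; i < knn.size(); ++i)
    {
        mask[i].resize(knn[i].size(), 0);
        if (knn[i].size() >= 2 &&
            knn[i][0].distance < 0.6f * knn[i][1].distance)
            mask[i][0] = 1;   // strong correspondence: draw it
    }

    cv::Mat vis;
    cv::drawMatches(img1, kp1, img2, kp2, knn, vis,
                    cv::Scalar::all(-1), cv::Scalar::all(-1), mask);
    cv::imshow("strong matches", vis);
    cv::waitKey();
    return 0;
}
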
One-to-Many image registration Sample Program (matching_to_many_images):
  • The matching pipeline: FeatureDetector, DescriptorExtractor, DescriptorMatcher.
  • All three abstract classes supply a static member ::create( name ) for instantiating a specific implementation class.
  • Simply pass in a vector of Mat to supply all the (training) images to match against; see the sketch below.
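
Putting the three ::create() factories together, a minimal one-to-many sketch, assuming OpenCV 2.4.x (initModule_nonfree() registers SURF so the string factories can find it; file names follow the sample):

#include <opencv2/core/core.hpp>
#include <opencv2/highgui/highgui.hpp>
#include <opencv2/features2d/features2d.hpp>
#include <opencv2/nonfree/nonfree.hpp>   // initModule_nonfree()
#include <vector>

int main()
{
    cv::initModule_nonfree();  // register SURF with the Algorithm factory

    // The three-stage pipeline, each stage picked by name.
    cv::Ptr<cv::FeatureDetector> detector =
        cv::FeatureDetector::create("SURF");
    cv::Ptr<cv::DescriptorExtractor> extractor =
        cv::DescriptorExtractor::create("SURF");
    cv::Ptr<cv::DescriptorMatcher> matcher =
        cv::DescriptorMatcher::create("FlannBased");

    cv::Mat queryImg = cv::imread("query.png", 0);
    std::vector<cv::Mat> trainImgs;
    trainImgs.push_back(cv::imread("1.png", 0));
    trainImgs.push_back(cv::imread("2.png", 0));
    trainImgs.push_back(cv::imread("3.png", 0));

    // Query side: key-points, then descriptors.
    std::vector<cv::KeyPoint> queryKp;
    cv::Mat queryDesc;
    detector->detect(queryImg, queryKp);
    extractor->compute(queryImg, queryKp, queryDesc);

    // Train side: the vector<Mat> overloads handle the whole set at once.
    std::vector<std::vector<cv::KeyPoint> > trainKp;
    std::vector<cv::Mat> trainDesc;
    detector->detect(trainImgs, trainKp);
    extractor->compute(trainImgs, trainKp, trainDesc);

    matcher->add(trainDesc);
    matcher->train();          // builds the FLANN index trees

    // DMatch::imgIdx identifies which training image each match hit.
    std::vector<cv::DMatch> matches;
    matcher->match(queryDesc, matches);
    return 0;
}
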
Results/Observations (using the provided train set (1.png, 2.png, 3.png) and query image (query.png))
Some points along the white table edges are extracted. They often get matched (false positives).

Results/Observations (using the set of Ground Truth images from Dundee University)
  • There are 4 query images and 2 sets of training images of 10 each.
  • There is a provided ground-truth file for verification.
  • It's here: http://www.computing.dundee.ac.uk/staff/jessehoey/teaching/vision/project1.html
  • With 10126 train-descriptors (over 10 training images) and 1148 matches, FLANN takes about 1 second: 0.6 s to build the index trees and 0.4 s to match. BruteForce-L1 takes 44 seconds. The matching time taken by FLANN is so short that the upstream processes now become the bottleneck.
  • Overall matching quality for 'book1' is not good; plenty of false matches.
  • Ball matching is better; it has more distinctive lines.
  • The mobile phone keypad gets matched to a lot of the label characters on the juice-box.
  • The lack of distinctive features on 'kit' makes it very difficult to match. Switching to the MSER detector does not help (only tried with default parameters).

1 comment:

  1. I am trying to do the same thing, i.e. match an image against a set of 100 images. For this I used SURF and ORB (in separate programs, one by one) to fetch key-points and descriptors. For matching I used brute force (NORM_L1 and NORM_HAMMING respectively, with knnMatch k=2). I made a loop to go through the whole image set one by one and then found the best match. It is taking approximately 10 seconds.

    First I fetched the descriptors and stored them in a .yml file; later on I read the file back using FileStorage and then loop over the number of images to match.

    Can you suggest what I am doing wrong, and what I should do to improve the response time of the matching?
