Template Matching Using Computer Vision

Template Matching
Image processing is a growing field with a wide variety of applications. Much of what we see around us is being automated, and this trend will only accelerate in the coming years. Such large-scale automation requires the manufacture and deployment of a large number of sensors and other monitoring equipment. In automation and robotics, one of the most important pieces of sensory equipment is the visual sensor.

These sensors require high-level AI-based systems to replicate human vision, so the science behind them is set to grow rapidly in the coming years. Engineering students should therefore tap into this potential by investing time and effort in understanding how image processing works. In that light, here is a computer vision project that uses template matching to perform some basic image processing operations.


Project Description

Image processing uses various techniques to acquire, segment, read, learn from and analyze images. Template matching is one such technique: it locates the sections of an image that match a smaller template image. It serves several purposes in computer vision, including quality control during manufacturing, navigation of autonomous systems and edge-based detection in images. In this OpenCV project, we will look at the techniques and algorithms used for template matching in OpenCV, how to implement them and how to improve them.

Concepts Used

  • Algorithm Fundamentals
  • OpenCV Programming and Implementation
  • Python Programming Fundamentals
  • Basics of Image Processing
  • Edge Detection Algorithms
  • Non-rigid Transformation Fundamentals
  • Occlusion Handling
Software and Hardware Requirements

  1. A suitable OS, such as Windows/macOS/Linux
  2. Python and OpenCV pre-installed
  3. Sufficient storage space
  4. Sufficient RAM
  5. A dual-core (or better) processor
  6. Pre-installed libraries such as NumPy, imutils and cv2 (OpenCV); argparse and glob ship with Python's standard library
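A quick way to confirm the environment is ready is to try the imports directly. This is only a minimal check, assuming the packages above were installed via pip (cv2 is provided by the opencv-python package; argparse and glob come with Python itself):

```python
# Minimal environment check: all of these imports should succeed
# if the requirements listed above are met.
import argparse
import glob

import cv2
import imutils
import numpy as np

print("OpenCV version:", cv2.__version__)
print("NumPy version:", np.__version__)
```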
Project Implementation

  • The main challenges in building a template matching algorithm are scale changes, occlusions, rigid versus non-rigid transformations, clutter and changes in the environment.
  • A feature-based template matching technique uses features such as shapes, colors and textures to identify and match the template against the input image.
  • This process requires extracting the above-mentioned features and typically relies on neural network and deep learning classifiers.
  • Such classifiers pass the input image through a series of layers that collect information about the image. This information is then extracted from the network and serves as the parameters for template matching.
  • While this approach is more robust, since it can match all kinds of non-rigid and transformed templates, it requires more advanced programming.
  • A template-based technique may be applied to input images that do not have many distinctive features. It can also be implemented successfully when the template makes up a large part of the input image.
  • This method requires the sampling of thousands of points.
  • To make processing easier, you can downsize the image and reduce its resolution so that there are fewer points to process.
  • OpenCV provides a built-in function, cv2.matchTemplate, which performs template matching directly (see the baseline sketch after this list). On its own, however, it is not very robust and gives unsatisfactory results when the template and the image are at different scales.
  • While this problem is traditionally solved by detecting keypoints, extracting local descriptors at those points and matching the keypoints, there is an easier, more convenient approach.
  • Simply keep scaling the input image down on each iteration of a loop, apply the template matching algorithm to each scaled image and record how large the correlation coefficient is each time.
  • Once you have looped over all the scales, take the region with the largest correlation coefficient as your “matched” region.
  • To do this, first import the following packages: NumPy, argparse, imutils, glob and cv2.
  • Next, use the argparse module to parse the command-line arguments using three switches: the path to the template to be matched, the path to the input image and, finally, a flag that lets us visualise the search as the image is scaled down.
  • Then load the template image, convert it to grayscale and run edge detection on it. Matching on edge maps rather than raw pixel intensities at this stage boosts accuracy.
  • Now load the input image, convert it to grayscale and use a bookkeeping variable to keep track of the best-matched region as the image is scaled down.
  • Begin looping over the scales generated with np.linspace, scaling the image all the way down to 20% of its original size.
  • While scaling, make sure you keep track of the ratio between the original and scaled sizes so that the matched region can be mapped back onto the original image.
  • Now match every scaled image against the template and compare the correlation values produced at each iteration.
  • The scale and location with the highest correlation value give the best-matched region, as sketched in the code after this list.
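As a baseline, here is a minimal sketch of the built-in cv2.matchTemplate call mentioned above, which works when the object appears in the image at roughly the template's scale. The file names and the TM_CCOEFF_NORMED matching method are illustrative choices, not prescribed by the project:

```python
# Baseline single-scale matching with cv2.matchTemplate.
import cv2

# "image.jpg" and "template.jpg" are placeholder file names
image = cv2.imread("image.jpg", cv2.IMREAD_GRAYSCALE)
template = cv2.imread("template.jpg", cv2.IMREAD_GRAYSCALE)
(tH, tW) = template.shape[:2]

# correlation map: higher values mean a better match for TM_CCOEFF_NORMED
result = cv2.matchTemplate(image, template, cv2.TM_CCOEFF_NORMED)
(_, maxVal, _, maxLoc) = cv2.minMaxLoc(result)

# draw the best-matching region and report its correlation score
cv2.rectangle(image, maxLoc, (maxLoc[0] + tW, maxLoc[1] + tH), 255, 2)
print("best correlation:", maxVal)
cv2.imshow("Match", image)
cv2.waitKey(0)
```

If the object in the input image is noticeably larger or smaller than the template, this single call will often miss it, which is exactly the limitation the multi-scale loop addresses.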
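The multi-scale procedure described in the steps above can be sketched as follows. The switch names, Canny thresholds, the TM_CCOEFF matching method and the use of glob to loop over a directory of .jpg images are assumptions made for illustration; the 20% lower bound and the ratio bookkeeping follow the walkthrough:

```python
# Multi-scale template matching: resize the input image over a range of
# scales, run edge-based matching at each scale and keep the location
# with the highest correlation coefficient.
import argparse
import glob

import cv2
import imutils
import numpy as np

# parse the three switches described in the steps above
ap = argparse.ArgumentParser()
ap.add_argument("-t", "--template", required=True, help="path to the template image")
ap.add_argument("-i", "--images", required=True, help="path to a directory of input images")
ap.add_argument("-v", "--visualize", action="store_true", help="show each step of the search")
args = vars(ap.parse_args())

# load the template, convert it to grayscale and detect its edges
template = cv2.imread(args["template"])
template = cv2.cvtColor(template, cv2.COLOR_BGR2GRAY)
template = cv2.Canny(template, 50, 200)
(tH, tW) = template.shape[:2]

# loop over the input images
for imagePath in glob.glob(args["images"] + "/*.jpg"):
    image = cv2.imread(imagePath)
    gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
    found = None  # bookkeeping: (best correlation, best location, ratio)

    # scale the grayscale image from 100% down to 20% of its width
    for scale in np.linspace(0.2, 1.0, 20)[::-1]:
        resized = imutils.resize(gray, width=int(gray.shape[1] * scale))
        r = gray.shape[1] / float(resized.shape[1])  # ratio for mapping back

        # stop once the resized image is smaller than the template
        if resized.shape[0] < tH or resized.shape[1] < tW:
            break

        # match the edge map of the resized image against the template's edges
        edged = cv2.Canny(resized, 50, 200)
        result = cv2.matchTemplate(edged, template, cv2.TM_CCOEFF)
        (_, maxVal, _, maxLoc) = cv2.minMaxLoc(result)

        # optionally visualize the best match found at this scale
        if args["visualize"]:
            clone = np.dstack([edged, edged, edged])
            cv2.rectangle(clone, maxLoc, (maxLoc[0] + tW, maxLoc[1] + tH), (0, 0, 255), 2)
            cv2.imshow("Visualize", clone)
            cv2.waitKey(0)

        # keep this location if its correlation is the best seen so far
        if found is None or maxVal > found[0]:
            found = (maxVal, maxLoc, r)

    # map the best match back to the coordinates of the original image
    (_, maxLoc, r) = found
    (startX, startY) = (int(maxLoc[0] * r), int(maxLoc[1] * r))
    (endX, endY) = (int((maxLoc[0] + tW) * r), int((maxLoc[1] + tH) * r))
    cv2.rectangle(image, (startX, startY), (endX, endY), (0, 0, 255), 2)
    cv2.imshow("Image", image)
    cv2.waitKey(0)
```

Running it might look like `python match.py --template template.jpg --images images --visualize`, where the script name and paths are placeholders. Comparing the correlation value recorded at each scale is what selects the final bounding box.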

