OpenCV Pattern Matching (Template Matching)
Problem
Implement template matching techniques step by step to automatically locate specific component positions in PCB board images in a factory automation environment.
Required Tools
Programming language for writing image processing scripts
Computer vision library providing template matching, image transformation, and visualization features
Multidimensional array computation library for efficient image pixel data processing
Solution Steps
Install OpenCV and Dependencies
Install OpenCV and NumPy via pip. opencv-python-headless is suitable for server environments without a GUI. For local development environments, install opencv-python to use GUI functions like imshow.
# Server environment (no GUI)
pip install opencv-python-headless numpy
# Local development environment (with GUI)
pip install opencv-python numpy
# Verify installation
python -c "import cv2; print(cv2.__version__)"

Prepare Source and Template Images
Matching accuracy heavily depends on the quality of the template image. The template is an image that has been precisely cropped to contain only the component region you want to find. Converting the source and template images to grayscale speeds up computation and makes matching more robust to lighting changes. Processing just 1 channel instead of 3 color channels also reduces memory usage by two-thirds.
import cv2
import numpy as np
# Load source image (entire PCB board photo)
source = cv2.imread('pcb_board.png')
source_gray = cv2.cvtColor(source, cv2.COLOR_BGR2GRAY)
# Load template image (the component to find)
template = cv2.imread('ic_chip_template.png')
template_gray = cv2.cvtColor(template, cv2.COLOR_BGR2GRAY)
# Check template dimensions
h, w = template_gray.shape[:2]
print(f"Template size: {w}x{h} pixels")

Perform Matching with matchTemplate
cv2.matchTemplate slides the template over the source image and computes a similarity score at each position. How to interpret the result depends on the matching method:
- TM_CCOEFF_NORMED: normalized correlation coefficient. Values closer to 1 indicate a match (most commonly used)
- TM_SQDIFF_NORMED: normalized squared difference. Values closer to 0 indicate a match (use min_loc, not max_loc, to pick the best position)
- TM_CCORR_NORMED: normalized cross-correlation. Sensitive to brightness changes, so less used in practice
TM_CCOEFF_NORMED is the most robust to lighting changes and brightness differences, making it the recommended choice for industrial inspection.
# Perform template matching (normalized correlation coefficient method)
result = cv2.matchTemplate(source_gray, template_gray, cv2.TM_CCOEFF_NORMED)
# Result matrix size: (source_h - template_h + 1, source_w - template_w + 1)
print(f"Result matrix shape: {result.shape}")
print(f"Max similarity: {result.max():.4f}")
print(f"Min similarity: {result.min():.4f}")
# Find the best match location
min_val, max_val, min_loc, max_loc = cv2.minMaxLoc(result)
# For TM_CCOEFF_NORMED, max_loc is the best match position
best_top_left = max_loc
best_bottom_right = (best_top_left[0] + w, best_top_left[1] + h)
print(f"Best match coordinates: {best_top_left} ~ {best_bottom_right}")
print(f"Match confidence: {max_val:.4f}")

Filter Match Locations by Threshold (Multiple Matches)
There may be multiple identical components in a single image. Set a threshold to extract all locations where the similarity meets or exceeds the criterion. The threshold is typically set between 0.8 and 0.95; higher values are more precise but may cause missed detections. Apply NMS (Non-Maximum Suppression) or minimum distance filtering to remove overlapping detections.
# Set threshold (0.0 ~ 1.0)
THRESHOLD = 0.85
# Extract all coordinates above the threshold
locations = np.where(result >= THRESHOLD)
matches = list(zip(*locations[::-1])) # Convert to (x, y) format
print(f"Matches above threshold {THRESHOLD}: {len(matches)}")
# Remove overlapping matches (minimum distance based)
def remove_duplicates(points, min_dist=20):
    """Merge nearby coordinates into one."""
    if not points:
        return []
    filtered = [points[0]]
    for pt in points[1:]:
        if all(abs(pt[0] - f[0]) > min_dist or abs(pt[1] - f[1]) > min_dist for f in filtered):
            filtered.append(pt)
    return filtered
unique_matches = remove_duplicates(matches, min_dist=w // 2)
print(f"Matches after deduplication: {len(unique_matches)}")

Extract Result Coordinates and Visualize
Draw rectangles on the source image at matched coordinates for visual verification. Outputting the center coordinates and confidence of each match enables subsequent quality inspection logic. Results can be saved to a file or exported as CSV for integration with MES (Manufacturing Execution System).
# Visualize results
output = source.copy()
results_data = []
for i, pt in enumerate(unique_matches):
    x, y = pt
    confidence = result[y, x]
    center_x = x + w // 2
    center_y = y + h // 2
    # Draw rectangle
    cv2.rectangle(output, (x, y), (x + w, y + h), (0, 255, 0), 2)
    # Display confidence text
    cv2.putText(output, f"{confidence:.2f}", (x, y - 5),
                cv2.FONT_HERSHEY_SIMPLEX, 0.5, (0, 255, 0), 1)
    results_data.append({
        'index': i + 1,
        'center_x': center_x,
        'center_y': center_y,
        'confidence': round(float(confidence), 4),
    })
    print(f"Component #{i+1}: center({center_x}, {center_y}), confidence={confidence:.4f}")
# Save result image
cv2.imwrite('matching_result.png', output)
# Save results as CSV
import csv
with open('matching_result.csv', 'w', newline='') as f:
    writer = csv.DictWriter(f, fieldnames=['index', 'center_x', 'center_y', 'confidence'])
    writer.writeheader()
    writer.writerows(results_data)
print(f"\nTotal {len(results_data)} components detected")

Core Code
Reusable template matching function - Complete example with threshold-based multi-matching and duplicate removal
import cv2
import numpy as np
def find_template(source_path, template_path, threshold=0.85, method=cv2.TM_CCOEFF_NORMED):
    """Find all locations in an image that match the template.

    Args:
        source_path: Path to the source image
        template_path: Path to the template image
        threshold: Matching threshold (0.0 ~ 1.0)
        method: Matching algorithm (default: TM_CCOEFF_NORMED)

    Returns:
        list of dict: List of match locations and confidence values
    """
    source = cv2.imread(source_path, cv2.IMREAD_GRAYSCALE)
    template = cv2.imread(template_path, cv2.IMREAD_GRAYSCALE)
    if source is None or template is None:
        raise FileNotFoundError("Unable to load image file.")
    h, w = template.shape[:2]
    result = cv2.matchTemplate(source, template, method)
    # Extract coordinates above the threshold
    locations = np.where(result >= threshold)
    matches = list(zip(*locations[::-1]))
    # Remove duplicates (simplified NMS: visit points in descending confidence order)
    filtered = []
    for pt in sorted(matches, key=lambda p: -result[p[1], p[0]]):
        if all(abs(pt[0] - f[0]) > w // 2 or abs(pt[1] - f[1]) > h // 2 for f in filtered):
            filtered.append(pt)
    return [{
        'x': int(pt[0]),
        'y': int(pt[1]),
        'center_x': int(pt[0] + w // 2),
        'center_y': int(pt[1] + h // 2),
        'width': w,
        'height': h,
        'confidence': float(result[pt[1], pt[0]]),
    } for pt in filtered]
# Usage example
if __name__ == '__main__':
    matches = find_template('pcb_board.png', 'ic_chip_template.png', threshold=0.85)
    for m in matches:
        print(f"Position: ({m['center_x']}, {m['center_y']}), Confidence: {m['confidence']:.4f}")

Common Mistakes
Ignoring scale (size) differences
matchTemplate only works at the same scale. If the template and source have different scales, use cv2.resize to match their sizes, or construct a pyramid at multiple scales to perform multi-scale matching.
Applying directly to rotated images
matchTemplate is not rotation-invariant. To find rotated components, either rotate the template at multiple angles and match each one, or use feature-point-based matching algorithms like SIFT/ORB.
Skipping grayscale conversion
Matching color images directly computes over 3 channels, making it roughly 3x slower and more sensitive to differences in lighting color temperature. Always convert with cv2.cvtColor(img, cv2.COLOR_BGR2GRAY) before matching.
Setting the threshold too low
A threshold of 0.7 or below causes a sharp increase in false positives. Start at 0.85 or higher and find the balance between recall and precision. You can plot an ROC curve to determine the optimal threshold.
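Given a set of ground-truth component positions, one way to pick the threshold empirically is to sweep candidate values over the matchTemplate result and count correct versus spurious detections at each; this sketch assumes a pixel tolerance for counting a hit as correct, and the helper name is illustrative:

```python
import numpy as np

def sweep_thresholds(result, truth, tol=5,
                     thresholds=np.arange(0.70, 0.99, 0.05)):
    """Count true/false positives at each candidate threshold.

    result: matchTemplate output matrix
    truth:  list of ground-truth (x, y) top-left positions
    tol:    max pixel distance for a detection to count as correct
    """
    stats = []
    for t in thresholds:
        ys, xs = np.where(result >= t)
        # A detection is a true positive if it lies within tol pixels
        # of some ground-truth point
        tp = sum(
            any(abs(x - gx) <= tol and abs(y - gy) <= tol for gx, gy in truth)
            for x, y in zip(xs, ys)
        )
        stats.append({
            "threshold": round(float(t), 2),
            "detections": int(len(xs)),
            "true_positives": int(tp),
            "false_positives": int(len(xs) - tp),
        })
    return stats
```

Plotting true positives against false positives across the sweep gives the curve from which the operating threshold can be read off.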