Corner Detection
Learn to detect corners in images.
Corners are one of the most essential features in an image. Corners are points in an image where the direction of the intensity gradient changes abruptly. This means that the gradient vectors around a corner point have different orientations. These regions are often distinctive and can be used as landmarks or reference points for various computer vision tasks. Corner detection is very important for patch mapping, which is used to produce texture maps for geometric models of real-world objects.
We’ll use Harris corner detection to detect corners in this lesson. This algorithm, created by Chris Harris and Mike Stephens, computes the horizontal and vertical derivatives of the image and looks for regions where the intensity changes strongly in every direction.
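For reference, the score that the detector assigns to each pixel is the standard Harris response:

R = det(M) - k * (trace(M))^2

Here, M is the 2x2 matrix of gradient products (Ix*Ix, Ix*Iy; Ix*Iy, Iy*Iy) summed over a small window around the pixel, Ix and Iy are the horizontal and vertical derivatives, and k is the sensitivity constant we’ll pass to the detector below. Pixels with a large R value are likely corners.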
But first, we need to convert our image to grayscale. As mentioned previously, this simplifies the algorithm and reduces the computational requirements of the Harris corner detection algorithm.
cv::cvtColor(img, src_gray, cv::COLOR_BGR2GRAY);
Corner detection
For corner detection, we use the cornerHarris() function of the OpenCV library. The syntax of the code is as follows:
cv::cornerHarris(gray, output, blockSize, apertureSize, k);
This function requires five parameters:
- gray is the input image in which corners are to be detected. It should be a single-channel 8-bit or floating-point image.
- output is the output image that will contain the corner strength value for each pixel. It should be a single-channel 32-bit floating-point image.
- blockSize is the size of the neighborhood used for corner detection. It’s typically set to a small odd value, such as 3 or 5.
- apertureSize is the aperture parameter for the Sobel operator, which is used to calculate the gradient of the image. It’s typically set to 3.
- k is the Harris corner detector free parameter, which is used to adjust the sensitivity of the detector. Larger values will result in fewer corners being detected.
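As a minimal sketch, a typical call might look like the following. Here, src_gray is the grayscale image produced by cvtColor() above, and the values 3, 3, and 0.04 are illustrative choices rather than fixed requirements:

// Corner strength map: a single-channel 32-bit floating-point image.
cv::Mat output = cv::Mat::zeros(src_gray.size(), CV_32FC1);
int blockSize = 3;      // size of the neighborhood considered for each pixel
int apertureSize = 3;   // aperture of the Sobel operator
double k = 0.04;        // Harris detector free parameter
cv::cornerHarris(src_gray, output, blockSize, apertureSize, k);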
The function applies the Harris corner detection algorithm to the input image, and the resulting corner strength values are stored in the output image. The higher the corner strength value, the more likely it is that the pixel is a corner. The output of the Harris detector is not thresholded; it’s just a 32-bit float matrix with the corner strength at each pixel. After this, we use normalize() and convertScaleAbs() to scale the result into an 8-bit image that we can threshold.
The normalize() function
After applying the cornerHarris() function, the normalize() function is applied to the corner strength image to scale the corner strength values between 0 and 255.
The syntax of the function is as follows:
cv::normalize(output, output_norm, 0, 255, cv::NORM_MINMAX, CV_32FC1, cv::Mat());
It has seven arguments:
- output is the input image.
- output_norm is the output image.
- 0 is the lower bound of the output range.
- 255 is the upper bound of the output range.
- NORM_MINMAX is the normalization type.
- CV_32FC1 is the output image type.
- Mat() is the mask.
After applying the normalize() function, the convertScaleAbs() function is applied to the normalized corner strength image to get an 8-bit image containing the corners. The syntax of the function is:
cv::convertScaleAbs(output_norm, output_norm_scaled);
It has two arguments:
- output_norm is the input image.
- output_norm_scaled is the output image.
We now have an 8-bit image whose pixel values can be compared against a threshold intensity to identify the corner points.
Pointing to corners
First, we use a nested for loop to iterate through each pixel in the image. Then we compare each pixel’s corner strength to a threshold to decide whether it’s a corner, as shown in the sketch below.
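A minimal sketch of this step, assuming a threshold of 100 (an illustrative value) and that img is the original color image loaded earlier, is:

int thresh = 100;  // illustrative threshold on the 8-bit corner strength values
for (int i = 0; i < output_norm_scaled.rows; i++)
{
    for (int j = 0; j < output_norm_scaled.cols; j++)
    {
        // If the corner strength at this pixel exceeds the threshold, treat it as a corner.
        if ((int)output_norm_scaled.at<uchar>(i, j) > thresh)
        {
            // Mark the detected corner with a small circle on the original image.
            cv::circle(img, cv::Point(j, i), 4, cv::Scalar(0, 0, 255), 2);
        }
    }
}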