Program of the Week 03 - Motion Detection Using OpenCV

So this week, as promised, I'll delve a bit into OpenCV. I've used OpenCV for object detection and facial recognition in the past, but I won't go that deep into it for the time being. Today we'll take on something much simpler; okay, not as simple as opening up an image and changing some colors, or applying grayscale, blurs and the like. I'll be writing a small program on motion detection using differences between frames and contours. A contour is basically an outline that bounds the shape of an object; we'll use contours to pinpoint where motion is happening in the image.

This is a fairly simple example where I'll be using my webcam to detect motion in the video feed. Before comparing frames, we'll apply some transformations to the incoming video and use the results to detect motion. There are several programs for this online already, but I'll build this one here as a start.

Code:

# Importing the required libraries
import cv2
import pandas as pd
from datetime import datetime

# Initializing a static frame for comparison
static_init = None
# List of the last two motion states (0 = no motion, 1 = motion)
motion_list = [None, None]
# List of timestamps marking when motion starts and ends
time = []

# Now this is a bit overkill, but it'll help us keep track of stuff
# Create a dataframe to record start and end times of movement from the video
df_motion = pd.DataFrame(columns=["Start_Time", "End_Time"])

# Create the video capture object
capture = cv2.VideoCapture(0)

# Infinite loop to go through the video frames for comparison
while True:

    # Read the frame from the video
    check, frame = capture.read()
    # Stop if the camera fails to deliver a frame
    if not check:
        break
    
    # Initialize the motion to 0 - No Motion
    motion = 0
    
    # Convert BGR image (color) to Gray image (grayscale)
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    
    # Adding Gaussian blur to detect motion easily
    blur = cv2.GaussianBlur(gray, (21,21), 0)
    
    # When video starts we assign the first frame to static_init
    if static_init is None:
        static_init = blur
        continue
    
    # Compute the difference between the static init frame and the current frame
    diff_frame = cv2.absdiff(static_init, blur)
    
    # Any differences between the frames show up as brighter pixels. We now keep
    # only the pixels whose intensity difference is greater than 30
    threshold_frame = cv2.threshold(diff_frame, 30, 255, cv2.THRESH_BINARY)[1]
    threshold_frame = cv2.dilate(threshold_frame, None, iterations = 2)
    
    # Finding the contour of the moving objects (shapes)
    contours, _ = cv2.findContours(threshold_frame.copy(),
                                   cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    
    for contour in contours:
        if cv2.contourArea(contour) < 10000:
            continue
        motion = 1
        
        (x,y,w,h) = cv2.boundingRect(contour)
        
        # Make a rect around the moving object
        cv2.rectangle(frame, (x,y), (x + w, y + h), (0,255,0), 3)
    
    # Add the current motion state and keep only the last two
    motion_list.append(motion)
    motion_list = motion_list[-2:]
    
    # Appending Start time of motion 
    if motion_list[-1] == 1 and motion_list[-2] == 0: 
        time.append(datetime.now()) 
    
    # Appending End time of motion 
    if motion_list[-1] == 0 and motion_list[-2] == 1: 
        time.append(datetime.now()) 
    
    # Displaying the blurred grayscale frame
    cv2.imshow("Gray Frame", blur) 
    
    # Displaying the difference between the current frame and the very first frame
    cv2.imshow("Difference Frame", diff_frame) 
    
    # Displaying the black and white image in which,
    # if intensity difference greater than 30, it will appear white 
    cv2.imshow("Threshold Frame", threshold_frame) 
    
    # Displaying color frame with contour of motion of object 
    cv2.imshow("Color Frame", frame)
    
    k = cv2.waitKey(5) & 0xFF
    # Break the loop if the 'Esc' key is pressed (waitKey waits 5 ms per frame)
    if k == 27:
        if motion == 1:
            time.append(datetime.now())
        break
        
# Add all the start/end timestamp pairs of motion to the dataframe
# (DataFrame.append was removed in pandas 2.0, so use .loc instead)
for i in range(0, len(time) - 1, 2):
    df_motion.loc[len(df_motion)] = [time[i], time[i + 1]]


# Cleanup - Very Important to close all windows and video stream
capture.release()
cv2.destroyAllWindows()


This piece of code is self-explanatory! Just kidding. The initial step is to transform the frame. The frame we get is in BGR format, and we convert it to grayscale for two reasons: first, we don't really need the colors in the frame, and second, finding the difference between two frames is easier and faster on a single-channel image. Next we apply a Gaussian blur to reduce noise in the frame. The kernel of the blur, (21, 21) here, should be adjusted depending on the resolution of the video. In the video you'll notice that the feed looks a bit soft and some parts become a bit blurry, so pick a kernel size appropriate for your setup.

After this initial transform, we compare each incoming frame against the static first frame, which gives us the difference frame. In the video you can see that the difference is very faint. While we could use this difference directly, if the object is very fast or there is motion blur in the video feed, the difference can be so faint that it leads to anomalies. That's where cv2.threshold comes into the picture. If the value of a pixel is above a certain threshold (30 here), this function assigns it a value of 255 (white in our case); otherwise it assigns 0 (black). It is applied here to the grayscale difference image. With that done, we apply cv2.dilate to the image to thicken the white regions, which often helps connect disconnected blobs belonging to the same moving object. What we then have in our threshold image is a set of white blobs that represent motion. We detect their contours using cv2.findContours(), filter out the small ones, and loop over the rest to draw bounding rectangles that highlight the motion in our final color frame video feed.

You can check out the video of the process below. If you have any questions or suggestions do post them in the comments below or on the video on YouTube. If you have suggestions for any programs or concepts that you would like me to cover, do comment here or reach out to me. Thanks for all the support so far. Next time I'll try to post my weekly articles on time. This time it got delayed because of the video.

Three down forty nine to go!
