


Basic Motion Detection and Tracking with Python and OpenCV

Last updated on July 8, 2021.


That son of a bitch. I knew he took my last beer.

These are words a man should never, ever have to say. But I muttered them to myself in an exasperated sigh of disgust as I closed the door to my refrigerator.

You see, I had just spent over 12 hours writing content for the upcoming PyImageSearch Gurus course. My brain was fried, practically leaking out my ears like half-cooked scrambled eggs. And after calling it quits for the night, all I wanted to do was relax and watch my all-time favorite movie, Jurassic Park, while sipping an ice cold Finestkind IPA from Smuttynose, a brewery I have become quite fond of as of late.

But that son of a bitch James had come over last night and drunk my last beer.

Well, allegedly.

I couldn't actually prove anything. In reality, I didn't really see him drink the beer, as my face was buried in my laptop, fingers hovering above the keyboard, feverishly pounding out tutorials and articles. But I had a feeling he was the culprit. He is my only (ex-)friend who drinks IPAs.

So I did what any man would do.

I mounted a Raspberry Pi to the top of my kitchen cabinets to automatically detect if he tried to pull that beer stealing shit again:

Figure 1: Don't steal my damn beer. Otherwise I'll mount a Raspberry Pi + camera on top of my kitchen cabinets and catch you.

Excessive?

Perhaps.

But I take my beer seriously. And if James tries to steal my beer again, I'll catch him red-handed.

  • Update July 2021: Added new sections on alternative background subtraction and motion detection algorithms we can use with OpenCV.


A 2-part series on motion detection

This is the first post in a two-part series on building a motion detection and tracking system for home surveillance.

The remainder of this article will detail how to build a basic motion detection and tracking system for home surveillance using computer vision techniques. This example will work with both pre-recorded videos and live streams from your webcam; however, we'll be developing this system on our laptops/desktops.

In the second post in this series I'll show you how to update the code to work with your Raspberry Pi and camera board, and how to extend your home surveillance system to capture any detected motion and upload it to your personal Dropbox.

And maybe at the end of all this we can catch James red-handed…

A little bit about background subtraction

Background subtraction is critical in many computer vision applications. We use it to count the number of cars passing through a toll booth. We use it to count the number of people walking in and out of a store.

And we use it for motion detection.

Before we get started coding in this post, let me say that there are many, many ways to perform motion detection, tracking, and analysis in OpenCV. Some are very simple. And others are very complicated. The two primary methods are forms of Gaussian Mixture Model-based foreground and background segmentation:

  1. An improved adaptive background mixture model for real-time tracking with shadow detection by KaewTraKulPong et al., available through the cv2.BackgroundSubtractorMOG function.
  2. Improved adaptive Gaussian mixture model for background subtraction by Zivkovic, and Efficient Adaptive Density Estimation per Image Pixel for the Task of Background Subtraction, also by Zivkovic, available through the cv2.BackgroundSubtractorMOG2 function.

And in newer versions of OpenCV we have Bayesian (probability) based foreground and background segmentation, implemented from Godbehere et al.'s 2012 paper, Visual Tracking of Human Visitors under Variable-Lighting Conditions for a Responsive Audio Art Installation. We can find this implementation in the cv2.createBackgroundSubtractorGMG function (we'll be waiting for OpenCV 3 to fully play with this function though).
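If you want to play with one of these right away, here is a minimal sketch, assuming a modern OpenCV build where the Zivkovic method is exposed as cv2.createBackgroundSubtractorMOG2 (the MOG and GMG variants live in the opencv-contrib cv2.bgsegm module instead):

# a minimal sketch of GMM-based background subtraction; the parameter
# values below are OpenCV's defaults, written out here for clarity
import cv2

cap = cv2.VideoCapture(0)
subtractor = cv2.createBackgroundSubtractorMOG2(history=500,
	varThreshold=16, detectShadows=True)

while True:
	(grabbed, frame) = cap.read()
	if not grabbed:
		break

	# foreground pixels come back as 255, shadows as 127, background as 0
	fgmask = subtractor.apply(frame)
	cv2.imshow("Foreground Mask", fgmask)

	if cv2.waitKey(1) & 0xFF == ord("q"):
		break

cap.release()
cv2.destroyAllWindows()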

All of these methods are concerned with segmenting the background from the foreground (and they even provide mechanisms for us to discern between actual motion and just shadowing and small lighting changes)!

So why is this so important? And why do we care what pixels belong to the foreground and what pixels are part of the background?

Well, in motion detection, we tend to make the following assumption:

The background of our video stream is largely static and unchanging over consecutive frames of a video. Therefore, if we can model the background, we can monitor it for substantial changes. If there is a substantial change, we can detect it; this change normally corresponds to motion in our video.

Now obviously in the real world this assumption can easily fail. Due to shadowing, reflections, lighting conditions, and any other possible change in the environment, our background can look quite different in various frames of a video. And if the background appears to be different, it can throw our algorithms off. That's why the most successful background subtraction/foreground detection systems use fixed, mounted cameras in controlled lighting conditions.

The methods I mentioned above, while very powerful, are also computationally expensive. And since our end goal is to deploy this system to a Raspberry Pi at the end of this two-part series, it's best that we stick to simple approaches. We'll return to these more powerful methods in future blog posts, but for the time being we are going to keep it simple and efficient.

In the rest of this blog post, I'm going to detail (arguably) the most basic motion detection and tracking system you can build. It won't be perfect, but it will be able to run on a Pi and still deliver good results.

Basic motion detection and tracking with Python and OpenCV

Alright, are you ready to help me develop a home surveillance system to catch that beer stealing jackass?

Open up an editor, create a new file, name it motion_detector.py, and let's get coding:

# import the necessary packages
from imutils.video import VideoStream
import argparse
import datetime
import imutils
import time
import cv2

# construct the argument parser and parse the arguments
ap = argparse.ArgumentParser()
ap.add_argument("-v", "--video", help="path to the video file")
ap.add_argument("-a", "--min-area", type=int, default=500, help="minimum area size")
args = vars(ap.parse_args())

# if the video argument is None, then we are reading from webcam
if args.get("video", None) is None:
	vs = VideoStream(src=0).start()
	time.sleep(2.0)

# otherwise, we are reading from a video file
else:
	vs = cv2.VideoCapture(args["video"])

# initialize the first frame in the video stream
firstFrame = None

Lines 2-7 import our necessary packages. All of these should look pretty familiar, except perhaps the imutils package, which is a set of convenience functions that I have created to make basic image processing tasks easier. If you do not already have imutils installed on your system, you can install it via pip: pip install imutils.

Next up, we'll parse our command line arguments on Lines 10-13. We'll define two switches here. The first, --video, is optional. It simply defines a path to a pre-recorded video file that we can detect motion in. If you do not supply a path to a video file, then OpenCV will use your webcam to detect motion.

We'll also define --min-area, which is the minimum size (in pixels) for a region of an image to be considered actual "motion". As I'll discuss later in this tutorial, we'll often find small regions of an image that have changed substantially, likely due to noise or changes in lighting conditions. In reality, these small regions are not actual motion at all, so we'll define a minimum size of a region to combat and filter out these false-positives.

Lines 16-22 handle grabbing a reference to our vs object. In the case that a video file path is not supplied (Lines 16-18), we'll grab a reference to the webcam and wait for it to warm up. And if a video file is supplied, then we'll create a pointer to it on Lines 21 and 22.

Lastly, we'll finish this code snippet by defining a variable called firstFrame.

Any guesses as to what firstFrame is?

If you guessed that it stores the first frame of the video file/webcam stream, you're right.

Assumption: The first frame of our video file will contain no motion and just background; therefore, we can model the background of our video stream using only the first frame of the video.

Obviously we are making a pretty big assumption here. But again, our goal is to run this system on a Raspberry Pi, so we can't get too complicated. And as you'll see in the results section of this post, we are able to easily detect motion while tracking a person as they walk around the room.

# loop over the frames of the video
while True:
	# grab the current frame and initialize the occupied/unoccupied
	# text
	frame = vs.read()
	frame = frame if args.get("video", None) is None else frame[1]
	text = "Unoccupied"

	# if the frame could not be grabbed, then we have reached the end
	# of the video
	if frame is None:
		break

	# resize the frame, convert it to grayscale, and blur it
	frame = imutils.resize(frame, width=500)
	gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
	gray = cv2.GaussianBlur(gray, (21, 21), 0)

	# if the first frame is None, initialize it
	if firstFrame is None:
		firstFrame = gray
		continue

So now that we have a reference to our video file/webcam stream, we can start looping over each of the frames on Line 28.

A call to vs.read() on Line 31 returns a frame that we ensure we are grabbing properly on Line 32.

We'll also define a string named text and initialize it to indicate that the room we are monitoring is "Unoccupied". If there is indeed activity in the room, we can update this string.

And in the case that a frame is not successfully read from the video file, we'll break from the loop on Lines 37 and 38.

Now we can start processing our frame and preparing it for motion analysis (Lines 41-43). We'll first resize it down to have a width of 500 pixels; there is no need to process the large, raw images straight from the video stream. We'll also convert the image to grayscale since color has no bearing on our motion detection algorithm. Finally, we'll apply Gaussian blurring to smooth our images.

It's important to understand that even consecutive frames of a video stream will not be identical!

Due to tiny variations in the digital camera sensors, no two frames will be 100% the same; some pixels will most certainly have different intensity values. That said, we need to account for this and apply Gaussian smoothing to average pixel intensities across a 21 x 21 region (Line 43). This helps smooth out high frequency noise that could throw our motion detection algorithm off.
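If you want to see this noise for yourself, here is a quick throwaway sketch (my own addition, not part of the script we are building) that grabs two back-to-back frames of a static scene and prints their mean absolute difference; on a real camera it will almost never be zero:

# grab two consecutive frames of a (supposedly) static scene and
# measure how different they really are
from imutils.video import VideoStream
import time
import cv2

vs = VideoStream(src=0).start()
time.sleep(2.0)

frame1 = cv2.cvtColor(vs.read(), cv2.COLOR_BGR2GRAY)
frame2 = cv2.cvtColor(vs.read(), cv2.COLOR_BGR2GRAY)

# a perfectly noise-free sensor would print 0.0 here
print("mean absolute difference:", cv2.absdiff(frame1, frame2).mean())
vs.stop()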

As I mentioned above, we need to model the background of our image somehow. Again, we'll make the assumption that the first frame of the video stream contains no motion and is a good example of what our background looks like. If the firstFrame is not initialized, we'll store it for reference and continue on to processing the next frame of the video stream (Lines 46-48).

Here's an example of the first frame of an example video:

Figure 2: Example first frame of a video file. Notice how it's a still shot of the background, no motion is taking place.

The above frame satisfies the assumption that the first frame of the video is simply the static background; no motion is taking place.

Given this static background image, we're now ready to actually perform motion detection and tracking:

	# compute the absolute difference between the current frame and
	# first frame
	frameDelta = cv2.absdiff(firstFrame, gray)
	thresh = cv2.threshold(frameDelta, 25, 255, cv2.THRESH_BINARY)[1]

	# dilate the thresholded image to fill in holes, then find contours
	# on thresholded image
	thresh = cv2.dilate(thresh, None, iterations=2)
	cnts = cv2.findContours(thresh.copy(), cv2.RETR_EXTERNAL,
		cv2.CHAIN_APPROX_SIMPLE)
	cnts = imutils.grab_contours(cnts)

	# loop over the contours
	for c in cnts:
		# if the contour is too small, ignore it
		if cv2.contourArea(c) < args["min_area"]:
			continue

		# compute the bounding box for the contour, draw it on the frame,
		# and update the text
		(x, y, w, h) = cv2.boundingRect(c)
		cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
		text = "Occupied"

Now that we have our background modeled via the firstFrame variable, we can use it to compute the difference between the initial frame and subsequent new frames from the video stream.

Calculating the difference between two frames is a simple subtraction, where we take the absolute value of their corresponding pixel intensity differences (Line 52):

delta = |background_model – current_frame|

An example of a frame delta can be seen below:

Figure 3: An example of the frame delta, the difference between the original first frame and the current frame.

Notice how the background of the image is clearly black. However, regions that contain motion (such as the region of myself walking through the room) are much lighter. This implies that larger frame deltas indicate that motion is taking place in the image.

We'll then threshold the frameDelta on Line 53 to reveal regions of the image that only have significant changes in pixel intensity values. If the delta is less than 25, we discard the pixel and set it to black (i.e. background). If the delta is greater than 25, we'll set it to white (i.e. foreground). An example of our thresholded delta image can be seen below:

Figure 4: Thresholding the frame delta image to segment the foreground from the background.

Again, note that the background of the image is black, whereas the foreground (and where the motion is taking place) is white.

Given this thresholded image, it's simple to apply contour detection to find the outlines of these white regions (Lines 58-60).

We start looping over each of the contours on Line 63, where we'll filter out the small, irrelevant contours on Lines 65 and 66.

If the contour area is larger than our supplied --min-area, we'll draw the bounding box surrounding the foreground and motion region on Lines 70 and 71. We'll also update our text status string to indicate that the room is "Occupied".

	# draw the text and timestamp on the frame
	cv2.putText(frame, "Room Status: {}".format(text), (10, 20),
		cv2.FONT_HERSHEY_SIMPLEX, 0.5, (0, 0, 255), 2)
	cv2.putText(frame, datetime.datetime.now().strftime("%A %d %B %Y %I:%M:%S%p"),
		(10, frame.shape[0] - 10), cv2.FONT_HERSHEY_SIMPLEX, 0.35, (0, 0, 255), 1)

	# show the frame and record if the user presses a key
	cv2.imshow("Security Feed", frame)
	cv2.imshow("Thresh", thresh)
	cv2.imshow("Frame Delta", frameDelta)
	key = cv2.waitKey(1) & 0xFF

	# if the `q` key is pressed, break from the loop
	if key == ord("q"):
		break

# cleanup the camera and close any open windows
vs.stop() if args.get("video", None) is None else vs.release()
cv2.destroyAllWindows()

The rest of this example simply wraps everything up. We draw the room status on the image in the top-left corner, followed by a timestamp (to make it feel like "real" security footage) in the bottom-left.

Lines 81-83 display the results of our work, allowing us to visualize if any motion was detected in our video, along with the frame delta and thresholded image so we can debug our script.

Note: If you download the code to this post and intend to apply it to your own video files, you'll likely need to tune the values for cv2.threshold and the --min-area argument to obtain the best results for your lighting conditions.
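One way to make that tuning easier (just a suggestion of mine; the --threshold switch below is not part of the original script) is to expose the threshold value as its own command line argument alongside --min-area:

# hypothetical tweak: make the frame delta threshold tunable from the
# command line instead of hardcoding 25
import argparse

ap = argparse.ArgumentParser()
ap.add_argument("-v", "--video", help="path to the video file")
ap.add_argument("-a", "--min-area", type=int, default=500,
	help="minimum area size")
ap.add_argument("-t", "--threshold", type=int, default=25,
	help="minimum frame delta intensity to mark a pixel as motion")
args = vars(ap.parse_args())

# ...and inside the processing loop, replace the hardcoded value with:
# thresh = cv2.threshold(frameDelta, args["threshold"], 255,
# 	cv2.THRESH_BINARY)[1]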

Finally, Lines 91 and 92 clean up and release the video stream pointer.

Results

Obviously I want to make sure that our motion detection system is working before James, the beer stealer, pays me a visit again; we'll save that for Part 2 of this series. To test out our motion detection system using Python and OpenCV, I have created two video files.

The first, example_01.mp4, monitors the front door of my apartment and detects when the door opens. The second, example_02.mp4, was captured using a Raspberry Pi mounted to my kitchen cabinets. It looks down on the kitchen and living room, detecting motion as people move and walk around.

Let's give our simple detector a try. Open up a terminal and execute the following command:

$ python motion_detector.py --video videos/example_01.mp4          

Below is a .gif of a few still frames from the motion detection:

Figure 5: A few example frames of our motion detection system in Python and OpenCV in action.

Notice how no motion is detected until the door opens; then we are able to detect myself walking through the door. You can see the full video here:

Now, what about when I mount the camera such that it's looking down on the kitchen and living room? Let's find out. Just issue the following command:

$ python motion_detector.py --video videos/example_02.mp4          

A sampling of the results from the second video file can be seen below:

Figure 6: Again, our motion detection system is able to track a person as they walk around a room.

And again, here is the full video of our motion detection results:

So as you can see, our motion detection system is performing fairly well despite how simplistic it is! We are able to detect as I am entering and leaving a room without a problem.

However, to be realistic, the results are far from perfect. We get multiple bounding boxes even though there is only one person moving around the room, which is far from ideal. And we can clearly see that small changes to the lighting, such as shadows and reflections on the wall, trigger false-positive motion detections.

To combat this, we can lean on the more powerful background subtraction methods in OpenCV, which can actually account for shadowing and small amounts of reflection (I'll be covering the more advanced background subtraction/foreground detection methods in future blog posts).

But for the meantime, consider our end goal.

This system, while developed on our laptop/desktop systems, is meant to be deployed to a Raspberry Pi where the computational resources are very limited. Because of this, we need to keep our motion detection methods simple and fast. An unfortunate downside to this is that our motion detection system is not perfect, but it still does a fairly good job for this particular project.

Finally, if you want to perform motion detection on your own raw video stream from your webcam, simply leave off the --video switch:

$ python motion_detector.py

Alternative motion detection algorithms in OpenCV

The motion detection algorithm we implemented here today, while simple, is unfortunately very sensitive to any changes in the input frames.

This is primarily due to the fact that we are grabbing the very first frame from our camera sensor, treating it as our background, and then comparing the background to every subsequent frame, looking for any changes. If a change is detected, we record it as motion.

However, this method can quickly fall apart if you are working with varying lighting conditions.

For example, suppose you are monitoring the garage outside your house for intruders. Since your garage is outside, lighting conditions will change due to rain, clouds, the movement of the sun, nightfall, etc.

If you were to choose a single static frame and treat it as your background in such a condition, then it's likely that within hours (and maybe even minutes, depending on the situation) the brightness of the entire outdoor scene would change, thus causing false-positive motion detections.

The way you get around this problem is to maintain a rolling average of the past N frames and treat this "averaged frame" as your background. You then compare the averaged set of frames to the current frame, looking for substantial differences.

The following tutorial will teach you how to implement the method I just discussed.
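In the meantime, here is a minimal sketch of that rolling average, built on cv2.accumulateWeighted (the 0.5 weight is an arbitrary choice for illustration; smaller values make the background adapt more slowly):

# maintain a weighted running average of past frames as the background
from imutils.video import VideoStream
import imutils
import time
import cv2

vs = VideoStream(src=0).start()
time.sleep(2.0)
avg = None

while True:
	# read, resize, grayscale, and blur the frame, just as before
	frame = imutils.resize(vs.read(), width=500)
	gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
	gray = cv2.GaussianBlur(gray, (21, 21), 0)

	# seed the running average with the first frame
	if avg is None:
		avg = gray.copy().astype("float")
		continue

	# fold the current frame into the average, then diff against it
	cv2.accumulateWeighted(gray, avg, 0.5)
	frameDelta = cv2.absdiff(gray, cv2.convertScaleAbs(avg))

	cv2.imshow("Frame Delta", frameDelta)
	if cv2.waitKey(1) & 0xFF == ord("q"):
		break

vs.stop()
cv2.destroyAllWindows()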

Alternatively, OpenCV implements a number of background subtraction algorithms that you can use; see the sketch after this list:

  • OpenCV: How to Use Background Subtraction Methods
  • Background Subtraction with OpenCV and BGS Libraries
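To show how one of these would slot into the pipeline we built above, here is a hedged sketch that swaps our first-frame delta for OpenCV's KNN subtractor while keeping the same contour logic (the shadow-filtering threshold of 200 and the hardcoded minimum area of 500 are my own choices, not values from the original script):

# replace the first-frame delta with a proper background subtractor
import imutils
import cv2

vs = cv2.VideoCapture("videos/example_02.mp4")
subtractor = cv2.createBackgroundSubtractorKNN(detectShadows=True)

while True:
	(grabbed, frame) = vs.read()
	if not grabbed:
		break

	frame = imutils.resize(frame, width=500)
	fgmask = subtractor.apply(frame)

	# drop shadow pixels (marked as 127) and keep confident foreground (255)
	thresh = cv2.threshold(fgmask, 200, 255, cv2.THRESH_BINARY)[1]
	thresh = cv2.dilate(thresh, None, iterations=2)

	# the contour filtering logic is identical to our original script
	cnts = cv2.findContours(thresh.copy(), cv2.RETR_EXTERNAL,
		cv2.CHAIN_APPROX_SIMPLE)
	cnts = imutils.grab_contours(cnts)
	for c in cnts:
		if cv2.contourArea(c) < 500:
			continue
		(x, y, w, h) = cv2.boundingRect(c)
		cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)

	cv2.imshow("Security Feed", frame)
	if cv2.waitKey(1) & 0xFF == ord("q"):
		break

vs.release()
cv2.destroyAllWindows()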


Summary

In this blog post we found out that my friend James is a beer stealer. What an asshole.

And in order to catch him red-handed, we have decided to build a motion detection and tracking system using Python and OpenCV. While basic, this system is capable of taking video streams and analyzing them for motion while obtaining fairly reasonable results given the limitations of the method we utilized.

The end goal of this system is to deploy it to a Raspberry Pi, so we did not leverage some of the more advanced background subtraction methods in OpenCV. Instead, we relied on a simple yet reasonably effective assumption: that the first frame of our video stream contains the background we want to model and nothing more.

Under this assumption we were able to perform background subtraction, detect motion in our images, and draw a bounding box surrounding the region of the image that contains motion.

In the second part of this series on motion detection, we'll be updating this code to run on the Raspberry Pi.

We'll also be integrating with the Dropbox API, allowing us to monitor our home surveillance system and receive real-time updates whenever our system detects motion.

Stay tuned!


Source: https://pyimagesearch.com/2015/05/25/basic-motion-detection-and-tracking-with-python-and-opencv/
