Real-time Face Detection with Image Capture and Beep Alert in Python


Overview

The program uses Python's OpenCV (cv2) library to detect faces in real time from a camera feed. It loads a pre-trained face detection classifier, reads each frame from the video feed, and converts it to grayscale. The classifier then detects faces in the grayscale frame; if any are found, the program plays an alert sound and saves a snapshot of the frame to a file. It also draws a green rectangle around each detected face and displays the resulting frame on the screen. The program exits when the 'q' key is pressed, releasing the camera and closing the window.

Copy the full code below or download it from GitHub.

Understanding the Code

import cv2
import playsound

These are the import statements for the required libraries: OpenCV (cv2) for image processing and video capture, and playsound for playing an audio alert.


face_cascade = cv2.CascadeClassifier('haarcascade_frontalface_default.xml')

This line loads the pre-trained face detection classifier from an XML file called haarcascade_frontalface_default.xml. This file contains a Haar cascade model trained to detect frontal faces in an image or video; it must be in the working directory (or given as a full path) for the load to succeed.


cap = cv2.VideoCapture(0)

This line creates a VideoCapture object cap and opens the default camera (index 0) to capture the video feed.


while True:
    ret, frame = cap.read()

This starts an infinite loop that continuously captures video frames from the camera using the cap.read() method. The ret variable is a boolean that indicates whether a frame was successfully captured, and frame is a NumPy array containing the frame data (or None when the read fails).


gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

This line converts the color video frame to grayscale using the cv2.cvtColor() method. Grayscale images are easier to process and use a third of the memory of color images, and Haar cascade detection works on pixel intensities, so color information is not needed.


faces = face_cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5, minSize=(30, 30))

This line applies the face detector to the grayscale frame using the detectMultiScale() method of the classifier face_cascade. Its parameters control the sensitivity and accuracy of detection: scaleFactor (1.1) shrinks the search scale by 10% on each pass over the image (smaller values are more thorough but slower), minNeighbors (5) is the number of overlapping candidate detections required before a region is reported as a face (higher values mean fewer false positives), and minSize ((30, 30)) discards candidate regions smaller than 30x30 pixels. The method returns a sequence of (x, y, w, h) bounding boxes.


if len(faces) > 0:
    playsound.playsound('alert.wav')
    cv2.imwrite('detected/user.jpg', frame)

If the faces variable contains one or more detected faces, the program plays an audio alert using the playsound.playsound() method and saves a snapshot of the video frame using the cv2.imwrite() method. Two things to watch for: playsound blocks until the sound finishes, which pauses the video loop, and cv2.imwrite() returns False without raising an error if the detected folder does not exist.


for (x, y, w, h) in faces:
    cv2.rectangle(frame, (x, y), (x+w, y+h), (0, 255, 0), 2)

This loop draws a rectangle around each detected face in the original color frame using the cv2.rectangle() method. Each rectangle is defined by its top-left corner (x, y) and bottom-right corner (x+w, y+h); (0, 255, 0) is green in BGR order, and 2 is the border thickness in pixels.


cv2.imshow('Video', frame)

This line displays the color frame, with bounding boxes drawn around any detected faces, using the cv2.imshow() method.

if cv2.waitKey(1) == ord('q'):
    break

This line waits up to one millisecond for a keyboard event (the wait also gives OpenCV time to refresh the display window) and checks whether the 'q' key was pressed. If so, the program exits the infinite loop and proceeds to release the camera and close the window. On some platforms waitKey() returns extra high bits, so the comparison is often written as cv2.waitKey(1) & 0xFF == ord('q').

cap.release()
cv2.destroyAllWindows()

These lines release the camera resources and close the OpenCV window.

Copy the Full Code

import cv2
import playsound

# Load the pre-trained face detection classifier
face_cascade = cv2.CascadeClassifier('haarcascade_frontalface_default.xml')

# Open the default camera
cap = cv2.VideoCapture(0)

while True:
    # Read the video frame
    ret, frame = cap.read()
    if not ret:
        break  # camera unavailable or stream ended

    # Convert to grayscale
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

    # Detect faces in the frame
    faces = face_cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5, minSize=(30, 30))

    # Play a sound if a face is detected
    if len(faces) > 0:
        playsound.playsound('alert.wav')

        # Save a snapshot of the frame
        cv2.imwrite('detected/user.jpg', frame)

    # Draw a rectangle around the detected faces
    for (x, y, w, h) in faces:
        cv2.rectangle(frame, (x, y), (x+w, y+h), (0, 255, 0), 2)

    # Show the video frame
    cv2.imshow('Video', frame)

    # Exit the program if the 'q' key is pressed
    if cv2.waitKey(1) == ord('q'):
        break

# Release the camera and close the window
cap.release()
cv2.destroyAllWindows()

Happy Coding...




© Copyright 2024 & 2025 - Team Krope - All rights reserved