Truck drivers, who haul goods and heavy loads long distances during the day and night, often suffer from sleep deprivation. Fatigue and drowsiness are some of the leading causes of major accidents on the highway. The auto industry is working on technologies that can detect drowsiness and alert drivers.

In this project we will use Raspberry Pi, OpenCV and Pi Camera Module to build a sleep sensing and alert system for the driver. The basic purpose of the system is to track the driver’s facial condition and eye movements, and if the driver feels drowsy, the system will trigger a warning message. This is an extension of our previous facial landmark detection and face recognition applications.

Required Components

Hardware Components

Raspberry Pi 3

Pi Camera Module

Micro USB data cable

Buzzer

Software and Online Services

OpenCV

dlib

Python3

Before proceeding with this driver drowsiness detection project, we first need to install OpenCV, imutils, dlib, NumPy, and some other dependencies. OpenCV is used here for digital image processing; the most common applications of digital image processing are object detection, face recognition, and people counting.

Here we only use Raspberry Pi, Pi Camera and buzzer to build this sleep detection system.

Install OpenCV on Raspberry Pi

The Raspberry Pi needs to be fully updated before installing OpenCV and other dependencies. Update the Raspberry Pi to its latest version with the following command:

sudo apt-get update

Then install the dependencies required to install OpenCV on the Raspberry Pi using the following command.

sudo apt-get install libhdf5-dev -y
sudo apt-get install libhdf5-serial-dev -y
sudo apt-get install libatlas-base-dev -y
sudo apt-get install libjasper-dev -y
sudo apt-get install libqtgui4 -y
sudo apt-get install libqt4-test -y

Finally, install OpenCV on Raspberry Pi using the following command.

pip3 install opencv-contrib-python==4.1.0.25

Install other required packages

Before programming the sleepiness detector on the Raspberry Pi, let’s install other required packages.

Install dlib: dlib is a modern toolkit containing machine learning algorithms and tools for solving real-world problems. Install dlib using the following command.

pip3 install dlib

Install NumPy: NumPy is the core library for scientific computing in Python, which contains powerful n-dimensional array objects and provides tools for integrating with C, C++, etc.

pip3 install numpy

Install the face_recognition module: a library for recognizing and manipulating human faces from Python or the command line. Install the face recognition library using the following command.

pip3 install face_recognition

Finally, install the eye_game library with the following command:

pip3 install eye_game

Programming the Raspberry Pi

The full code for the Driver Drowsiness Detector using OpenCV is given at the end of the page. Here, we will explain some important parts of the code for better understanding.

So, as usual, start the code by including all required libraries.

import face_recognition
import cv2
import numpy as np
import time
import eye_game
import RPi.GPIO as GPIO

After that, create an instance to get the video feed from the Pi camera. If you have multiple cameras attached, change the index in the cv2.VideoCapture(0) function from 0 to 1 to select the second camera.

video_capture = cv2.VideoCapture(0)

Now, in the next few lines, enter the filename and path of the reference image; in my case, both the code and the image are in the same folder. Then compute the face encoding from that image.

img_image = face_recognition.load_image_file("img.jpg")
img_face_encoding = face_recognition.face_encodings(img_image)[0]
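Under the hood, face_encodings returns a 128-dimensional NumPy vector for each face, and two encodings belong to the same person when the Euclidean distance between them is small. A minimal NumPy-only sketch of that idea (the vectors below are random stand-ins, not real encodings):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical 128-d encodings; real ones come from face_recognition.face_encodings()
known = rng.normal(size=128)
same_person = known + rng.normal(scale=0.05, size=128)  # slight variation of the same face
stranger = rng.normal(size=128)                         # an unrelated face

def face_distance(a, b):
    # Euclidean distance, the same measure face_recognition.face_distance uses
    return np.linalg.norm(a - b)

print(face_distance(known, same_person))  # small
print(face_distance(known, stranger))     # large
```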

Then create two arrays to hold the known face encodings and their names. I’m only using one image; you can add more images and their paths to the code.

known_face_encodings = [
    img_face_encoding
]
known_face_names = [
    "Ash"
]

Then create some variables to store the face locations, face encodings, and face names.

face_locations = []
face_encodings = []
face_names = []
process_this_frame = True

Inside the while loop, video frames are captured from the stream, resized to a smaller size, and converted to RGB colors for face recognition.

ret, frame = video_capture.read()
small_frame = cv2.resize(frame, (0, 0), fx=0.25, fy=0.25)
rgb_small_frame = small_frame[:, :, ::-1]
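The slice [:, :, ::-1] in the last line simply reverses the channel axis, converting OpenCV’s BGR channel order into the RGB order that face_recognition expects. A NumPy-only sketch on a single pixel:

```python
import numpy as np

# A 1x1 "image" whose only pixel is B=10, G=20, R=30 (OpenCV's BGR order)
bgr = np.array([[[10, 20, 30]]], dtype=np.uint8)

# Reversing the last axis swaps the first and third channels: BGR -> RGB
rgb = bgr[:, :, ::-1]

print(rgb[0, 0])  # [30 20 10], i.e. R=30, G=20, B=10
```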

After that, the face recognition process runs to compare the faces in the video frame with the reference image and to get the face locations and encodings.

if process_this_frame:
    face_locations = face_recognition.face_locations(rgb_small_frame)
    face_encodings = face_recognition.face_encodings(rgb_small_frame, face_locations)
    cv2.imwrite(file, small_frame)

If the recognized face matches the face in the reference image, the eye_game function is called to track eye movement. The code repeatedly tracks the eyeballs and their position.

face_distances = face_recognition.face_distance(known_face_encodings, face_encoding)
best_match_index = np.argmin(face_distances)
if matches[best_match_index]:
    name = known_face_names[best_match_index]
    direction = eye_game.get_eyeball_direction(file)
    print(direction)
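face_distance returns one distance per known face, and np.argmin selects the index of the closest one. A small sketch with made-up names and distances shows how best_match_index picks the name:

```python
import numpy as np

known_face_names = ["Ash", "Sam", "Lee"]       # hypothetical known faces
face_distances = np.array([0.62, 0.31, 0.55])  # made-up distances to each known face

best_match_index = np.argmin(face_distances)   # index of the smallest distance
name = known_face_names[best_match_index]

print(best_match_index, name)  # 1 Sam
```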

If the code doesn’t detect any eye movement for 10 seconds, it triggers an alarm to wake the person up.

else:
    count = 1 + count
    print(count)
    if count >= 10:
        GPIO.output(buzzer, GPIO.HIGH)
        time.sleep(2)
        GPIO.output(buzzer, GPIO.LOW)
        print("Alert! Alert! Driver drowsiness detected")
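The alarm condition boils down to counting consecutive frames in which the detected eyeball direction does not change. The stdlib-only sketch below isolates that counter; note that it also resets the count when the eyes move, which is an assumption on my part, and the direction sequence standing in for eye_game output is made up:

```python
ALARM_THRESHOLD = 10  # same threshold as the project code

def frames_until_alarm(directions):
    """Return the frame index at which the alarm would fire,
    or None if the eyes never stay still long enough."""
    previous = "unknown"
    count = 0
    for i, direction in enumerate(directions):
        if previous != direction:
            previous = direction  # movement detected
            count = 0             # reset the stillness counter (assumption)
        else:
            count = count + 1
            if count >= ALARM_THRESHOLD:
                return i
    return None

# Three frames of movement, then the eyes stay still long enough to trigger
sequence = ["left", "right", "center"] + ["center"] * 12
print(frames_until_alarm(sequence))  # 12
```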

Then use OpenCV functions to draw a rectangle around the face and put the recognized name on it. The video frames are displayed with the cv2.imshow function, and pressing the 'q' key stops the code.

cv2.rectangle(frame, (left, top), (right, bottom), (0, 255, 0), 2)
cv2.rectangle(frame, (left, bottom - 35), (right, bottom), (0, 0, 255), cv2.FILLED)
font = cv2.FONT_HERSHEY_DUPLEX
cv2.putText(frame, name, (left + 6, bottom - 6), font, 1.0, (0, 0, 255), 1)
cv2.imshow('Video', frame)
if cv2.waitKey(1) & 0xFF == ord('q'):
    break

Test Driver Drowsiness Detection System

Once the code is ready, connect the Pi camera and buzzer to the Raspberry Pi and run the code. After about 10 seconds, a window will appear with a live stream from the Raspberry Pi camera. When the device recognizes a face, it prints the name on the frame and starts tracking eye movements. Now test the alarm by closing your eyes for 7 to 8 seconds. When the count reaches 10, the system triggers the alarm to alert you of the situation.

Complete Code
import face_recognition
import cv2
import numpy as np
import time
import eye_game
import RPi.GPIO as GPIO

GPIO.setmode(GPIO.BCM)
GPIO.setwarnings(False)
buzzer = 23
GPIO.setup(buzzer, GPIO.OUT)
previous = "unknown"
count = 0
video_capture = cv2.VideoCapture(0)
file = 'image_data/image.jpg'
# Load a sample image and learn how to recognize it
img_image = face_recognition.load_image_file("img.jpg")
img_face_encoding = face_recognition.face_encodings(img_image)[0]
# Create arrays of known face encodings and their names
known_face_encodings = [
    img_face_encoding
]
known_face_names = [
    "Ash"
]
# Initialize some variables
face_locations = []
face_encodings = []
face_names = []
process_this_frame = True
while True:
    # Grab a single frame of video
    ret, frame = video_capture.read()
    # Resize the video frame to 1/4 size to speed up face recognition processing
    small_frame = cv2.resize(frame, (0, 0), fx=0.25, fy=0.25)
    # Convert the image from BGR color (used by OpenCV) to RGB color (used by face_recognition)
    rgb_small_frame = small_frame[:, :, ::-1]
    # Only process every other frame of video to save time
    if process_this_frame:
        # Find all faces and face encodings in the current video frame
        face_locations = face_recognition.face_locations(rgb_small_frame)
        face_encodings = face_recognition.face_encodings(rgb_small_frame, face_locations)
        cv2.imwrite(file, small_frame)
        face_names = []
        for face_encoding in face_encodings:
            # Check whether the face matches a known face
            matches = face_recognition.compare_faces(known_face_encodings, face_encoding)
            name = "unknown"
            face_distances = face_recognition.face_distance(known_face_encodings, face_encoding)
            best_match_index = np.argmin(face_distances)
            if matches[best_match_index]:
                name = known_face_names[best_match_index]
                direction = eye_game.get_eyeball_direction(file)
                print(direction)
                # eye_game.api.get_eyeball_direction(cv_image_array)
                if previous != direction:
                    previous = direction
                else:
                    print("Same old")
                    count = 1 + count
                    print(count)
                    if count >= 10:
                        GPIO.output(buzzer, GPIO.HIGH)
                        time.sleep(2)
                        GPIO.output(buzzer, GPIO.LOW)
                        print("Alert! Alert! Driver drowsiness detected")
                        cv2.putText(frame, "Drowsiness alarm!", (10, 30),
                                    cv2.FONT_HERSHEY_SIMPLEX, 0.7, (0, 0, 255), 2)
            face_names.append(name)
    process_this_frame = not process_this_frame
    # Show the results
    for (top, right, bottom, left), name in zip(face_locations, face_names):
        # Scale the face locations back up, since the detection frame was scaled to 1/4 size
        top *= 4
        right *= 4
        bottom *= 4
        left *= 4
        # Draw a box around the face
        cv2.rectangle(frame, (left, top), (right, bottom), (0, 255, 0), 2)
        # Draw a label with the name below the face
        cv2.rectangle(frame, (left, bottom - 35), (right, bottom), (0, 0, 255), cv2.FILLED)
        font = cv2.FONT_HERSHEY_DUPLEX
        cv2.putText(frame, name, (left + 6, bottom - 6), font, 1.0, (0, 0, 255), 1)
        # cv2.putText(frame, frame_string, (left + 10, top - 10), font, 1.0, (255, 255, 255), 1)
    # Display the resulting image
    cv2.imshow('Video', frame)
    # Press 'q' on the keyboard to quit
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break
# Release the handle to the webcam
video_capture.release()
cv2.destroyAllWindows()
