Drowsiness, blind-spot and attention monitor for driving or handling heavy machinery. It also detects objects in the blind spot via computer vision powered by PyTorch on the Jetson Nano, and includes a crash detection feature.
- DBSE-monitor
- Table of contents
- Introduction:
- Solution:
- Materials:
- Connection Diagram:
- Laptop Test:
- Summary and mini demos:
- Jetson Nano Setup:
- The Final Product:
- Commentary:
We will be tackling two problems: drowsiness while performing tasks such as driving or handling heavy machinery, and the blind spot when driving.
The Centers for Disease Control and Prevention (CDC) reports that 35% of American drivers sleep less than the recommended minimum of seven hours a day. Sleep deprivation mainly affects attention when performing any task and, in the long term, can permanently damage health.
According to a report by the World Health Organization (WHO) (2), falling asleep while driving is one of the leading causes of traffic accidents: up to 24% of accidents are caused by falling asleep. According to the US DMV (Department of Motor Vehicles) (3) and the NHTSA (National Highway Traffic Safety Administration) (4), 20% of accidents are related to drowsiness, putting it at the same level as alcohol-related accidents, sometimes with even worse consequences.
The NHTSA also notes that being angry or in an altered state of mind can lead to more dangerous and aggressive driving (5), endangering the driver's life.
We will create a system able to detect a person's drowsiness level, with the aim of notifying the user about his state and whether he is fit to drive.
At the same time, it will monitor the driver's attention and whether he is falling asleep at the wheel. If it detects that he is getting drowsy, a powerful alarm will sound with the objective of waking the driver.
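The alarm logic can be sketched as a simple timer over per-frame eye-state predictions. This is a minimal illustration, not the repository's actual code: the `eyes_closed` flag stands in for the PyTorch model's per-frame output, and the 2-second limit is the threshold described later in this README.

```python
import time

class DrowsinessAlarm:
    """Tracks how long the driver's eyes have stayed closed across frames."""

    def __init__(self, limit=2.0, clock=time.monotonic):
        self.limit = limit          # seconds of closed eyes before the alarm fires
        self.clock = clock          # injectable clock, useful for testing
        self.closed_since = None    # timestamp when the eyes first closed

    def update(self, eyes_closed):
        """Feed one per-frame prediction; returns True when the alarm should sound."""
        now = self.clock()
        if not eyes_closed:
            self.closed_since = None
            return False
        if self.closed_since is None:
            self.closed_since = now
        return (now - self.closed_since) >= self.limit
```

In the real system, a positive result would drive the Bluetooth speaker to wake the driver.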
Additionally it will detect small vehicles and motorcycles in the automobile’s blind spots.
The system will also include an accelerometer that triggers a call to the emergency services if the car has an accident, so that the emergency can be attended to quickly.
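One simple way to flag a crash from accelerometer readings is to watch for a spike in total G-force magnitude. This is a hedged sketch: the 4 g threshold is an illustrative assumption, not a calibrated value, and the axis readings are assumed to already be converted to units of g (e.g. from a VMA204 module).

```python
import math

CRASH_G_THRESHOLD = 4.0  # assumed threshold in g; a real value needs calibration

def is_crash(ax, ay, az):
    """Return True if the total acceleration magnitude (in g) exceeds the threshold.

    ax, ay, az are the three axis readings from the accelerometer,
    already converted to units of g.
    """
    magnitude = math.sqrt(ax * ax + ay * ay + az * az)
    return magnitude > CRASH_G_THRESHOLD
```

On a positive detection, the Jetson Nano could use Twilio (listed in the Software section) to place the emergency call or message.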
Because an altered psychological state can lead to dangerous driving, we also take care of the driver's state by analyzing the emotions on his face and playing music intended to generate a positive response.
Current Solutions:

- Mercedes-Benz Attention Assist uses the car's engine control unit to monitor changes in steering and other driving habits and alerts the driver accordingly.
- Lexus placed a camera in the dashboard that tracks the driver's face, rather than the vehicle's behavior, and alerts the driver if his or her movements seem to indicate sleep.
- Volvo's Driver Alert Control is a lane-departure system that monitors and corrects the vehicle's position on the road, then alerts the driver if it detects any drifting between lanes.
- Saab uses two cameras in the cockpit to monitor the driver's eye movement and alerts the driver with a text message in the dash, followed by a stern audio message if he or she still seems sleepy.
As you can see, these are all premium brands, and there is not a single plug-and-play system that works for every car. This is our opportunity: most cars on the road are not in that price range and do not have these systems.
Hardware:
- NVIDIA Jetson Nano. x1. https://www.nvidia.com/en-us/autonomous-machines/embedded-systems/jetson-nano/
- Power Inverter for car. https://www.amazon.com/s?k=power+inverter+truper&ref=nb_sb_noss_2
- ESP32. https://www.adafruit.com/product/3405
- OLED display. https://www.amazon.com/dp/B072Q2X2LL/ref=cm_sw_em_r_mt_dp_U_TMGqEb9YAGJ5Q
- Any Bluetooth Speaker or Bluetooth Audio Car System. x1. https://www.amazon.com/s?k=speaker&s=price-asc-rank&page=2&qid=1581202023&ref=sr_pg_2
- USB TP-Link USB Wifi Adapter TL-WN725N. x1. https://www.amazon.com/dp/B008IFXQFU/ref=cm_sw_em_r_mt_dp_U_jNukEbCWXT0E4
- UGREEN USB Bluetooth 4.0 Adapter x1. https://www.amazon.com/dp/B01LX6HISL/ref=cm_sw_em_r_mt_dp_U_iK-BEbFBQ76BW
- HD webcam. x1. https://canyon.eu/product/cne-cwc2/
- 32 GB MicroSD Card. x1. https://www.amazon.com/dp/B06XWN9Q99/ref=cm_sw_em_r_mt_dp_U_XTllEbK0VKMAZ
- 5V-4A AC/DC Adapter Power Supply Jack Connector. x1. https://www.amazon.com/dp/B0194B80NY/ref=cm_sw_em_r_mt_dp_U_ISukEbJN7ABK3
- VMA204. x1. https://www.velleman.eu/products/view?id=435512
Software:
- Pytorch: https://pytorch.org/
- JetPack 4.3: https://developer.nvidia.com/jetson-nano-sd-card-image-r3231
- YOLOv3: https://pjreddie.com/darknet/yolo/
- OpenCV: https://opencv.org/
- Twilio: https://www.twilio.com/
- Arduino IDE: https://www.arduino.cc/en/Main/Software
- Mosquitto MQTT: https://mosquitto.org/
This is the connection diagram of the system:
To test the code on a computer, the first step is to have a Python environment manager, such as Anaconda.
https://www.anaconda.com/distribution/
First we will create a suitable environment for PyTorch.
conda create --name pytorch
To activate the environment, run the following command:
conda activate pytorch
The PyTorch page has a small widget that generates the installation command according to your operating system and Python environment manager; in my case the configuration is as follows.
conda install pytorch torchvision cudatoolkit=10.2 -c pytorch
The other packages we need are the following:
pip install opencv-python matplotlib tqdm python-vlc Pillow
In any case, we attach a requirements.txt file listing all the packages in our environment.
Inside the Drowsiness, Emotion detection and YoloV3 folders you will find a file "Notebook.ipynb" containing the code to run the programs in Jupyter Notebook; I also attach in each folder a file called "notebook.py" with the same code in `.py` format.
Command to install Jupyter Notebook:
conda install -c conda-forge notebook
Command to start Jupyter Notebook:
jupyter notebook
All the demos we are going to show are executed from a Jupyter notebook and focus on the functionality of the AI models; the demo with the hardware is shown at the end of the repository.
The function of this model is to detect when the driver's eyes have been closed for more than 2 seconds, or when he is distracted from the road (for example, looking at his cell phone).
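A common way to decide whether the eyes are closed from facial landmarks is the eye aspect ratio (EAR). This is one possible approach sketched for illustration, not necessarily the exact method the model uses, and the ~0.2 closed-eye threshold mentioned in the comment is an assumption to tune per camera.

```python
import math

def eye_aspect_ratio(eye):
    """Compute the eye aspect ratio (EAR) from six (x, y) eye landmarks.

    `eye` is ordered p1..p6 around the eye contour, as in the common
    68-point facial landmark layout. A low EAR (roughly below 0.2)
    suggests a closed eye.
    """
    def dist(a, b):
        return math.hypot(a[0] - b[0], a[1] - b[1])

    v1 = dist(eye[1], eye[5])  # first vertical distance
    v2 = dist(eye[2], eye[4])  # second vertical distance
    h = dist(eye[0], eye[3])   # horizontal distance
    return (v1 + v2) / (2.0 * h)
```

Feeding the per-frame open/closed decision into a 2-second timer gives the alarm behavior described above.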
Details: https://github.com/altaga/DBSE-monitor/blob/master/Drowsiness
The function of this model is to detect objects less than 3 meters from the car in the blind spot.
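One way to estimate whether a detected vehicle is within the 3 m zone is the pinhole-camera approximation, using the bounding-box height YOLO returns. This is a sketch under stated assumptions: the focal length and real object heights below are illustrative placeholders that need per-camera calibration.

```python
FOCAL_LENGTH_PX = 700.0    # assumed camera focal length in pixels
REAL_HEIGHTS_M = {         # assumed typical real-world object heights
    "motorbike": 1.2,
    "car": 1.5,
}
BLIND_SPOT_LIMIT_M = 3.0   # alert distance from this README

def estimate_distance_m(label, bbox_height_px):
    """Approximate distance with the pinhole model: d = f * H_real / h_pixels."""
    real_height = REAL_HEIGHTS_M[label]
    return FOCAL_LENGTH_PX * real_height / bbox_height_px

def in_blind_spot(label, bbox_height_px):
    """True when the estimated distance is within the 3 m alert zone."""
    return estimate_distance_m(label, bbox_height_px) <= BLIND_SPOT_LIMIT_M
```

The `label` and `bbox_height_px` values would come from the YOLOv3 detections; larger boxes mean closer objects.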
Details: https://github.com/altaga/DBSE-monitor/blob/master/YoloV3
The function of this model is to detect the driver's emotions at all times and through musical responses (songs) try to correct the driver's mental state, in order to keep him neutral or in a good mood while driving, thus reducing the risk of accidents.
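The musical response can be sketched as a simple emotion-to-playlist mapping. The labels and playlist names below are hypothetical placeholders for whatever the emotion model and music library actually provide.

```python
# Hypothetical mapping from the emotion model's label to a corrective playlist.
PLAYLISTS = {
    "angry":   "calm_playlist",
    "sad":     "upbeat_playlist",
    "happy":   "neutral_playlist",
    "neutral": "neutral_playlist",
}

def pick_playlist(emotion):
    """Choose a playlist intended to steer the driver toward a neutral/good mood."""
    return PLAYLISTS.get(emotion, "neutral_playlist")
```

The chosen playlist could then be played through python-vlc (installed earlier) over the car's Bluetooth audio.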
Details: https://github.com/altaga/DBSE-monitor/blob/master/Emotion%20detection
The setup process to run everything on the Jetson Nano is described in this folder:
https://github.com/altaga/DBSE-monitor/tree/master/Jetson
Product:
Product installed inside the car:
Notifications:
Sorry, GitHub does not allow embedded videos.
I would consider the product finished, as it only needs a few additional touches on the industrial-design side to become a commercial product, and perhaps some electrical-engineering work to use only the components we need. That said, this is an upgrade of a project that a couple of friends and I are developing, and it was ideal for me to use as a springboard to develop the idea much further. It has the potential to become a commercially available option for smart cities, as the transition to autonomous or even smart vehicles will take a while in most cities.
The middle ground between analog, primarily mechanical private transport and a more "smart" vehicle is a huge opportunity, as the transition will take several years and most people cannot yet afford it. Thank you for reading.
Links:
(1) https://medlineplus.gov/healthysleep.html
(2) http://www.euro.who.int/__data/assets/pdf_file/0008/114101/E84683.pdf
(3) https://dmv.ny.gov/press-release/press-release-03-09-2018