Automotive Interiors World
Features

How a vision-based system is automatically detecting inattention at the wheel

By Helen Norman | November 8, 2019 | 5 min read

ARRK Engineering has carried out a number of tests using deep learning and CNN models to identify driver distraction caused by activities such as eating, drinking and making phone calls. Here’s what the company found out.

According to a report by the World Health Organization (WHO), each year about 1.35 million people die in traffic accidents and another 20 to 50 million are injured. One of the main causes is driver inattention.

For years, the automotive industry has installed systems that warn of driver fatigue. These driver assistance systems analyze, for example, the driver’s viewing direction and automatically detect deviations from normal driving behavior. “Existing warning systems can only correctly identify specific hazard situations,” reports Benjamin Wagner, senior consultant for Driver Assistance Systems at ARRK Engineering. “But during some activities like eating, drinking and phoning, the driver’s viewing direction remains aligned with the road ahead.”

For that reason, ARRK Engineering ran a series of tests to identify a range of driver postures, so that systems can automatically detect mobile phone use, eating and drinking. For the system to correctly identify all types of visual, manual and cognitive distraction, ARRK tested various CNN models with deep learning and trained them with the collected data.

Creation of the first image dataset for training the systems

In the test setup, two cameras with active infrared lighting were positioned on the A-pillars of a test vehicle, to the left and right of the driver. Both cameras ran at 30 Hz and delivered 8-bit grayscale images at 1280 x 1024 pixel resolution.

“The cameras were also equipped with an IR long-pass filter to block out most of the visual spectrum light at wavelengths under 780nm,” explains Wagner. “In this manner we made sure that the captured light came primarily from the IR LEDs and that their full functionality was assured during day and night time.”

In addition, blocking visible daylight prevented shadow effects in the driver area that might otherwise have led to mistakes in facial recognition. A Raspberry Pi 3 Model B+ sent a trigger signal to both cameras to synchronize the moment of image capture.
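
The article does not include ARRK’s capture software, but the synchronization step is simple to sketch. The snippet below is purely illustrative: it assumes a single GPIO pin (pin 18 here, an arbitrary choice) wired to the trigger inputs of both cameras and pulsed at the 30 Hz frame rate using the standard RPi.GPIO library.

```python
# Hypothetical sketch: pulse one GPIO pin at ~30 Hz so that both cameras,
# wired to the same trigger line, expose at the same instant.
# Pin number and pulse width are illustrative assumptions, not ARRK's setup.
import time
import RPi.GPIO as GPIO

TRIGGER_PIN = 18        # assumed BCM pin wired to both cameras' trigger inputs
FRAME_RATE_HZ = 30      # matches the 30 Hz capture rate described in the article
PERIOD = 1.0 / FRAME_RATE_HZ

GPIO.setmode(GPIO.BCM)
GPIO.setup(TRIGGER_PIN, GPIO.OUT, initial=GPIO.LOW)

try:
    while True:
        GPIO.output(TRIGGER_PIN, GPIO.HIGH)   # rising edge starts exposure
        time.sleep(0.001)                     # short trigger pulse (1 ms, assumed)
        GPIO.output(TRIGGER_PIN, GPIO.LOW)
        time.sleep(PERIOD - 0.001)            # wait out the rest of the frame period
except KeyboardInterrupt:
    GPIO.cleanup()
```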

With this setup, images were captured of the postures of 16 test persons in a stationary vehicle. To generate a wide range of data, the test persons differed in gender, age and headgear, and they used different mobile phone models and consumed a variety of foods and beverages.

“We set up five distraction categories that driver postures could later be assigned to. These were: ‘no visible distraction,’ ‘talking on smartphone,’ ‘manual smartphone use,’ ‘eating or drinking’ and ‘holding food or beverage,’” explains Wagner. “For the tests, we instructed the test persons to switch between these activities during simulated driving.”

After capture, the images from the two cameras were categorized and used for model training.
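
For illustration, the five categories can be mapped to integer class labels and the captured frames wrapped in a dataset for training. The folder layout, file format and class names below are assumptions made for this sketch, not ARRK’s actual data organization.

```python
# Illustrative sketch of a labeled-image dataset for the five distraction classes.
# Folder names, file format and label order are assumptions for the example.
from pathlib import Path
from PIL import Image
from torch.utils.data import Dataset

CLASSES = [
    "no_visible_distraction",
    "talking_on_smartphone",
    "manual_smartphone_use",
    "eating_or_drinking",
    "holding_food_or_beverage",
]

class DriverPostureDataset(Dataset):
    def __init__(self, root, transform=None):
        self.samples = []                      # list of (image_path, class_index)
        for idx, name in enumerate(CLASSES):
            for path in sorted(Path(root, name).glob("*.png")):
                self.samples.append((path, idx))
        self.transform = transform

    def __len__(self):
        return len(self.samples)

    def __getitem__(self, i):
        path, label = self.samples[i]
        img = Image.open(path).convert("L")    # 8-bit grayscale, as captured
        if self.transform:
            img = self.transform(img)
        return img, label
```

In practice a transform such as torchvision.transforms.ToTensor (and, if stock three-channel architectures are used unmodified, transforms.Grayscale(num_output_channels=3)) would be passed in when the dataset is constructed.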

Training and testing the image classification systems

Four modified CNN models were used to classify driver postures: ResNeXt-34, ResNeXt-50, VGG-16 and VGG-19. The last two models are widely used in practice, while ResNeXt-34 and ResNeXt-50 contain a dedicated structure for processing parallel paths.
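
As a rough sketch of how such off-the-shelf architectures can be adapted to this five-class task, the snippet below builds VGG-16, VGG-19 and ResNeXt-50 from torchvision and swaps their final classifier layer for a five-class output. ResNeXt-34 is not a stock torchvision model and would need a custom definition, so it is omitted here; none of this is ARRK’s actual model code.

```python
# Hedged sketch: adapting off-the-shelf CNNs to the five distraction classes.
# ResNeXt-34 has no stock torchvision implementation, so only three of the
# four architectures named in the article are built here.
import torch.nn as nn
from torchvision import models

NUM_CLASSES = 5

def build_model(name: str) -> nn.Module:
    if name == "vgg16":
        m = models.vgg16()
        m.classifier[6] = nn.Linear(m.classifier[6].in_features, NUM_CLASSES)
    elif name == "vgg19":
        m = models.vgg19()
        m.classifier[6] = nn.Linear(m.classifier[6].in_features, NUM_CLASSES)
    elif name == "resnext50":
        m = models.resnext50_32x4d()
        m.fc = nn.Linear(m.fc.in_features, NUM_CLASSES)
    else:
        raise ValueError(f"unknown model: {name}")
    return m
```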

To train the system, ARRK ran 50 epochs using the Adam optimizer, an adaptive learning rate optimization algorithm. In each epoch, the CNN model had to assign the test persons’ postures to the defined categories. With each step, this categorization was adjusted via gradient descent, so that the error rate could be lowered continuously.
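
The article only states the optimizer and the epoch count; the learning rate, batch size and data loading below are placeholder assumptions. A bare-bones training loop along those lines might look like this:

```python
# Minimal training-loop sketch: 50 epochs with the Adam optimizer, as described.
# Learning rate, batch size and DataLoader settings are assumed values.
import torch
from torch import nn
from torch.utils.data import DataLoader

def train(model, train_dataset, epochs=50, lr=1e-4, batch_size=32, device="cpu"):
    loader = DataLoader(train_dataset, batch_size=batch_size, shuffle=True)
    criterion = nn.CrossEntropyLoss()
    optimizer = torch.optim.Adam(model.parameters(), lr=lr)
    model.to(device)
    model.train()
    for epoch in range(epochs):
        running_loss = 0.0
        for images, labels in loader:
            images, labels = images.to(device), labels.to(device)
            optimizer.zero_grad()
            loss = criterion(model(images), labels)  # assign postures to the 5 classes
            loss.backward()                          # gradient of the classification error
            optimizer.step()                         # Adam update step
            running_loss += loss.item()
        print(f"epoch {epoch + 1}/{epochs}  mean loss {running_loss / len(loader):.4f}")
    return model
```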

After model training, a dedicated test dataset was used to calculate the confusion matrix, which allowed the error rate per driver posture to be analyzed for each CNN model.
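
In outline, that evaluation step amounts to computing a confusion matrix on the held-out test set and reading off a per-class error rate. The sketch below uses scikit-learn for the matrix itself and is an illustration, not ARRK’s evaluation code.

```python
# Sketch of the evaluation step: build the confusion matrix on the test set
# and derive a per-class error rate from it.
import numpy as np
import torch
from torch.utils.data import DataLoader
from sklearn.metrics import confusion_matrix

@torch.no_grad()
def evaluate(model, test_dataset, device="cpu"):
    loader = DataLoader(test_dataset, batch_size=32)
    model.to(device).eval()
    y_true, y_pred = [], []
    for images, labels in loader:
        logits = model(images.to(device))
        y_pred.extend(logits.argmax(dim=1).cpu().tolist())
        y_true.extend(labels.tolist())
    cm = confusion_matrix(y_true, y_pred)            # rows: true class, cols: predicted
    per_class_error = 1.0 - np.diag(cm) / cm.sum(axis=1)
    return cm, per_class_error
```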

“The use of two cameras, each with a separately trained CNN model, enables ideal case differentiation for the left and right side of the face,” explains Wagner. “Thanks to this process, we were able to identify the system with the best performance in recognizing the use of mobile phones and consumption of food and beverages.”

Evaluation of the results showed that the ResNeXt-34 and ResNeXt-50 models achieved the highest classification accuracy: 92.88% for the left camera and 90.36% for the right. This is fully competitive with existing solutions for detecting driver fatigue.
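
The article does not say how the outputs of the left and right models are combined at runtime. One common option, shown here purely as an assumption, is to average the two models’ softmax probabilities and take the most likely class.

```python
# Purely illustrative late-fusion sketch: the article describes one model per
# camera but does not specify how their outputs are combined. Averaging the
# softmax probabilities is one common option, assumed here for illustration.
import torch
import torch.nn.functional as F

@torch.no_grad()
def fused_prediction(model_left, model_right, img_left, img_right):
    p_left = F.softmax(model_left(img_left), dim=1)
    p_right = F.softmax(model_right(img_right), dim=1)
    p = (p_left + p_right) / 2.0          # average the two cameras' class probabilities
    return p.argmax(dim=1)                # index of the predicted distraction category
```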

Using this information, ARRK has extended its training database, which now contains around 20,000 labeled eye data records. On this basis, it is possible to develop an automated vision-based system for validating driver monitoring systems.

ARRK Engineering’s experts are already planning another step to further reduce the error rate. “To further improve accuracy, we will use other CNN models in a next project,” notes Wagner. “Besides evaluation of different classification models, we will analyze whether the integration of associated object positions from the camera image can achieve further improvements.”

In this context, approaches based on bounding box detection and semantic segmentation will be considered. In addition to classification, these provide varying levels of detail on the localization of objects. In this way, ARRK can improve the accuracy of driver assistance systems for the automatic detection of driver distraction.
