Automotive Interiors World

INTERVIEW: Motive’s Nyanya Joof on driver monitoring and safety

By Zahra Awan | February 23, 2026 | 6 Mins Read

At the start of the year, Motive launched the AI Dashcam Plus, a single device combining advanced AI capabilities, hands-free communication and integrated hardware. Designed to reduce collisions, the device launched alongside Motive’s AI Road Safety Report, which analyzed 1.2 billion hours of dashcam footage from 2024 to 2025 to uncover when, where and why collisions occurred. AAVI got some insights from Nyanya Joof, head of UK at Motive.

The AI Dashcam Plus is powered by the Qualcomm Dragonwing QCS6490 AI processor, enabling it to run more than 30 AI models simultaneously. How can this multimodel capability reduce latency and false positives in real-time risk detection compared with traditional single-model AI dashcams?
At its core, AI Dashcam Plus is designed to spot risk before something goes wrong – to detect risk faster and prevent more collisions. By using Qualcomm’s Dragonwing QCS6490 processor, the device is designed to run more than 30 high-precision AI models at the same time, with roughly three times the compute power of other leading dashcams. This means AI Dashcam Plus can support an expanded range of detection capabilities, designed to spot more unsafe situations – such as imminent forward collisions, lane swerving and close following – in real time with higher accuracy and less latency.

Instead of analyzing driver behavior, vehicle movement and road conditions one at a time, AI Dashcam Plus is designed to process those signals in parallel – directly on the edge, inside the vehicle. That’s what enables true real-time detection.

Because multiple high-precision models are built to work together, the system can cross-check what it’s seeing across different signals before triggering an alert. This enables faster responses with far fewer false positives, so fleet managers and drivers can get alerts they can trust and act on. That can allow safety teams to shift from reacting after incidents to preventing risk in the moment.
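The cross-checking described above can be pictured as a simple consensus gate: an alert fires only when several independent signals agree. The signal names, scores and thresholds below are illustrative assumptions, not Motive’s actual models or pipeline.

```python
# Illustrative sketch of a consensus gate for alert triggering.
# Signal names and thresholds are assumptions, not Motive's pipeline.

def should_alert(signals: dict, threshold: float = 0.7,
                 min_agreeing: int = 2) -> bool:
    """Fire only when enough independent signals clear the threshold."""
    agreeing = sum(1 for score in signals.values() if score >= threshold)
    return agreeing >= min_agreeing

# Vision alone flags a risk, but nothing corroborates it - no alert:
print(should_alert({"forward_collision": 0.9,
                    "closing_speed": 0.2,
                    "driver_distraction": 0.1}))  # False

# Two signals agree - alert:
print(should_alert({"forward_collision": 0.9,
                    "closing_speed": 0.8,
                    "driver_distraction": 0.1}))  # True
```

Requiring agreement between channels is what trades a small amount of single-signal sensitivity for far fewer false positives.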

How can stereo vision improve distance and speed estimation for forward collision warning and lane swerving alerts compared with monocular camera systems?
Our 2026 AI Road Safety Report, based on 1.2 billion hours of dashcam data, shows collision risk increases as visibility drops. Shorter days, bad weather and congested roads all compound risk, especially for HGV drivers.

Stereo vision is designed specifically for these conditions. The system gains human-like depth perception from using two synchronized, forward-facing cameras. It can more accurately understand how far away objects are and how quickly they’re approaching.

That depth awareness is designed to improve some of the alerts I mentioned earlier, like forward collision warning and lane swerving alerts. Because the AI runs on the edge, those insights translate into immediate, preventative alerts, which can give drivers more time to react and avoid near misses before they become collisions. Compared with single-camera systems, stereo vision is designed to provide a clearer, more reliable picture of the road ahead when seconds matter most.
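The depth gain from two cameras rests on the classic pinhole stereo relation Z = f·B/d (focal length times baseline, divided by disparity) – a monocular camera has no disparity to measure. A minimal sketch, using made-up camera parameters rather than the AI Dashcam Plus’s actual optics:

```python
# Focal length and baseline here are illustrative camera parameters,
# not the AI Dashcam Plus's actual optics.

def depth_from_disparity(focal_px: float, baseline_m: float,
                         disparity_px: float) -> float:
    """Pinhole stereo relation: depth Z = f * B / d."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive")
    return focal_px * baseline_m / disparity_px

def closing_speed_mps(z_prev_m: float, z_now_m: float, dt_s: float) -> float:
    """Positive when the object ahead is getting closer."""
    return (z_prev_m - z_now_m) / dt_s

# A vehicle seen at 8 px disparity, 1000 px focal length, 12 cm baseline
# sits about 15 m ahead; growing disparity means it is closing in:
z1 = depth_from_disparity(1000, 0.12, 8.0)
z2 = depth_from_disparity(1000, 0.12, 8.6)  # one frame (0.1 s) later
print(round(z1, 2), round(closing_speed_mps(z1, z2, 0.1), 2))
```

Tracking how disparity changes frame to frame is what turns a distance estimate into a closing-speed estimate for forward collision warnings.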

AI Dashcam Plus integrates video, audio, telematics, GPS and dual motion sensors. What algorithms or fusion techniques could be used to combine these heterogeneous data sources to better detect complex events such as low-severity collisions or break-ins?
Many safety and security incidents aren’t obvious from a single data source. That’s why AI Dashcam Plus is designed to rely on sensor fusion, combining audio, telematics, GPS and dual motion sensor data to understand context and detect more complex events.

For example, if the system detects the sound of glass breaking, it is built to immediately cross-check that against vibration data and video footage. GPS and telematics then add location and movement context to confirm whether an incident actually occurred.

By correlating signals instead of analyzing them in isolation, the system can detect events that might otherwise be missed, and provide clear evidence when it matters most, especially in edge cases like low-severity collisions or attempted break-ins.
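One way to picture this correlation is a rule that refuses to classify an incident from any single channel. The field names and thresholds below are hypothetical, purely to illustrate the cross-checking idea:

```python
from dataclasses import dataclass

# Field names and thresholds are hypothetical - the point is that no
# single channel can declare an incident on its own.

@dataclass
class SensorFrame:
    glass_break_conf: float  # audio model confidence, 0..1
    vibration_g: float       # peak accelerometer reading
    speed_kmh: float         # from GPS/telematics

def classify_event(f: SensorFrame) -> str:
    if f.glass_break_conf > 0.8 and f.vibration_g > 1.5:
        # Audio and motion agree; movement context decides the type.
        return "break-in" if f.speed_kmh < 1.0 else "collision"
    return "no event"  # a lone signal is not treated as an incident

print(classify_event(SensorFrame(0.9, 2.0, 0.0)))   # break-in
print(classify_event(SensorFrame(0.9, 0.1, 0.0)))   # no event
```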

How can heterogeneous computing on such a platform improve real-time AI inference performance, and what considerations are critical for maintaining low latency in safety-critical applications?
In road safety, latency isn’t just a performance metric; it’s a safety issue. AI Dashcam Plus uses an Android-based architecture to optimize AI workloads across Qualcomm’s CPU, GPU and Hexagon DSP. With each engine handling the tasks it is best suited for, this can translate to improved performance and reduced latency with every update.

This can avoid bottlenecks and allow multiple AI workloads to run simultaneously without delay, which can result in fast, reliable inference even in safety-critical moments.

It also supports our goal of futureproofing the platform. As AI models evolve, workloads can shift to the most efficient engine, which can support faster feature rollout while maintaining the low latency drivers and fleets depend on – enabling future services like two-way calling and voice control without frequent hardware refreshes.
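As a rough illustration of the dispatch idea: heterogeneous scheduling amounts to routing each workload to whichever engine suits its shape. The routing table below is an assumption for illustration, not Motive’s actual scheduler; the CPU/GPU/DSP split mirrors the engines named above.

```python
# Hypothetical routing table: which workload lands on which engine is
# an illustrative guess, not Motive's scheduler.

ENGINE_FOR_WORKLOAD = {
    "control_logic": "CPU",      # branchy, low-parallelism code
    "image_preprocess": "GPU",   # wide data-parallel pixel work
    "nn_inference": "DSP",       # quantized neural-network layers
}

def dispatch(workload: str) -> str:
    """Route a workload to its preferred engine, defaulting to CPU."""
    return ENGINE_FOR_WORKLOAD.get(workload, "CPU")

print(dispatch("nn_inference"), dispatch("telemetry_upload"))  # DSP CPU
```

Keeping the routing in one table is also what lets workloads shift to a different engine later without touching the workloads themselves.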

How can the AI Dashcam Plus’s combination of stereo vision, multimodel edge AI and advanced sensor fusion enhance existing ADAS features such as forward collision warning, lane keeping assist and adaptive cruise control?
Stereo vision enables Motive’s AI to judge distance and closing speed with greater accuracy. As mentioned earlier, AI Dashcam Plus’s stereo vision uses two synchronized road-facing lenses to create human-like depth perception. Multimodel edge AI allows AI Dashcam Plus to analyze driver behavior, vehicle dynamics and real-world conditions in parallel. Sensor fusion adds situational context that a single input can’t provide. Working together, this means AI Dashcam Plus can support an expanded range of detection capabilities, designed to spot more unsafe behaviors in real time with higher accuracy and less latency. The result is a clearer, real-time understanding of what’s happening around the vehicle, without delay.

What types of AI models and data inputs are used to ensure the high accuracy of lane swerving detection, and how do you work to minimize false positives in real-time alerts?
Lane swerving is a strong early indicator of fatigue and collision risk. Our lane swerving model can identify lane swerving and send real-time alerts to managers, with alerts to drivers expected in a future release. By flagging three or more swerves within five minutes at speeds of 50mph [80km/h] or higher and compiling them into a single, clear safety event timeline, this new capability is designed to give managers a holistic look at repeated risky behavior in a short window, which can make it easier to understand driving patterns, coach drivers quickly, and prevent fatigue- and distraction-related collisions.

Accuracy comes from scale and validation. Our AI architecture is defined by a full-stack system, including proprietary hardware, and low-latency validation of model outputs to help eliminate false positives. This design allows us to offer a wide breadth of AI models with high precision and recall.

What challenges did your engineering team face in optimizing AI algorithms for low-latency, real-time processing, and how were these overcome?
The most difficult aspect of developing AI for edge hardware is ensuring the algorithms and models are able to run in real time while fitting within the processing power of the device. We leverage techniques such as quantization, optimization and performance tuning of our models to squeeze the most performance possible out of our hardware. This is where our many years of experience in deploying state-of-the-art AI on edge devices is critical.
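Quantization, one of the techniques named here, trades numeric precision for a smaller, faster model that fits edge hardware. A toy symmetric int8 example, illustrative only – production toolchains quantize per layer (or per channel) using calibration data:

```python
# Toy symmetric int8 quantization: map floats onto [-127, 127] with a
# single scale factor. Illustrative only, not Motive's toolchain.

def quantize_int8(weights):
    scale = max(abs(w) for w in weights) / 127.0
    q = [max(-127, min(127, round(w / scale))) for w in weights]
    return q, scale

def dequantize(q, scale):
    return [v * scale for v in q]

w = [0.5, -1.27, 0.0, 1.0]
q, s = quantize_int8(w)
print(q)                 # [50, -127, 0, 100]
print(dequantize(q, s))  # approximately the original weights
```

Each int8 value takes a quarter of the memory of a float32 weight, which is the kind of saving that lets many models share one edge processor.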


© 2025 UKi Media & Events a division of UKIP Media & Events Ltd