Jaguar Land Rover (JLR) has developed a contactless touchscreen interface, in conjunction with Cambridge University, which it hopes will reduce the need for drivers to take their eyes off the road when operating vehicle systems, while also curbing the spread of viruses and bacteria.
The auto maker said that the patented technology, known as ‘predictive touch’, uses artificial intelligence and sensors to predict a user’s intended target on the touchscreen – whether that’s satellite navigation, temperature controls or entertainment settings – without the user needing to touch the screen.
The system is part of Jaguar Land Rover’s Destination Zero vision, which is characterized by a desire to make its vehicles safer and the environment cleaner and healthier. It said that technologies such as predictive touch represent a step along the road to addressing the wider mobility landscape, encompassing how customers connect with mobility services and the infrastructure required to enable fully integrated, autonomous vehicles in cities.
According to JLR, lab tests and on-road trials showed the predictive touch technology could reduce a driver’s touchscreen interaction effort and time by up to 50%. It highlighted that poor road surfaces often cause vibrations that make it difficult to select the correct button on a touchscreen, meaning drivers must take their attention away from the road, increasing the risk of an accident.
JLR said that the use of AI allows the system to determine the item the user intends to select on the screen early in the pointing task, speeding up the interaction. A gesture tracker uses vision-based or radio frequency-based sensors, which are increasingly common in consumer electronics, to follow the user’s hand. The system then combines contextual information such as user profile, interface design and environmental conditions with data available from other sensors, such as an eye-gaze tracker, to infer the user’s intent in real time.
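To make the idea concrete, here is a deliberately simplified sketch of how such intent inference might work. JLR’s actual algorithm is not public, so everything below – the linear extrapolation of the pointing trajectory, the fixed gaze weighting, and the button layout – is a hypothetical illustration, not the company’s implementation.

```python
import math

def extrapolate(trajectory):
    """Linearly extrapolate the finger's pointing trajectory one step ahead.

    `trajectory` is a list of (x, y) screen coordinates sampled as the
    finger approaches the display; a real system would track this in 3D.
    """
    (x0, y0), (x1, y1) = trajectory[-2], trajectory[-1]
    return (2 * x1 - x0, 2 * y1 - y0)

def predict_target(trajectory, gaze, buttons, gaze_weight=0.3):
    """Pick the on-screen control the user most likely intends to select.

    Each button is scored by its distance from the extrapolated finger
    position, blended with its distance from the gaze point. A production
    system would use a learned model over many more signals (user profile,
    vibration level, interface layout) rather than this fixed blend.
    """
    px, py = extrapolate(trajectory)

    def cost(center):
        cx, cy = center
        d_finger = math.hypot(px - cx, py - cy)
        d_gaze = math.hypot(gaze[0] - cx, gaze[1] - cy)
        return (1 - gaze_weight) * d_finger + gaze_weight * d_gaze

    return min(buttons, key=lambda name: cost(buttons[name]))

# Hypothetical screen layout: three controls along the top of the display.
buttons = {"nav": (100, 50), "climate": (300, 50), "media": (500, 50)}

# Finger moving up toward the climate control; gaze resting near it too.
trajectory = [(260, 200), (280, 120)]
print(predict_target(trajectory, gaze=(310, 60), buttons=buttons))  # climate
```

The key property this toy example shares with the described system is that the prediction is available *before* contact: the finger never has to land on the screen, so road vibration affecting the final centimetres of the pointing motion no longer causes mis-selections.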
The auto maker stated that the software-based solution for contactless interactions has reached high technology readiness levels and can be seamlessly integrated into existing touchscreens and interactive displays, so long as the correct sensor data is available to support the machine learning algorithm.