The federal government estimates distracted driving contributes to more than 3,000 fatal vehicle crashes annually in the United States, prompting researchers at the University of Tennessee at Chattanooga to explore new ways of predicting and preventing inattentive driving behavior. By integrating advanced sensing technologies, machine learning algorithms and virtual simulation environments, UTC researchers are working to predict driver distraction—and then use that information to deliver timely, data-driven alerts.
The research takes multiple driver behavioral characteristics into account: seating position, emotional state, steering wheel grip and eye movement. These are measured while test drivers navigate a virtual traffic environment from behind the wheel of a driving simulator housed in the UTC Center for Urban Informatics and Progress (CUIP).
The simulator’s virtual driving scenario is based on data anonymously collected by remote sensors, computing resources and experimental wireless networks in Chattanooga’s CUIP-established smart corridor. Across the corridor’s more than 100 signalized intersections, data is compiled on traffic flow, weather conditions and the movements of vehicles, pedestrians, joggers and other road users that motorists in the area must take into account. Test driver reactions to the simulation are detected through a variety of cameras tracking the driver’s seating position, gaze, eye movements, facial expressions and steering wheel grip.
The project is led by Dr. Maged Shoman, a research assistant professor in Intelligent Transportation Systems with the University of Tennessee-Oak Ridge Innovation Institute (UT-ORII), under the institute’s Energy Storage and Transportation Convergent Research Initiative.
“My research is at the intersection of transportation, deep learning and computer vision. We’re focusing on very challenging problems to make transportation safer, connected and autonomous,” said Shoman, who is based at CUIP.
“We’re able to observe, for instance, how a driver’s seat posture shifts when comparing a distracted state to an attentive one,” Shoman said. “By correlating eye gaze patterns, facial expressions and body language with contextual factors like approaching intersections, changing weather or the presence of occluded pedestrians, we can develop algorithms that accurately recognize and predict inattention.
“When we’re able to measure distraction, we can understand it very well; and from there, we can use this data to predict if a driver will be distracted and, eventually, alert the driver to be more attentive.”
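To make that idea concrete, here is a minimal sketch of how such signals could be fused into a single distracted-versus-attentive prediction. The feature names and the synthetic data are illustrative assumptions, not the CUIP team’s actual pipeline.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Hypothetical per-frame features: [gaze offset (deg), posture shift (cm), grip pressure].
# Synthetic data only; a real system would extract these from camera and sensor streams.
n = 1000
attentive = rng.normal(loc=[2.0, 0.5, 0.8], scale=[1.0, 0.3, 0.1], size=(n, 3))
distracted = rng.normal(loc=[8.0, 2.0, 0.4], scale=[2.0, 0.8, 0.2], size=(n, 3))

X = np.vstack([attentive, distracted])
y = np.array([0] * n + [1] * n)  # 0 = attentive, 1 = distracted

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)

# A simple linear classifier is enough to show the fusion idea; in practice the hard
# work is extracting reliable gaze, posture and grip features in the first place.
clf = LogisticRegression().fit(X_train, y_train)
print(f"Held-out accuracy: {accuracy_score(y_test, clf.predict(X_test)):.2f}")
```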
The research conducted with the CUIP driving simulator uses cameras mounted in the “cockpit” area of the simulator; wearable devices equipped with small cameras and sensors that compile information on driver behavior; and a data-gathering app loaded onto the driver’s mobile phone.
Transitioning from the controlled simulator to live trials along the CUIP smart corridor will present a new set of technical challenges. Real-world data acquisition is subject to unpredictable lighting, weather fluctuations and a wide range of driver behaviors and demographic differences. These conditions necessitate “domain adaptation” techniques, such as transfer learning and fine-tuning of model parameters, to handle variable input quality.
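In practice, one common form of this adaptation is to take a model pretrained on large generic image datasets, freeze its feature-extracting backbone and fine-tune only a small task-specific head on real-world footage. A minimal PyTorch sketch follows; the two-class head and the dummy batch are assumptions for illustration, not the project’s actual code.

```python
import torch
import torch.nn as nn
from torchvision import models

# Transfer learning: start from an ImageNet-pretrained backbone.
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)

# Freeze the pretrained feature extractor so only the new head is trained.
for param in model.parameters():
    param.requires_grad = False

# Replace the classifier head; two classes (attentive / distracted) is an assumption.
model.fc = nn.Linear(model.fc.in_features, 2)

optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()

# One illustrative fine-tuning step on a dummy batch standing in for real-world frames.
frames = torch.randn(8, 3, 224, 224)   # batch of driver-facing camera frames
labels = torch.randint(0, 2, (8,))     # 0 = attentive, 1 = distracted
loss = criterion(model(frames), labels)
loss.backward()
optimizer.step()
print(f"Fine-tuning step loss: {loss.item():.3f}")
```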
In the longer term, a single dashboard-mounted camera connected to a compact, energy-efficient AI module could continuously analyze driver posture, eye movements and facial expressions, issuing real-time, context-aware alerts the instant it detects signs of drifting attention. This system could also interface with vehicle telematics—telecommunications and information-processing technology—to log incident data, further refining prediction models through continuous feedback loops.
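A sketch of how such an on-device alert loop might behave: smooth the per-frame distraction score so a single noisy frame does not trigger an alarm, raise an alert only when the smoothed score crosses a threshold, and log each incident for later model refinement. The thresholds and the score stream here are illustrative assumptions.

```python
ALERT_THRESHOLD = 0.8   # assumed trigger level; a real system would calibrate this
CLEAR_THRESHOLD = 0.5   # hysteresis: require a clear recovery before re-arming
SMOOTHING = 0.2         # exponential moving average weight for each new frame

def alert_loop(score_stream, incident_log):
    """Consume per-frame distraction scores in [0, 1] and raise debounced alerts."""
    smoothed, alert_active = 0.0, False
    for frame, score in enumerate(score_stream):
        smoothed = SMOOTHING * score + (1 - SMOOTHING) * smoothed
        if not alert_active and smoothed >= ALERT_THRESHOLD:
            alert_active = True
            # Telematics-style incident record for the continuous feedback loop.
            incident_log.append({"frame": frame, "score": round(smoothed, 2)})
            print(f"Frame {frame}: ALERT -- attention drifting (score {smoothed:.2f})")
        elif alert_active and smoothed <= CLEAR_THRESHOLD:
            alert_active = False  # re-arm once the driver is attentive again

# Simulated scores standing in for a camera-based model's per-frame output.
scores = [0.1] * 10 + [0.9] * 15 + [0.2] * 10
log = []
alert_loop(scores, log)
print("Logged incidents:", log)
```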
“We cannot guarantee that every alert will prevent a crash,” Shoman said, “but by proactively predicting driver distraction, we increase the probability of avoiding crashes due to inattentive driver behavior.” By alerting drivers at the earliest signs of inattention, he added, the system allows timely course corrections, improving safety not only for drivers but for all vulnerable road users.
###
About Dr. Maged Shoman
Research interests: Deep Learning, Computer Vision, Transportation and Traffic Safety Research, Autonomous and Connected Vehicles, Digital Twins, Smart Cities and Intelligent Transportation Systems.