Empathic AI in Response to Stressful Driving Situations
Project Stories · 9 min read · July 1, 2025

User study exploring how voice-based AI assistants can provide emotional support to drivers after stressful driving events

UX Research · Affective Computing · Human-AI Interaction · Automotive · User Study

Introduction

Empathic AI in Response to Stressful Driving Situations is a research project from the 6th semester (SS 2025) at the Technische Hochschule Ingolstadt by Hamza Dursun, Selin Durmus, and Mona Jeske.

The project investigates whether an empathic voice assistant can reduce drivers' perceived stress, provide emotional support, and improve the acceptance of such systems after stressful driving situations.

While modern Advanced Driver Assistance Systems (ADAS) increasingly monitor the driver's state, the step beyond pure detection – the emotional support of the driver – remains largely unexplored.

Research Poster Overview


Problem Statement

Stressful driving events such as near-accidents or high cognitive load can impair both driving safety and emotional well-being. While physiological stress detection has already been researched, there is little empirical work on how voice-based AI assistants can help regulate stress and provide emotional support after such events.

This study bridges the gap between Affective Computing and In-Vehicle Interaction Design.

Main Hypotheses:

  • H1: Participants report lower perceived stress with an empathic AI assistant
  • H2: Participants perceive higher emotional support from the system
  • H3: Participants show greater willingness to use the empathic AI assistant in real driving situations
  • H4: Participants feel a higher sense of safety during stressful driving situations with AI support

Methodology

Study Design

The study used a within-subjects design with 20 participants (aged 25–45). Each participant completed a driving session with two stress-inducing scenarios in a high-fidelity hexapod simulator:

Scenario 1: Challenging Maneuver (Sharp Cornering)

A demanding curve sequence that required advanced driving skills and induced technical stress.

Scenario 2: Sudden Hazard

A critical event where a child unexpectedly appeared on the road, requiring immediate braking. The vehicle's automatic braking system was deactivated during this event.

In the Hexapod Simulator

Each scenario was conducted once with and once without the empathic AI voice assistant, with the order balanced across participants.

Voice-AI Intervention

The empathic voice assistant delivered a standardized message, about 30 seconds long, immediately after the stressful driving event. The message aimed to support emotion regulation and the perceived sense of control through calming, reinforcing, and affirming statements.

Example Message:

"That was a close moment, but you handled it very well. You stayed calm and made the right decisions."

The design is based on established psychological theories:

  • Emotion Regulation Theory emphasizes the importance of cognitive reappraisal and social support in stress management
  • The calming and slow speech output was based on relaxation techniques to reduce physiological arousal

Technical Setup

Experimental Setup

The experimental setup consisted of:

  • Simulation Environment: Driving route developed with IPG CarMaker and imported into the main simulation software
  • User Interface: Integration of visual animation and voice assistant, designed in Adobe XD, displayed on a tablet in the hexapod
  • Remote Control: Tablet connected to experimenter computer via TeamViewer for precise control
  • Communication: Smartphone in the simulator connected to lab computer via Discord

Data Collection

Physiological Data (Heart Rate)

Heart rate was measured continuously with a Fitbit wristband throughout the experiment. The data was collected and synchronized via a self-developed Fitbit application.

Analyzed Time Periods:

  • Baseline: 2-minute period before scenario start
  • Stress Peak: 30-second interval immediately after stress event
  • Recovery: 60-second period after AI intervention or after event end

Calculated Metrics:

  • Stress Induction (ΔHR_stress): Change in heart rate from Baseline to Stress Peak
  • Recovery Amount (ΔHR_recovery): Decrease in heart rate from Stress Peak to Recovery Phase
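Both metrics are differences between window means over the raw HR stream. A minimal Python sketch, assuming HR arrives as (timestamp-in-seconds, bpm) pairs; the function and variable names are illustrative, not taken from the study's code:

```python
from statistics import mean

def window_mean(hr_series, start, end):
    """Mean heart rate (bpm) over samples with start <= timestamp < end."""
    return mean(bpm for t, bpm in hr_series if start <= t < end)

def stress_metrics(hr_series, baseline, peak, recovery):
    """Compute the two study metrics from (timestamp_s, bpm) samples.

    baseline, peak, recovery are (start, end) windows in seconds,
    e.g. a 2-min baseline, a 30-s stress peak, a 60-s recovery phase.
    """
    hr_base = window_mean(hr_series, *baseline)
    hr_peak = window_mean(hr_series, *peak)
    hr_rec = window_mean(hr_series, *recovery)
    return {
        "delta_hr_stress": hr_peak - hr_base,   # stress induction
        "delta_hr_recovery": hr_rec - hr_peak,  # negative = HR dropped
    }
```

Signing the recovery metric as recovery-minus-peak keeps it consistent with the negative bpm values reported in the results below.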

Fitbit Heart Rate Monitoring Application

Subjective Data

After each scenario, participants filled out standardized questionnaires (7-point Likert scales):

  • Perceived Stress
  • Perceived Emotional Support
  • Perceived Distraction
  • Perceived Safety and Control

After the second scenario, a system evaluation was conducted to assess:

  • User Acceptance
  • Perceived Empathy
  • Trust in the Voice Assistant

Results

Physiological Results (Heart Rate)

Stress Induction Comparison

Heart Rate Change after Scenario

Both scenarios induced similar stress levels:

  • Curve Scenario: +4.76 bpm (SD = 5.33)
  • Sudden Hazard: +5.63 bpm (SD = 6.49)
  • Statistics: t(19) = -0.504, p = 0.6198 (not significant)

This was intended by the experimental design, which aimed to ensure that both scenarios generated comparable physiological stress levels.

Recovery Phase (H1)

Heart Rate Recovery with and without AI

  • With AI: -3.20 bpm (SD = 4.42)
  • Without AI: -4.12 bpm (SD = 4.69)
  • Statistics: t(19) = -0.830, p = 0.7915 (not significant)

Result: The presence of the AI assistant did not lead to a statistically significant improvement in physiological recovery. H1 was not supported.
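The t(19) statistics above are consistent with paired-samples t-tests over the 20 participants' per-person difference scores. A stdlib-only sketch of that statistic (function name and data are hypothetical):

```python
from math import sqrt
from statistics import mean, stdev

def paired_t(with_ai, without_ai):
    """Paired-samples t statistic and degrees of freedom.

    Each list holds one value per participant; the test runs on the
    per-participant differences, as in a within-subjects design.
    """
    diffs = [a - b for a, b in zip(with_ai, without_ai)]
    n = len(diffs)
    t = mean(diffs) / (stdev(diffs) / sqrt(n))
    return t, n - 1
```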

Subjective Results

Comparison of Questionnaire Results

H2: Emotional Support ✓ SUPPORTED

  • With AI: M = 5.11 (SD = 0.76)
  • Without AI: M = 2.42 (SD = 0.98)
  • Statistics: p < .001, d_z = 1.38 (very large effect)

Perceived emotional support was significantly higher in the AI condition.
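The effect size d_z reported here is Cohen's d for paired samples: the mean per-participant difference divided by the standard deviation of those differences. A short sketch with illustrative names:

```python
from statistics import mean, stdev

def cohens_dz(with_ai, without_ai):
    """Cohen's d_z for a within-subjects comparison."""
    diffs = [a - b for a, b in zip(with_ai, without_ai)]
    return mean(diffs) / stdev(diffs)
```

For paired designs, d_z also equals t / sqrt(n), so it can be recovered from a reported t value and sample size.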

System Helpfulness

  • With AI: M = 4.05 (SD = 0.82)
  • Without AI: M = 2.16 (SD = 1.05)
  • Statistics: p = .002, d_z = 0.87

H4: Sense of Safety (Weak Trend)

  • With AI: M = 4.53 (SD = 0.91)
  • Without AI: M = 4.00 (SD = 0.89)
  • Statistics: p = .403, d_z = 0.26 (not significant, trend only)

H3: User Acceptance ✓ SUPPORTED

Distribution of User Preferences

85% (17/20) of participants reported feeling better supported with the voice assistant. Only 15% (3/20) saw no difference.

Qualitative Feedback

What Participants Appreciated:

Positive Qualitative Themes

Voice Tone & Empathy:

  • "Felt genuinely understood." (12/20)
  • "The AI sounded calm and reassuring, which reduced my anxiety." (9/20)

Timing & Contextual Fit:

  • "Receiving the message immediately after the event felt just right." (15/20)

What Was Criticized:

Negative Qualitative Themes

Conciseness & Clarity:

  • "The message was too long and repetitive." (5/20)
  • "Shorter prompts with bullet points would be ideal." (7/20)

Timing Issues:

  • "Sometimes it interrupted my focus—would help if I could delay." (4/20)

Customization & Control:

  • "Would be better if it addressed me by name or adapted to my driving style." (8/20)
  • "An option to switch to visual cues would improve usability." (6/20)

Discussion & Interpretation

The study shows a dissociation between physiological measurements and subjective evaluations:

  • Physiological: No significant improvement in heart rate recovery
  • Subjective: Strong and significant improvements in emotional support and system acceptance

Possible Explanations:

  1. Cognitive Reappraisal: The empathic AI assistant likely enabled a reinterpretation of the stressful event, reducing the perceived threat even if physiological arousal remained unchanged.

  2. Expectation and Novelty: Interaction with an empathic voice agent might have triggered positive affect through expectation violation and novelty.

  3. Attention Allocation: The AI's verbal support might have redirected participants' attention from internal stress signals to external reassurance.

  4. Self-Report Sensitivity: Subjective measures (Likert scales) might be more sensitive to perceived social presence than physiological signals like heart rate.


Limitations & Future Work

Identified Limitations:

  1. Physiological Measurement: Only Heart Rate (HR) was measured. Heart Rate Variability (HRV) would provide more robust insights into autonomic nervous system regulation.

  2. Experimental Design: No full balancing of scenario type and intervention. Future studies should use Latin Square or fully balanced designs.

  3. Message Length & Timing: Several participants found the message too long or repetitive. Delayed empathic reactions could reactivate stress instead of reducing it.

  4. Risk Compensation Effect: Reassuring feedback could lead to overestimation of one's abilities or reduced vigilance.

  5. Sample Size: N=20 is consistent with exploratory HCI studies but should be confirmed with larger, more diverse samples.

Future Research Directions:

  1. Sophisticated Sensors: Integration of HRV, Electrodermal Activity (EDA), and other indicators

  2. Adaptive Systems: Dynamic adaptation of content, timing, and modality (voice, haptic feedback, ambient light) based on real-time affect sensing

  3. Longitudinal Studies: Observation over weeks/months to assess long-term effects on trust and well-being

  4. Personalization: Consideration of individual personality traits and preferences


Design Implications

Based on the results, the following design recommendations arise for empathic in-vehicle systems:

  1. User Control & Customization: Adjustable message length, volume, and feedback modality (visual vs. auditory)

  2. Multimodal Integration: Combination of voice feedback with subtle visual or haptic cues (e.g., calming animations, ambient lighting)

  3. Context-Aware Safety Protocols: Context awareness to avoid false reassurance in actually dangerous situations

  4. Conciseness & Timing: Shorter, better-timed, and adaptive support


Ethical Considerations

While empathic AI can improve perceived emotional support, it typically provides standardized responses that only simulate empathy. This raises ethical questions regarding:

  • Authenticity & Transparency
  • Potential for Manipulation
  • Parasocial Bonds
  • Overestimation of Own Abilities through constant AI support

These aspects require careful design and responsibility.


Conclusion

The empathic, voice-based AI assistant did not lead to significant improvements in physiological recovery but was highly appreciated by participants on a subjective level.

Key Findings:

  • ✓ Significantly higher emotional support (H2 supported)
  • ✓ High User Acceptance (H3 supported, 85% agreement)
  • ✗ No physiological improvement (H1 not supported)
  • ~ Slight trend in sense of safety (H4 weak trend)

The results provide actionable insights for the development of emotionally responsive in-vehicle AI systems and emphasize the importance of conciseness, adaptivity, user control, and multimodal design.

Future systems should provide shorter, better-timed, and adaptive support, possibly integrated with real-time affect sensing.


Technical Details & Repository

The project included the development of a custom Fitbit application for precise heart rate measurement and synchronization with experimental events.

GitHub Repository: https://github.com/hd2386/fitbit

The application enabled:

  • Continuous HR data collection
  • Manual logging of start times (hour, minute, second) of each experimental phase
  • Precise temporal alignment of event markers with physiological data
  • Data retrieval via the Fitbit API
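The temporal alignment step can be sketched as a nearest-timestamp lookup: convert each manually logged (hour, minute, second) marker to seconds, then find the closest HR sample. The data layout below is assumed for illustration, not taken from the repository:

```python
from bisect import bisect_left

def hms_to_seconds(hour, minute, second):
    """Convert a manually logged start time to seconds since midnight."""
    return hour * 3600 + minute * 60 + second

def nearest_sample(timestamps, marker_s):
    """Index of the HR sample closest in time to an event marker.

    timestamps: sorted sample times in seconds.
    """
    i = bisect_left(timestamps, marker_s)
    if i == 0:
        return 0
    if i == len(timestamps):
        return len(timestamps) - 1
    # choose the neighbour with the smaller absolute time difference
    before, after = timestamps[i - 1], timestamps[i]
    return i if after - marker_s < marker_s - before else i - 1
```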

Team & Acknowledgments

Project Team:

  • Hamza Dursun
  • Selin Durmus
  • Mona Jeske

Special Thanks:

  • Claus Pfeilschifter for simulator support
  • Markus Weißenberger for technical support at CARISSMA
  • Prof. Dr. Ignacio Alvarez (Technische Hochschule Ingolstadt) for valuable feedback

Institution: Technische Hochschule Ingolstadt (THI), Esplanade 10, 85049 Ingolstadt, Germany


This project was conducted as part of the course FWS/SDUT 2025 at the Technische Hochschule Ingolstadt.

The complete bibliography and detailed method description can be found in the full Research Paper.

Keywords: Empathic voice assistant, driver stress, in-vehicle interaction, affective computing, user study, human–AI interaction, physiological sensing, heart rate