The Role of Embarrassment in Preferences for AI Versus Human Medical Advisors

📌 PROJECT SCOPE

  • Project Context: Master’s-level research project for PSYC 5120: Research Methods at the University of Idaho.

  • Team: Three-member collaborative research group

  • Method: Experimental survey design with quantitative data analysis

  • Tools: Qualtrics (survey deployment), JASP (statistical analysis), Microsoft Word & Excel (documentation and data management)


Summary

This case study explores how anticipated embarrassment influences people’s preferences for seeking medical advice from an AI chatbot versus a human doctor. In a 2 × 2 between-subjects design, 32 participants imagined a poison ivy rash located either on the hands (low embarrassment) or on both the hands and groin (high embarrassment), and then considered consulting either an AI chatbot or a doctor. The key finding was a significant interaction: participants predicted the most embarrassment when imagining consulting a doctor about the more intimate condition (hands+groin), whereas embarrassment remained comparatively lower when consulting AI. Preference analyses further showed that participants most favored a hybrid approach—consulting AI first, then seeing a doctor if needed.


Our Process

To ensure collaboration and rigor, our team followed a structured workflow:

  • Collaborative Planning: We coordinated through a shared Word document for version control and transparency.

  • Virtual Meetings: Held Zoom sessions to refine research questions, finalize design details, and align on next steps.

  • Survey Distribution: Shared our Qualtrics survey with classmates, friends, and family to gather diverse responses.

  • Preliminary Research: Conducted a literature review, evaluated scholarly sources, took focused notes, and cited them accurately in our manuscript.

  • Division of Work: Split manuscript sections among team members, ensuring balanced contributions and expertise.

  • Continuous Communication: Maintained open channels for feedback and updates, enabling smooth integration of all components.

  • Final Deliverable: Synthesized findings, analyses, and references into a cohesive, APA-formatted manuscript.


Problem

Many people delay or avoid medical care when symptoms feel embarrassing or stigmatizing. Prior research shows that embarrassment reduces disclosure, which can lead to misdiagnosis and poorer health outcomes. With the rapid adoption of AI chatbots like ChatGPT, an open question remains:

Can AI reduce embarrassment in sensitive health situations and change how people prefer to seek medical advice?


Research Questions & Hypotheses

  • Does embarrassment vary by rash location (hands vs. hands+groin) and advisor (AI vs. doctor)?

  • Does embarrassment affect consultation preferences (AI-only, doctor-only, AI-first→then-doctor)?

Hypothesis 1: The higher-embarrassment scenario (hands+groin) will increase participants' likelihood of preferring the AI advisor.

Hypothesis 2: The higher-embarrassment scenario will decrease participants' likelihood of consulting a doctor.


Study Design

We conducted a 2 × 2 between-subjects experiment. Participants imagined developing a poison ivy rash and were randomly assigned to one of four conditions formed by crossing rash location (hands only vs. hands+groin) with advisor (AI vs. doctor). After reading the scenario, they rated anticipated embarrassment (0–10) and consultation preferences (1–7) for AI-only, doctor-only, and AI-first→then-doctor.

Measures

  • Embarrassment: Visual Analog Scale (0–10)

  • Preference Ratings (1–7 Likert):

    • AI only

    • Doctor only

    • AI first, then doctor

Analysis

We used:

  • Two-way ANOVA

    • Rash location × Advisor on embarrassment

  • Repeated Measures ANOVA

    • Preference type (AI-only, Doctor-only, AI-first-then-doctor)

  • Post-hoc paired t-tests for preference comparisons
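The analyses above were run in JASP; as a rough illustration, the same pipeline can be sketched in Python on synthetic data. Everything below (variable names, simulated means, seeds) is illustrative and not the study's actual dataset.

```python
# Sketch of the analysis pipeline on SYNTHETIC data: a 2x2 between-subjects
# ANOVA on embarrassment, then a paired t-test between two preference ratings.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from statsmodels.stats.anova import anova_lm
from scipy import stats

rng = np.random.default_rng(0)
n_per_cell = 8  # 32 participants across 4 cells, as in the study

# Simulate embarrassment (0-10 VAS) with an interaction-like pattern:
# only the doctor + hands_groin cell gets an elevated mean.
rows = []
for location in ("hands", "hands_groin"):
    for advisor in ("ai", "doctor"):
        mean = 6.5 if (location == "hands_groin" and advisor == "doctor") else 4.0
        for score in rng.normal(mean, 1.5, n_per_cell):
            rows.append({"location": location, "advisor": advisor,
                         "embarrassment": float(np.clip(score, 0, 10))})
df = pd.DataFrame(rows)

# Two-way ANOVA: rash location x advisor on embarrassment
model = smf.ols("embarrassment ~ C(location) * C(advisor)", data=df).fit()
table = anova_lm(model, typ=2)
print(table)

# Post-hoc paired t-test: hybrid (AI-first) vs. doctor-only preference (1-7)
doctor_only = rng.integers(2, 6, 31).astype(float)
ai_first = doctor_only + rng.normal(0.8, 1.0, 31)  # simulated hybrid advantage
t, p = stats.ttest_rel(ai_first, doctor_only)
print(f"t({len(doctor_only) - 1}) = {t:.2f}, p = {p:.3f}")
```

With 32 participants in four cells, the error term for the two-way ANOVA has 28 degrees of freedom, matching the F(1, 28) reported below.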


Key Findings

Embarrassment Depends on the Combination of Condition + Advisor

There was no main effect of rash location or advisor alone.

However, there was a significant interaction:

  • Participants imagining a doctor + groin rash reported the highest embarrassment of all groups.
    F(1, 28) = 4.43, p = .044, ηp² = .137

This means embarrassment is not just about the condition — it is about who you have to tell.
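As a consistency check, the reported effect size can be recovered directly from the F statistic and its degrees of freedom using the standard identity for partial eta squared:

```python
# Recover partial eta squared from the reported F test:
# eta_p^2 = (F * df_effect) / (F * df_effect + df_error)
F, df_effect, df_error = 4.43, 1, 28
eta_p_sq = (F * df_effect) / (F * df_effect + df_error)
print(round(eta_p_sq, 3))  # -> 0.137, matching the reported value
```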

Users Prefer a Hybrid Model

AI-only and doctor-only preferences did not differ overall.
However, participants significantly preferred:

  • AI first, then doctor over doctor only
    t(30) = –2.99, p = .017

This suggests people see AI as a low-barrier entry point, not a replacement for doctors.


Why Our Hypotheses Were Not Fully Supported

We expected embarrassment alone to push people toward AI.

Instead, embarrassment only increased when:

  • The condition was intimate and the advisor was human.

This means embarrassment is driven by anticipated social judgment, not just symptom sensitivity.


Impact & Outcome

This study highlights AI’s unique psychological value: reducing social evaluation pressure at the most vulnerable moment — the first disclosure.

For healthcare UX, this suggests:

  • AI should be designed as a safe first step, not an endpoint.

  • Systems should make it seamless to escalate from AI to a clinician.

What I’d Improve Next Time

  • Larger and more diverse sample

  • Real-world interactions with actual AI tools

  • Longitudinal study to see if AI reduces actual care avoidance, not just intentions

Final Takeaway

People don’t just want answers — they want to feel safe asking the question.
AI can remove the emotional friction that stops people from seeking care in the first place.