AI vs. Physicians: Varicocele Embolization Patient Education

Source: frontiersin.org

Published on October 1, 2025

A study compared how well artificial intelligence (AI) models and clinical specialists inform patients regarding varicocele embolization. The goal was to build a foundation for future hybrid information systems combining AI and clinical expertise.

In a prospective, double-blind, randomized, controlled trial, three AI models (ChatGPT-4o, Gemini Pro, and Microsoft Copilot) and one interventional radiologist answered 25 frequently asked questions about varicocele embolization, gathered from Google Search trends, patient forums, and clinical experience. Two independent interventional radiologists rated each response, presented to them in random order, on a 5-point Likert scale for both academic accuracy and empathy.

Results

Gemini had the highest average scores for academic accuracy (4.09 ± 0.50) and empathetic communication (3.54 ± 0.59). Copilot followed with scores of 4.07 ± 0.46 for academic accuracy and 3.48 ± 0.53 for empathy. ChatGPT scored 3.83 ± 0.58 for academic accuracy and 2.92 ± 0.78 for empathy. The comparator physician scored 3.75 ± 0.41 for academic accuracy and 3.12 ± 0.82 for empathy.

ANOVA showed statistically significant differences among the groups in both academic accuracy (F = 6.181, p < 0.001, η² = 0.086) and empathy (F = 9.106, p < 0.001, η² = 0.122). The effect size was medium for academic accuracy and large for empathy.
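The reported η² values can be recovered from the F statistics under an assumed one-way ANOVA design with four groups (df_between = 3) and 200 ratings per metric (4 responders × 25 questions × 2 raters, giving df_error = 196); these group and sample counts are assumptions, not stated in the summary, though they reproduce the published η² values. A minimal sketch:

```python
def eta_squared(f, df_between, df_error):
    """Eta-squared expressed via the F statistic and its degrees of freedom:
    eta^2 = SS_between / SS_total = (F * df_between) / (F * df_between + df_error)."""
    return (f * df_between) / (f * df_between + df_error)

# Assumed design: 4 groups (df_between = 3), 200 ratings (df_error = 196).
print(round(eta_squared(6.181, 3, 196), 3))  # academic accuracy -> 0.086
print(round(eta_squared(9.106, 3, 196), 3))  # empathy -> 0.122
```

Both values match the effect sizes reported above, which is consistent with (but does not confirm) the assumed design.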

Conclusions

Expert evaluators rated the AI models, particularly Gemini, higher than the comparator physician in educating patients about varicocele embolization, with the top-performing models leading in both academic accuracy and empathetic communication. These initial results suggest that AI models could meaningfully strengthen patient education in interventional radiology and provide evidence supporting the development of hybrid patient education models.