Abstract
In healthcare communication, technical jargon in medical reports is hard for patients to understand. This lack of clarity can lead to unnecessary follow-up appointments and non-adherence to treatment, with adverse effects on patient outcomes. Despite its profound impact, few practical solutions exist for bridging this communication gap. To address this, we developed an AI-powered radiology report analyzer. Using Natural Language Processing, our tool identifies complex medical jargon in uploaded reports and provides simplified definitions along with relevant visual aids. Each identified term is clickable, and patients can pose follow-up questions through a chat-based interface, transforming static reports into dynamic, user-friendly experiences. We evaluated our tool on radiology reports from the NationalRad sample report collection, measuring readability, understandability, and actionability. Whereas the standard reports were, on average, at a university reading level and graded "difficult," our translated versions were at a seventh-grade reading level. Our tool's output scored significantly better on readability metrics: Flesch-Kincaid Grade Level (7.9 ± 1.2 vs 12.1 ± 0.3), Gunning-Fog Score (9.2 ± 1.1 vs 16.4 ± 0.2), and Coleman-Liau Index (9.9 ± 0.9 vs 15.6 ± 1.1), all statistically significant (P < .05). The translated reports also achieved higher PEMAT-P scores for understandability (93.8% vs 56.4%) and actionability (45.0% vs 30.6%). This pilot study demonstrates the effectiveness of our system in improving health literacy and enabling informed decision-making, thereby promoting patient-centered care. Future development will focus on expanding visual aid coverage to non-anatomic terms to further improve comprehension.
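The three readability scores reported above are defined by standard closed-form formulas over sentence, word, syllable, and letter counts. As a minimal sketch of how such scores can be computed, the snippet below implements the published Flesch-Kincaid, Gunning-Fog, and Coleman-Liau formulas; the tokenization and the vowel-group syllable counter are simplifying assumptions of this sketch, not the counting rules used in the study.

```python
import re

def _syllables(word: str) -> int:
    # Naive heuristic: count contiguous vowel groups. A production tool
    # would use a pronunciation dictionary; this is an illustrative assumption.
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def readability(text: str) -> dict:
    # Crude sentence/word tokenization (assumption of this sketch).
    sentences = max(1, len(re.findall(r"[.!?]+", text)))
    words = re.findall(r"[A-Za-z]+", text)
    n_words = max(1, len(words))
    n_syll = sum(_syllables(w) for w in words)
    letters = sum(len(w) for w in words)
    # Gunning-Fog counts "complex" words as those with 3+ syllables.
    complex_words = sum(1 for w in words if _syllables(w) >= 3)

    wps = n_words / sentences        # average words per sentence
    spw = n_syll / n_words           # average syllables per word
    L = 100 * letters / n_words      # letters per 100 words
    S = 100 * sentences / n_words    # sentences per 100 words

    return {
        "flesch_kincaid_grade": 0.39 * wps + 11.8 * spw - 15.59,
        "gunning_fog": 0.4 * (wps + 100 * complex_words / n_words),
        "coleman_liau": 0.0588 * L - 0.296 * S - 15.8,
    }
```

All three formulas yield lower values for shorter sentences and shorter words, which is why the simplified reports score lower (better) than the originals on every metric.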