Abstract
Medical reports often contain technical vocabulary, abbreviations, and jargon that act as barriers to patient comprehension, with consequences for health literacy and decision-making. This article describes the design and evaluation of an AI system that automatically simplifies medical reports, generates visual aids, and supports interactive question answering. The system includes OCR for text extraction, NLP pipelines for terminology simplification, and a Q&A module built on large language models (LLMs). Three models (ChatGPT, ClinicalBERT, and DeepSeek) were evaluated comparatively on four tasks: term extraction, explanation quality, term-to-image mapping, and dialogue relevance. DeepSeek performed best overall, achieving an F1-score of 0.92, a BLEU score of 0.84, and an 88% visual-mapping success rate. A hybrid pipeline integrating BioBERT with generative LLMs improved accuracy by 12% over single-model baselines. These findings indicate that combining domain-specific extractors with generative models is an effective approach to enhancing patient-focused medical communication.