Abstract
What does it truly mean to learn with a machine, and are machines capable of engaging in dialogic learning? Generative AI models, capable of producing text, images, or other content in response to prompts, are rapidly reshaping educational discourse by introducing 'scalable' forms of personalised learning, while also raising challenges around academic integrity and the need to redefine what critical thinking entails in the context of learning with AI. More recently, features within popular Generative AI models, such as ChatGPT's "Study and Learn" mode (which guides learners with questions instead of simply giving answers) and Google Gemini's "Learn Your Way" (which transforms textbooks into interactive, AI-driven study guides), are being marketed on the promise of more conversational, personalised learning experiences that are fine-tuned for learning based on educational research and principles. Within higher education, the growing presence of these systems demands deeper exploration. Are they genuinely expanding the possibilities for dialogue and feedback, or quietly reshaping the conditions of academic exchange? As practitioners, we must ask not only how these systems work, but also why they are used, and for whom. Do they stimulate enquiry, or do they replace the productive discomfort of genuine dialogue with frictionless interactions that risk remaining superficial (Tang et al., 2024; Wu et al., 2025)?