Learning Well in the Age of AI: Why Verified Course Materials Remain Central to Student Success
Generative artificial intelligence has become a routine feature of the academic workflow in higher education. Tools such as ChatGPT, Claude, Gemini, and Perplexity are now widely used by students for studying, writing, and exam preparation, with peer-reviewed research documenting rapid adoption across institutions, disciplines, and regions (Wang et al., 2024; Hon, 2026).
This shift raises substantive questions for the sector. What does effective learning look like when generative AI can produce a passable essay in seconds? How should students evaluate the information AI systems provide? And what role do traditional course materials play in an environment where information is more accessible, but not necessarily more reliable, than at any prior point in higher education?
The peer-reviewed literature does not yet offer a single answer, but it does establish a consistent set of findings. Generative AI offers genuine educational benefits when used appropriately. It also introduces well-documented risks when used as a substitute for engagement with course content. The most defensible position emerging from this body of research is one of complementarity: AI tools and verified course materials serve fundamentally different functions, and student outcomes appear to depend heavily on how the two are combined. From this evidence, Textbook Brokers takes the position that high-quality, instructor-selected course materials remain a critical foundation for student success in an AI-driven environment.
The Expanding Role of AI in Higher Education
Recent literature confirms that AI adoption in higher education has accelerated rapidly. A 2024 systematic review published in Expert Systems with Applications documented AI's growing influence across instructional delivery, personalized learning, assessment, and student engagement (Wang et al., 2024). A 2026 systematic literature review in the Journal of Educational Technology Systems synthesized empirical studies on generative AI specifically and reported mixed findings: some studies identified gains in engagement and performance, while others documented concerns, including over-reliance on AI and uneven effectiveness across disciplines (Hon, 2026).
The benefits identified in this literature are real. AI systems can explain a concept in multiple ways, generate practice questions, summarize dense passages, and provide on-demand academic support outside of office hours. For students balancing coursework with employment, caregiving, or other obligations, these capabilities represent a meaningful expansion of access.
Documented Limitations of Generative AI
The same body of research, however, identifies clear limitations.
Reliability is the most extensively documented concern. Generative AI systems are known to produce hallucinations: confident, fluent output that is factually incorrect or entirely fabricated. In student-facing applications, hallucinations can manifest as fabricated citations with non-existent authors, false historical facts, invented statistics, and inaccurate explanations of complex concepts presented with apparent authority. Because such output reads as credible, students without strong domain knowledge may struggle to distinguish accurate information from fabrication.
A second concern, more subtle but increasingly well documented, involves the cognitive processes underlying durable learning. A 2024 study published in the British Journal of Educational Technology compared learners using ChatGPT with those receiving support from human experts and other resources. The authors found that while the ChatGPT group demonstrated short-term improvements in essay scores, their knowledge gain and transfer were not significantly different from those of other groups. More notably, the convenience of AI appeared to undermine engagement in the self-regulated learning processes that support deeper learning: planning, monitoring, and revision. The researchers warned that generative AI may promote dependence on technology and trigger what they termed metacognitive laziness (Fan et al., 2024).
Subsequent research has reinforced this concern. A 2025 systematic review on academic integrity found that students who substitute AI engagement for direct interaction with course content risk weakening foundational skills in reading comprehension, analysis, and independent reasoning, even as AI use enhances educational engagement and efficiency in other respects (Bittle & El-Gayar, 2025). A 2025 study, also published in the British Journal of Educational Technology, examined the effect of generative AI on authentic assessments and found that AI use can compromise the integrity of higher education assessments. The authors raised broader questions about the nature of knowledge and learning when AI-generated content is widely available (Kofinas, Tsay, & Pike, 2025).
A further consideration, frequently overlooked in discussions of AI capability, is alignment. AI systems do not know which framework a specific instructor expects students to apply, which chapters were emphasized in lecture, or which interpretive lens is preferred within a given discipline. AI is optimized to produce plausible output. Course success requires alignment with specific learning outcomes defined by faculty, and these are not equivalent objectives.
The Role of Verified Course Materials
Instructor-selected course materials occupy a distinct position in the learning ecosystem. Textbooks, courseware platforms, lab manuals, and digital learning resources are produced through editorial review, subject-matter expertise, and alignment with established curricular standards. They are not generated probabilistically. They are authored, reviewed, and revised by professionals accountable for accuracy.
Course materials also reflect intentional pedagogical sequencing. Concepts are introduced, reinforced, and assessed in a deliberate order. Faculty select specific texts because they align with course objectives, accreditation requirements, and the cognitive progression students need to develop mastery. No general-purpose AI system has access to this institutional context.
This alignment is particularly important at the assessment level. Students preparing for exams, clinical evaluations, licensure tests, or capstone projects benefit from materials that mirror the content and standards on which they will be evaluated. AI can help interpret and reinforce that material, but the underlying foundation comes from sources designed for the course itself.
A Complementary Model for Student Success
The most constructive framing emerging from current research is one of complementarity. A 2025 quasi-experimental study published in Forum for Linguistic Studies compared first-year university students receiving traditional writing instruction with those using an AI-augmented pedagogy. In the latter group, generative AI was used for lower-order tasks while students focused on analysis, evaluation, and reflection. The AI-augmented group demonstrated significantly greater gains on standardized critical thinking assessments. The authors concluded that when generative AI is integrated through deliberate scaffolding, it can enhance rather than hinder higher-order thinking (Hong, Vate-U-Lan, & Viriyavejakul, 2025). The critical variable, in other words, is not whether students use AI but how they use it.
Verified course materials provide the authoritative foundation: vetted content, structured progression, and alignment with faculty expectations. Generative AI offers flexible engagement: explanation, practice, summarization, and dialogue. Students who use AI to interrogate their course materials (asking it to clarify a passage, generate practice problems on a specific chapter, or quiz them before an exam) engage more deeply with the underlying content, not less.
This model also addresses a central concern in the academic integrity literature. When AI is used to engage with verified materials rather than to bypass them, it functions as a study aid rather than a shortcut. The learning still occurs. It is supported, not displaced.
Implications and Outlook
Higher education is entering a period in which information is abundant, but verification is comparatively scarce. In this environment, the value of materials that have been deliberately selected, reviewed, and aligned with academic standards arguably increases rather than decreases. Faculty expertise, institutional standards, and curated course resources continue to provide a foundation that complements emerging tools.
AI will continue to evolve, and its role in education will continue to expand. Based on the research synthesized above, the students best positioned to benefit from these developments appear to be those who treat AI as an instrument for engaging more deeply with their coursework rather than as a substitute for it. For colleges, universities, and the campus stores that support them, the implication is consistent across the literature: students should be equipped with the verified materials their courses require, alongside the literacy to use emerging tools responsibly.
This article was prepared by Textbook Brokers as part of an ongoing effort to share research-informed perspectives on issues affecting students and higher education. The synthesis and conclusions presented reflect Textbook Brokers' interpretation of the cited research. The cited studies do not themselves prescribe specific course material policies.
References
Bittle, K., & El-Gayar, O. (2025). Generative AI and academic integrity in higher education: A systematic review and research agenda. Information, 16(4), 296. https://doi.org/10.3390/info16040296
Fan, Y., Tang, L., Le, H., Shen, K., Tan, S., Zhao, Y., Shen, Y., Li, X., & Gašević, D. (2024). Beware of metacognitive laziness: Effects of generative artificial intelligence on learning motivation, processes, and performance. British Journal of Educational Technology, 56(2), 489–530. https://doi.org/10.1111/bjet.13544
Hon, K. L. (2026). Generative AI in higher education: A systematic review of its effects on learning outcomes and academic performance. Journal of Educational Technology Systems, 54(3), 537–560. https://doi.org/10.1177/00472395251400089
Hong, H., Vate-U-Lan, P., & Viriyavejakul, C. (2025). Cognitive offload instruction with generative AI: A quasi-experimental study on critical thinking gains in English writing. Forum for Linguistic Studies, 7(7), 325–334. https://doi.org/10.30564/fls.v7i7.10072
Kofinas, A. K., Tsay, C. H.-H., & Pike, D. (2025). The impact of generative AI on academic integrity of authentic assessments within a higher education context. British Journal of Educational Technology, 56(6), 2522–2549. https://doi.org/10.1111/bjet.13585
Wang, S., Wang, F., Zhu, Z., Wang, J., Tran, T., & Du, Z. (2024). Artificial intelligence in education: A systematic literature review. Expert Systems with Applications, 252, 124167. https://doi.org/10.1016/j.eswa.2024.124167