Training students’ AI-literacy in a pharmacy bachelor course

04 February 2026

Educational project


This project addresses the need to develop AI-literacy among first-year pharmacy students. Research indicates that pedagogically scaffolded tasks combining AI use with structured reflection on AI output can enhance students’ AI-literacy. This project therefore integrates such tasks into a larger course assignment over four weeks. The tasks target the three domains of AI-literacy: understanding, practical application, and critical appraisal. The impact of the teaching activities is evaluated with multiple-choice tests on AI-literacy administered before and after the intervention, attendance lists, and student interviews.

Background information

In a first-year pharmacy course, students complete a project that results in a written report and an oral presentation. For this project, each student evaluates the validity of a health-related claim using three peer-reviewed papers of their choice, and thus learns to critically assess and compare study findings. Students may use AI for targeted content generation in subtasks, but fully AI-generated work is forbidden. As these students have not yet been taught AI-literacy, uncritical and non-transparent AI use may occur. This project therefore introduces several teaching activities into the course to increase AI-literacy.

Aims

This project aims to answer the following research questions:

  • What is the effect of implementing weekly tasks on the AI-literacy of students in a first-year pharmacy course?
  • To what extent did factors other than the weekly tasks influence the students’ AI-literacy?

Project description

The AI-literacy teaching activities are embedded within the larger assignment over four weeks. The tasks target the three AI-literacy competencies: technical understanding of AI concepts, practical application, and critical appraisal of AI-generated content. For example, students ask ChatGPT for reflective feedback on their research questions. To answer the first research question, a survey of AI-literacy covering all three competence areas will be administered before and after the intervention. To answer the second, three potential co-factors (the extent of AI use, the extent of task compliance, and the extent of meeting attendance) will be measured and included in the analysis. These data will be complemented with a group interview with students.

References

  • Laupichler, M. C., Aster, A., Haverkamp, N., & Raupach, T. (2023). Development of the scale for the assessment of non-experts’ AI literacy: An exploratory factor analysis. Computers in Human Behavior Reports, 12, Article 100338. https://doi.org/10.1016/j.chbr.2023.100338
  • Styve, A., Virkki, O. T., & Naeem, U. (2024). Developing critical thinking practices interwoven with generative AI usage in an introductory programming course. In Proceedings of the 2024 IEEE Global Engineering Education Conference (EDUCON). https://doi.org/10.1109/EDUCON60312.2024.10578746
  • Rana, V., Verhoeven, B., & Sharma, M. (2025). Generative AI in design thinking pedagogy: Enhancing creativity, critical thinking, and ethical reasoning in higher education. Journal of University Teaching and Learning Practice, 22(4). https://doi.org/10.53761/tjse2f36
  • Jin, Y., Martinez-Maldonado, R., Gašević, D., & Yan, L. (2024). GLAT: The generative AI-literacy assessment test. arXiv. https://arxiv.org/abs/2411.00283
  • Hornberger, M., Bewersdorff, A., & Nerdel, C. (2023). What do university students know about artificial intelligence? Development and validation of an AI-literacy test. Computers and Education: Artificial Intelligence, 5, Article 100165. https://doi.org/10.1016/j.caeai.2023.100165
